Multi-Glimpse Network With Python

Overview

Multi-Glimpse Network

Our code requires Python ≥ 3.8

Installation

For example, venv + pip:

$ python3 -m venv env
$ source env/bin/activate
(env) $ python3 -m pip install -r requirements.txt

Evaluation

Accuracy on clean images

  1. Create ImageNet100 from ImageNet (using symbolic links; a short Python sketch of the symlink idea follows the commands below).
$ python3 tools/create_imagenet100.py tools/imagenet100.txt \
    /path/to/ImageNet /path/to/ImageNet100
  2. Download checkpoints from Google Drive.

  3. Test accuracy.

$ export dataset="--train_dir /path/to/ImageNet100/train \
    --val_dir /path/to/ImageNet100/val \
    --dataset imagenet --num_class 100"
# Baseline
$ python3 main.py $dataset --test --n_iter 1 --scale 1.0  --model resnet18 \
    --checkpoint resnet18_baseline
# Ours
$ python3 main.py $dataset --test --n_iter 4 --scale 2.33 --model resnet18 \
    --checkpoint resnet18_ours --alpha 0.6 --s 0.02
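
The symbolic-link step (step 1 above) boils down to linking the 100 class folders named in tools/imagenet100.txt into a new dataset root. The sketch below only illustrates the idea; it assumes one class ID per line in imagenet100.txt and a one-folder-per-class layout for both splits, and is not a substitute for tools/create_imagenet100.py.

# Sketch of the symlink idea (hypothetical helper, for illustration only)
import os

def link_subset(class_list, src_root, dst_root, split):
    # Read the class IDs (assumed: one WordNet ID per line).
    with open(class_list) as f:
        classes = [line.strip() for line in f if line.strip()]
    os.makedirs(os.path.join(dst_root, split), exist_ok=True)
    for cls in classes:
        src = os.path.join(src_root, split, cls)
        dst = os.path.join(dst_root, split, cls)
        if not os.path.exists(dst):
            os.symlink(src, dst)  # links only, no images are copied

for split in ("train", "val"):
    link_subset("tools/imagenet100.txt", "/path/to/ImageNet", "/path/to/ImageNet100", split)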

Add the flag --flop_count to report the approximate FLOPs for inference on a single image (computed with fvcore).
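
If you want the same kind of estimate outside main.py, fvcore's FlopCountAnalysis can be called directly. The snippet below is a stand-alone sketch: the torchvision resnet18 and the 224x224 input size are stand-in assumptions, not the repository's checkpointed model.

# Sketch: approximate FLOPs for a single image with fvcore.
import torch
from fvcore.nn import FlopCountAnalysis
from torchvision.models import resnet18  # stand-in model (assumption)

model = resnet18().eval()
image = torch.randn(1, 3, 224, 224)  # one RGB image
flops = FlopCountAnalysis(model, image)
print(f"approx. {flops.total() / 1e9:.2f} GFLOPs")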

Accuracy on adversarial attacks (PGD)

  1. Test adversarial accuracy.
# Baseline
$ python3 main.py $dataset --test --n_iter 1 --scale 1.0  --adv --step_k 10 \
    --model resnet18 --checkpoint resnet18_baseline
# Ours
$ python3 main.py $dataset --test --n_iter 4 --scale 2.33 --adv --step_k 10 \
    --model resnet18 --checkpoint resnet18_ours --alpha 0.6 --s 0.02
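
Here --adv switches on the PGD evaluation, and --step_k presumably controls the number of attack steps. For context, the loop below is a minimal, generic L-infinity PGD sketch; the epsilon and step size are illustrative assumptions, not the settings used in main.py.

# Sketch: k-step L-infinity PGD (no random start), generic PyTorch.
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, step_k=10, eps=4/255, step_size=1/255):
    adv = images.clone().detach()
    for _ in range(step_k):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss, then project back into the eps-ball and valid pixel range.
        adv = adv.detach() + step_size * grad.sign()
        adv = images + torch.clamp(adv - images, -eps, eps)
        adv = torch.clamp(adv, 0.0, 1.0)
    return adv.detach()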

Accuracy on common corruptions

  1. Create ImageNet100-C from ImageNet-C (using symbolic links).
$ python3 tools/create_imagenet100c.py  \
    tools/imagenet100.txt  /path/to/ImageNet-C/ /path/to/ImageNet100-C/
  2. Test for a single corruption.
$ export dataset="--train_dir /path/to/ImageNet100/train \
    --val_dir /path/to/ImageNet100-C/pixelate/5 \
    --dataset imagenet --num_class 100"
# Baseline
$ python3 main.py $dataset --test --n_iter 1 --scale 1.0  --model resnet18 \
    --checkpoint resnet18_baseline
# Ours
$ python3 main.py $dataset --test --n_iter 4 --scale 2.33 --model resnet18 \
    --checkpoint resnet18_ours --alpha 0.6 --s 0.02
  3. A simple script to test all corruptions and collect results (a sketch of the generated loop follows the commands below).
# Modify tools/eval_imagenet100c.py and run it to generate the script
$ python3 tools/eval_imagenet100c.py /path/to/ImageNet100-C/ > run.sh
# Evaluate
$ bash run.sh
# Collect results
$ python3 tools/collect_imagenet100c.py
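
The generated run.sh is essentially a loop over the ImageNet-C corruption folders and severity levels. The sketch below shows the idea under two assumptions: the standard 15-corruption / 5-severity ImageNet-C layout, and the same evaluation flags as the single-corruption command above; it is not the actual output of tools/eval_imagenet100c.py.

# Sketch: emit one evaluation command per (corruption, severity) pair.
corruptions = [
    "gaussian_noise", "shot_noise", "impulse_noise", "defocus_blur", "glass_blur",
    "motion_blur", "zoom_blur", "snow", "frost", "fog", "brightness", "contrast",
    "elastic_transform", "pixelate", "jpeg_compression",
]
for corruption in corruptions:
    for severity in range(1, 6):
        print(
            "python3 main.py --train_dir /path/to/ImageNet100/train "
            f"--val_dir /path/to/ImageNet100-C/{corruption}/{severity} "
            "--dataset imagenet --num_class 100 "
            "--test --n_iter 4 --scale 2.33 --model resnet18 "
            "--checkpoint resnet18_ours --alpha 0.6 --s 0.02"
        )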

Training

$ export dataset="--train_dir /path/to/ImageNet100/train \
    --val_dir /path/to/ImageNet100/val \
    --dataset imagenet --num_class 100"
# Baseline
$ python3 main.py $dataset --epochs 400 --n_iter 1 --scale 1.0 \
    --model resnet18 --gpu 0,1,2,3
# Ours
$ python3 main.py $dataset --epochs 400 --n_iter 4 --scale 2.33 \
    --model resnet18 --alpha 0.6 --s 0.02  --gpu 0,1,2,3

Check TensorBoard for the logs. (When training with multiple GPUs, logged values other than the validation accuracy may be scaled by the number of GPUs.)

$ tensorboard --logdir=logs

Note that we left our exploratory code in place for further study, e.g., self-supervised spatial guidance and a dynamic gradient re-scaling operation.

Owner
LinkedIn: https://www.linkedin.com/in/sia-huat-tan-2bb6911a5/