es-imagenet-master

Overview

Code for generating the ES-ImageNet dataset, together with the corresponding training code.

dataset generator

  • Code for the ODG algorithm that converts ImageNet images into event streams.
  • Before running a conversion, modify the variables datapath (output path for the converted data, which must be created beforehand) and root_Path (root directory of the original training set); see the sketch after the table below.
| File | Function |
| --- | --- |
| traconvert.py | Converts the ILSVRC 2012 training set into event streams using ODG. |
| trainlabel_dir.txt | Stores the mapping between the original ImageNet class names and their labels. |
| trainlabel.txt | Generated during conversion; stores the training-set labels. |
| valconvert.py | Conversion code for the validation/test set. |
| valorigin.txt | Original validation labels; must be placed in the same folder as valconvert.py. |
| vallabel.txt | Generated during conversion; stores the validation-set labels. |
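For illustration, a minimal sketch of how these two variables might be set near the top of traconvert.py or valconvert.py before running a conversion; the path values are placeholders, not taken from the repository:

```python
# Hypothetical path configuration (placeholders only; adjust to your own setup).
import os

root_Path = '/data/ILSVRC2012/train'   # root directory of the original ImageNet training set
datapath  = '/data/ES-ImageNet/train'  # output directory for the converted event-stream data

# The README notes that the output directory must already exist before conversion starts.
os.makedirs(datapath, exist_ok=True)
```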

dataset usage

  • The dataset-loading code is in ./datasets.
  • Some training examples for ES-ImageNet are provided in ./example.

An example of using this dataset with PyTorch:
from __future__ import print_function
import sys
sys.path.append("..")
from datasets.es_imagenet_new import ESImagenet_Dataset
import torch.nn as nn
import torch

data_path = None   # TODO: modify to the root of the converted dataset
batch_size = 32    # choose a batch size that fits your GPUs

train_dataset = ESImagenet_Dataset(mode='train', data_set_path=data_path)
test_dataset  = ESImagenet_Dataset(mode='test', data_set_path=data_path)

# DistributedSampler assumes torch.distributed has been initialized,
# e.g. when launching with torch.distributed.launch as shown below.
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
test_sampler  = torch.utils.data.distributed.DistributedSampler(test_dataset)

train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=False,
                                           num_workers=1, pin_memory=True, drop_last=True, sampler=train_sampler)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False,
                                          num_workers=1, pin_memory=True)

for batch_idx, (inputs, targets) in enumerate(train_loader):
    pass
    # inputs shape: [batch_size, time, channel, width, height]

for batch_idx, (inputs, targets) in enumerate(test_loader):
    pass
    # inputs shape: [batch_size, time, channel, width, height]
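As a usage illustration, a minimal sketch of one training epoch over the loaders above; the model, optimizer, and device are placeholders rather than the repository's actual training scripts (see ./example for those):

```python
import torch
import torch.nn as nn

def train_one_epoch(model, optimizer, loader, device='cuda'):
    """Sketch only: assumes `model` consumes the full
    [batch, time, channel, width, height] tensor returned by the loader."""
    criterion = nn.CrossEntropyLoss()
    model.train()
    for inputs, targets in loader:
        inputs = inputs.to(device).float()
        targets = targets.to(device).long().view(-1)
        optimizer.zero_grad()
        outputs = model(inputs)              # placeholder forward pass
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
    return model
```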

training example and benchmarks

Requirements

  • Python >= 3.5
  • PyTorch >= 1.7
  • CUDA >= 10.0
  • TensorBoardX (optional)

Train the baseline models

$ cd example

$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 example_ES_res18.py #LIAF/LIF-ResNet-18
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 example_ES_res34.py #LIAF/LIF-ResNet-34
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 compare_ES_3DCNN34.py #3DCNN-ResNet-34
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 compare_ES_3DCNN18.py #3DCNN-ResNet-18
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 compare_ES_2DCNN34.py #2DCNN-ResNet-34 
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 compare_ES_2DCNN18.py #2DCNN-ResNet-18
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 compare_CONVLSTM.py #ConvLSTM (not used in the paper)
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 example_ES_res50.py #LIAF/LIF-ResNet-50 (not used in the paper)

**Note:** To select LIF mode, change `self.actFun = torch.nn.LeakyReLU(0.2, inplace=False)` to `self.actFun = LIAF.LIFactFun.apply` in the config files under /LIAFnet, as sketched below.
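For reference, a hedged sketch of what this switch looks like inside a config class under /LIAFnet; the class name and surrounding fields here are illustrative, and only the actFun assignments are taken from the repository:

```python
import torch
import LIAF  # the repository's LIAF module, assumed importable from the project root

class ExampleConfig:  # hypothetical name; the real config classes live under /LIAFnet
    def __init__(self, use_lif=False):
        if use_lif:
            self.actFun = LIAF.LIFactFun.apply                    # LIF mode
        else:
            self.actFun = torch.nn.LeakyReLU(0.2, inplace=False)  # LIAF mode (default) #nexttest:selu
```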

baseline / Benchmark

| Network | Layer type | Test Acc/% | # of Params | FP32+/GFLOPs | FP32x/GFLOPs |
| --- | --- | --- | --- | --- | --- |
| ResNet18 | 2D-CNN | 41.030 | 11.68M | 1.575 | 1.770 |
| ResNet18 | 3D-CNN | 38.050 | 28.56M | 12.082 | 12.493 |
| ResNet18 | LIF | 39.894 | 11.69M | 12.668 | 0.269 |
| ResNet18 | LIAF | 42.544 | 11.69M | 12.668 | 14.159 |
| ResNet34 | 2D-CNN | 42.736 | 21.79M | 3.211 | 3.611 |
| ResNet34 | 3D-CNN | 39.410 | 48.22M | 20.671 | 21.411 |
| ResNet34 | LIF | 43.424 | 21.80M | 25.783 | 0.288 |
| ResNet18+imagenet-pretrain (a) | LIF | 43.74 | 11.69M | 12.668 | 0.269 |
| ResNet34 | LIAF | 47.466 | 21.80M | 25.783 | 28.901 |
| ResNet18+self-pretrain | LIAF | 50.54 | 11.69M | 12.668 | 14.159 |
| ResNet18+imagenet-pretrain (b) | LIAF | 52.25 | 11.69M | 12.668 | 14.159 |
| ResNet34+imagenet-pretrain (c) | LIAF | 51.83 | 21.80M | 25.783 | 28.901 |

Note: models (a), (b), and (c) are stored in ./pretrained_model.

Download

  • The ES-ImageNet dataset (100 GB) used in this study can be downloaded from Tsinghua Cloud or OpenI.

  • The converted event-frame version (40 GB) can be found on Tsinghua Cloud.

Citation

If you use this dataset for research, please cite it. Here is an example BibTeX entry:

@misc{lin2021esimagenet,
    title={ES-ImageNet: A Million Event-Stream Classification Dataset for Spiking Neural Networks},
    author={Yihan Lin and Wei Ding and Shaohua Qiang and Lei Deng and Guoqi Li},
    year={2021},
    eprint={2110.12211},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}