Self-Supervised Learning of Event-based Optical Flow with Spiking Neural Networks

Work accepted at NeurIPS'21 [paper, video].

If you use this code in an academic context, please cite our work:

@article{hagenaarsparedesvalles2021ssl,
  title={Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural Networks},
  author={Hagenaars, Jesse and Paredes-Vall\'es, Federico and de Croon, Guido},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  year={2021}
}

This code allows for the reproduction of the experiments leading to the results in Section 4.1.

Usage

This project uses Python >= 3.7.3 and we strongly recommend the use of virtual environments. If you don't have an environment manager yet, we recommend pyenv. It can be installed via:

curl https://pyenv.run | bash

Make sure your ~/.bashrc file contains the following:

export PATH="$HOME/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

After that, restart your terminal and run:

pyenv update

To set up your environment with pyenv, first install the required Python version and make sure the installation is successful (i.e., no errors or warnings):

pyenv install -v 3.7.3

Once this is done, set up the environment and install the required libraries:

pyenv virtualenv 3.7.3 event_flow
pyenv activate event_flow

pip install --upgrade pip==20.0.2

cd event_flow/
pip install -r requirements.txt
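
To quickly verify that the installation succeeded, you can run a short sanity check from a Python shell. This is a minimal sketch assuming PyTorch is among the pinned requirements; the CUDA check will simply print False on CPU-only machines:

# Quick sanity check after installation (run inside the event_flow virtualenv).
# Assumes PyTorch is among the packages listed in requirements.txt.
import torch
print(torch.__version__, torch.cuda.is_available())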

Download datasets

In this work, we use multiple datasets for training and evaluation; the evaluation datasets used below are MVSEC, ECD, and HQF.

These datasets can be downloaded in the expected HDF5 data format from here, and are expected at event_flow/datasets/data/.

Download size: 19.4 GB. Uncompressed size: 94 GB.

Details about the structure of these files can be found in event_flow/datasets/tools/.
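
If you just want a quick look at what one of these HDF5 files contains before reading that documentation, the following sketch prints the group/dataset hierarchy of a file (the file name is a placeholder; substitute any file under event_flow/datasets/data/):

import h5py

# Illustrative: walk an HDF5 sequence file and print every group/dataset,
# together with the dataset shapes. The file name below is a placeholder.
with h5py.File("datasets/data/<sequence>.h5", "r") as f:
    def show(name, obj):
        shape = getattr(obj, "shape", "")
        print(f"{name} {shape}")
    f.visititems(show)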

Download models

The pretrained models can be downloaded from here, and are expected at event_flow/mlruns/.

In this project we use MLflow to keep track of the experiments. To visualize the models that are available, alongside other useful details and evaluation metrics, run the following from the home directory of the project:

mlflow ui

and access http://127.0.0.1:5000 from your browser of choice.
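
If you prefer to inspect the runs programmatically instead of through the web UI, here is a minimal sketch using the MLflow Python API. Run it from the project root so the local mlruns/ store is found; the experiment ID "0" is MLflow's default and may differ for your own runs:

import mlflow

# Illustrative: list run IDs, names, and statuses from the local mlruns/ store.
mlflow.set_tracking_uri("file:./mlruns")
runs = mlflow.search_runs(experiment_ids=["0"])
cols = [c for c in ("run_id", "tags.mlflow.runName", "status") if c in runs.columns]
print(runs[cols].to_string(index=False))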

Inference

To estimate optical flow from event sequences from the MVSEC dataset and compute the average endpoint error and percentage of outliers, run:

python eval_flow.py <model_name> --config configs/eval_MVSEC.yml

# for example:
python eval_flow.py LIFFireNet --config configs/eval_MVSEC.yml

where <model_name> is the name of the MLflow run to be evaluated. Note that, if a run does not have a name (as will be the case for your own trained models), you can evaluate it through its run ID (also visible in MLflow).

To estimate optical flow from event sequences from the ECD or HQF datasets, run:

python eval_flow.py <model_name> --config configs/eval_ECD.yml
python eval_flow.py <model_name> --config configs/eval_HQF.yml

# for example:
python eval_flow.py LIFFireNet --config configs/eval_ECD.yml

Note that the ECD and HQF datasets lack ground truth optical flow data. Therefore, we evaluate the quality of the estimated event-based optical flow via the self-supervised FWL (Stoffregen and Scheerlinck, ECCV'20) and RSAT (ours, Appendix C) metrics.
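
For intuition, FWL compares the contrast of the image of warped events (IWE) with that of the unwarped event image: if the estimated flow compensates the motion well, warping sharpens the image and the ratio exceeds 1. The snippet below is only a simplified numpy illustration of this idea (nearest-pixel accumulation, per-event flow vectors), not the implementation used in eval_flow.py:

import numpy as np

# Illustrative FWL sketch: ratio of the variance (contrast) of the image of
# warped events to that of the unwarped event image. Values > 1 indicate that
# the flow deblurs the event image, i.e., motion is well compensated.
def event_image(xs, ys, shape):
    img = np.zeros(shape, dtype=np.float32)
    rows = np.clip(np.round(ys).astype(int), 0, shape[0] - 1)
    cols = np.clip(np.round(xs).astype(int), 0, shape[1] - 1)
    np.add.at(img, (rows, cols), 1.0)
    return img

def fwl(xs, ys, ts, flow_x, flow_y, t_ref, shape):
    # Warp every event to the reference time t_ref along its flow vector.
    xs_w = xs + (t_ref - ts) * flow_x
    ys_w = ys + (t_ref - ts) * flow_y
    return event_image(xs_w, ys_w, shape).var() / event_image(xs, ys, shape).var()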

Results from these evaluations are stored as MLflow artifacts.

In configs/, you can find the configuration files associated with these scripts and vary the inference settings (e.g., number of input events, activate/deactivate visualization).
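
Before editing a configuration file, it can be convenient to dump its current settings. A minimal sketch, assuming PyYAML is available in the environment:

import yaml

# Print the settings exposed by one of the evaluation configs.
with open("configs/eval_MVSEC.yml", "r") as f:
    print(yaml.safe_dump(yaml.safe_load(f), default_flow_style=False))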

Training

Run:

python train_flow.py --config configs/train_ANN.yml
python train_flow.py --config configs/train_SNN.yml

to train a traditional artificial neural network (ANN, default: FireNet) or a spiking neural network (SNN, default: LIF-FireNet), respectively. In configs/, you can find the aforementioned configuration files and vary the training settings (e.g., model, number of input events, activate/deactivate visualization). For other available models, see models/model.py.

Note that we used a batch size of 8 in our experiments. Depending on your computational resources, you may need to lower this number.

During and after training, information about your run can be visualized through MLflow.
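
Beyond the web UI, the logged training curves can also be retrieved programmatically. A small sketch using the MLflow client; the run ID and metric name are placeholders, so check the MLflow UI for the names actually logged by train_flow.py:

from mlflow.tracking import MlflowClient

# Illustrative: fetch the full history of one logged metric for a given run.
client = MlflowClient(tracking_uri="file:./mlruns")
for m in client.get_metric_history("<run_id>", "loss"):
    print(m.step, m.value)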

Uninstalling pyenv

Once you finish using our code, you can uninstall pyenv from your system by:

  1. Removing the pyenv configuration lines from your ~/.bashrc.
  2. Removing its root directory. This will delete all Python versions that were installed under the $HOME/.pyenv/versions/ directory:
rm -rf $HOME/.pyenv/