ESL: Event-based Structured Light

Overview

Video (click on the image)

This is the code for the 2021 3DV paper ESL: Event-based Structured Light by Manasi Muglikar, Guillermo Gallego, and Davide Scaramuzza.

Citation

A PDF of the paper is available here. If you use this code in an academic context, please cite the following work:

@InProceedings{Muglikar213DV,
  author = {Manasi Muglikar and Guillermo Gallego and Davide Scaramuzza},
  title = {ESL: Event-based Structured Light},
  booktitle = {{IEEE} International Conference on 3D Vision (3DV)},
  month = {Dec},
  year = {2021}
}

Installation

 conda create -y -n ESL python=3
 conda activate ESL
 conda install -y numba
 conda install -y -c anaconda numpy scipy
 conda install -y -c conda-forge h5py opencv tqdm matplotlib pyyaml pylops
 conda install -y -c open3d-admin -c conda-forge open3d
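
To confirm the environment resolved correctly, the key packages can be import-checked in one line (a minimal sanity check, not part of the original instructions):

 python -c "import numpy, scipy, h5py, cv2, tqdm, matplotlib, yaml, pylops, open3d, numba; print('environment OK')"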

Data pre-processing

The recordings are available in numpy file format here. You can download the city_of_lights events file from here. Please unzip it and ensure the data is organized as follows:

-dataset
  calib.yaml
  -city_of_lights/
    -scans_np/
      -cam_ts00000.npy
      .
      .
      .
      -cam_ts00060.npy

Each numpy file contains the camera time map for one projector scan, normalized to the range [0, 1]. [Figure: example time map for the city_of_lights sequence]
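
For example, a time map can be loaded and inspected with numpy (a minimal sketch; the exact array shape depends on the sensor resolution):

    import numpy as np

    # Load the camera time map for the first projector scan
    # (path follows the directory layout above).
    time_map = np.load("dataset/city_of_lights/scans_np/cam_ts00000.npy")

    print(time_map.shape)                  # sensor resolution, e.g. (H, W)
    print(time_map.min(), time_map.max())  # values lie in [0, 1]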

The calibration file for our setup, data/calib.yaml, follows the OpenCV yaml format.
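
Since it is an OpenCV-format YAML file, it can be parsed with cv2.FileStorage. The node names below are illustrative assumptions only; inspect data/calib.yaml for the actual keys:

    import cv2

    # Open the calibration file in OpenCV's YAML format.
    fs = cv2.FileStorage("dataset/calib.yaml", cv2.FILE_STORAGE_READ)

    # Node names are assumptions for illustration, not the file's real keys.
    K = fs.getNode("camera_matrix").mat()   # 3x3 intrinsic matrix
    dist = fs.getNode("dist_coeffs").mat()  # distortion coefficients
    fs.release()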

Depth computation

To compute depth from the numpy files use the script below:

    python python/compute_depth.py -object_dir=dataset/static/city_of_lights/ -calib=dataset/calib.yaml -num_scans 1

The estimated depth will be saved as numpy files in the depth_dir/esl_dir subfolder of the dataset directory. The estimated depth for the city_of_lights dataset can be visualized using the visualization script visualize_depth.py.
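
The exact flags of visualize_depth.py are not documented here; by analogy with compute_depth.py, an invocation along the following lines is plausible (an assumption, not a verified command):

    python python/visualize_depth.py -object_dir=dataset/static/city_of_lights/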

Evaluation

We evaluate the performance for static sequences using two metrics with respect to ground truth: root mean square error (RMSE) and Fill-Rate (i.e., completion).

python python/evaluate.py -object_dir=dataset/static/city_of_lights

The output should look as follows:

Average scene depth:  105.47189659236103
============================Stats=============================
========== ESL stats ==============
Fill rate: 0.9178120881189983
RMSE: 1.160292387864739
=======================================================================
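
For reference, the two metrics can be computed from an estimated and a ground-truth depth map roughly as follows (a minimal sketch with a hypothetical helper, not the repository's evaluate.py):

    import numpy as np

    def depth_metrics(depth_est, depth_gt):
        """Return RMSE over jointly valid pixels and fill rate w.r.t. ground truth."""
        # Valid pixels have finite, positive depth.
        gt_valid = np.isfinite(depth_gt) & (depth_gt > 0)
        both = gt_valid & np.isfinite(depth_est) & (depth_est > 0)

        # RMSE on pixels where both maps are valid.
        rmse = np.sqrt(np.mean((depth_est[both] - depth_gt[both]) ** 2))
        # Fill rate: fraction of ground-truth pixels that received an estimate.
        fill_rate = both.sum() / gt_valid.sum()
        return rmse, fill_rate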

Additional resources on Event Cameras
