Differentiable Factor Graph Optimization for Learning Smoothers @ IROS 2021

Figure: the overall training pipeline proposed in our IROS paper. Five stages, arranged left to right: (1) system models, (2) factor graphs for state estimation, (3) MAP inference, (4) state estimates, and (5) errors with respect to ground truth. Gradients are backpropagated from right to left, from the final error back to the parameters of the system models.

Overview

Code release for our IROS 2021 conference paper:

Brent Yi1, Michelle A. Lee1, Alina Kloss2, Roberto Martín-Martín1, and Jeannette Bohg1. Differentiable Factor Graph Optimization for Learning Smoothers. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2021.

1Stanford University, {brentyi,michellelee,robertom,bohg}@cs.stanford.edu
2Max Planck Institute for Intelligent Systems, [email protected]


This repository contains models, training scripts, and experimental results. It can be used either to reproduce our results or as a reference for implementation details.
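
To make the pipeline in the figure above concrete, here is a deliberately simplified, self-contained sketch. It is not the paper's models or this repository's code: MAP inference for a toy linear-Gaussian smoother reduces to a linear solve, the estimate is compared against ground truth, and jax.grad backpropagates the error through the solve to the noise parameters of the system model.

import jax
import jax.numpy as jnp


def map_smoother(log_noise_params, observations):
    # Toy MAP inference: states x_t with a random-walk prior and direct
    # observations y_t = x_t + noise. For this linear-Gaussian problem the MAP
    # estimate is the solution of a linear system (the normal equations).
    log_obs_prec, log_dyn_prec = log_noise_params
    obs_prec = jnp.exp(log_obs_prec)  # observation precision (1 / variance)
    dyn_prec = jnp.exp(log_dyn_prec)  # dynamics (smoothness) precision

    T = observations.shape[0]
    D = jnp.eye(T)[1:] - jnp.eye(T)[:-1]  # (D @ x)[t] = x[t + 1] - x[t]
    A = obs_prec * jnp.eye(T) + dyn_prec * D.T @ D
    b = obs_prec * observations
    return jnp.linalg.solve(A, b)


def loss(log_noise_params, observations, ground_truth):
    # Error of the MAP estimate with respect to ground truth.
    estimate = map_smoother(log_noise_params, observations)
    return jnp.mean((estimate - ground_truth) ** 2)


# End-to-end gradients of the estimation error with respect to the noise model,
# computed by differentiating through MAP inference.
params = jnp.array([0.0, 0.0])
observations = jnp.array([0.1, 0.9, 2.2, 2.8, 4.1])
ground_truth = jnp.array([0.0, 1.0, 2.0, 3.0, 4.0])
print(jax.grad(loss)(params, observations, ground_truth))

In the repository itself, the system models are learned, the factor graphs are built and optimized with jaxfg, and training uses the losses exposed by the training scripts (--loss {JOINT_NLL,SURROGATE_LOSS}) rather than this toy mean squared error.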

Significant chunks of the code written for this paper have been factored out of this repository and released as standalone libraries, which may be useful for building on our work. You can find each of them linked here (a brief usage sketch follows the list):

  • jaxfg is our core factor graph optimization library.
  • jaxlie is our Lie theory library for working with rigid body transformations.
  • jax_dataclasses is our library for building JAX pytrees as dataclasses. It's similar to flax.struct, but has workflow improvements for static analysis and nested structures.
  • jax-ekf contains our EKF implementation.
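
As a quick illustration of how a couple of these libraries fit together, here is a hedged sketch based on their public APIs; it is not code taken from this repository, and names like NoisyPose are purely illustrative.

import jax.numpy as jnp
import jax_dataclasses as jdc
import jaxlie


@jdc.pytree_dataclass
class NoisyPose:
    # A JAX pytree defined as a dataclass; both fields are traced through
    # JAX transformations.
    pose: jaxlie.SE3
    covariance: jnp.ndarray


# Build and compose rigid transforms with jaxlie, then apply one to a point.
T_world_a = jaxlie.SE3.from_rotation_and_translation(
    rotation=jaxlie.SO3.from_rpy_radians(0.0, 0.0, jnp.pi / 2.0),
    translation=jnp.array([1.0, 0.0, 0.0]),
)
T_a_b = jaxlie.SE3.exp(jnp.zeros(6))  # identity transform via the exponential map
T_world_b = T_world_a @ T_a_b
point_in_world = T_world_b.apply(jnp.array([0.0, 1.0, 0.0]))

state = NoisyPose(pose=T_world_b, covariance=jnp.eye(6))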

Status

Included in this repo for the disk task:

  • Smoother training & results
    • Training: python train_disk_fg.py --help
    • Evaluation: python cross_validate.py --experiment-paths ./experiments/disk/fg/**/
  • Filter baseline training & results
    • Training: python train_disk_ekf.py --help
    • Evaluation: python cross_validate.py --experiment-paths ./experiments/disk/ekf/**/
  • LSTM baseline training & results
    • Training: python train_disk_lstm.py --help
    • Evaluation: python cross_validate.py --experiment-paths ./experiments/disk/lstm/**/

And, for the visual odometry task:

  • Smoother training & results (including ablations)
    • Training: python train_kitti_fg.py --help
    • Evaluation: python cross_validate.py --experiment-paths ./experiments/kitti/fg/**/
  • EKF baseline training & results
    • Training: python train_kitti_ekf.py --help
    • Evaluation: python cross_validate.py --experiment-paths ./experiments/kitti/ekf/**/
  • LSTM baseline training & results
    • Training: python train_kitti_lstm.py --help
    • Evaluation: python cross_validate.py --experiment-paths ./experiments/kitti/lstm/**/

Note that **/ indicates a recursive glob in zsh. In bash 4+, this can be emulated by enabling the globstar option (shopt -s globstar).

We've done our best to make our research code easy to parse, but it's still being iterated on! If you have questions, suggestions, or any general comments, please reach out or file an issue.

Setup

We use Python 3.8 and miniconda for development.

  1. Any calls to CHOLMOD (via scikit-sparse; sometimes used for evaluation, but never for training itself) will require SuiteSparse:

    # Mac
    brew install suite-sparse
    
    # Debian
    sudo apt-get install -y libsuitesparse-dev
  2. Dependencies can be installed via pip:

    pip install -r requirements.txt

    In addition to JAX and the first-party dependencies listed above, note that this also includes various other helpers:

    • datargs (currently forked) is super useful for building type-safe argument parsers.
    • torch's Dataset and DataLoader interfaces are used for training; a sketch of this pattern appears at the end of this section.
    • fannypack contains utilities for downloading datasets, working with PDB, and polling repository commit hashes.

The requirements.txt provided will install the CPU version of JAX by default. For CUDA support, please see instructions from the JAX team.
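
Below is a minimal sketch of the PyTorch-feeds-JAX data-loading pattern mentioned above. It is illustrative only: the repository's actual Dataset classes live under lib/ and differ in detail. PyTorch handles batching and shuffling, while a NumPy collate function keeps batches framework-agnostic so they can be handed to JAX.

import numpy as np
import torch.utils.data


class ToyTrajectoryDataset(torch.utils.data.Dataset):
    # Hypothetical dataset yielding (observation, ground-truth state) pairs.
    def __init__(self, num_samples: int = 128):
        self.observations = np.random.randn(num_samples, 3).astype(np.float32)
        self.states = np.random.randn(num_samples, 2).astype(np.float32)

    def __len__(self) -> int:
        return len(self.observations)

    def __getitem__(self, index):
        return self.observations[index], self.states[index]


def numpy_collate(batch):
    # Stack samples into NumPy arrays instead of torch tensors, so that each
    # batch can be passed to JAX (e.g., via jnp.asarray) without conversion.
    observations, states = zip(*batch)
    return np.stack(observations), np.stack(states)


loader = torch.utils.data.DataLoader(
    ToyTrajectoryDataset(), batch_size=32, shuffle=True, collate_fn=numpy_collate
)
for observations, states in loader:
    pass  # each element here is already a NumPy array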

Datasets

Datasets are synced from Google Drive and loaded via h5py automatically as needed. If you're interested in downloading them manually, see lib/kitti/data_loading.py and lib/disk/data_loading.py.
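
If you do want to poke at the downloaded files by hand, a quick way to inspect an HDF5 file with h5py is sketched below; the path is a placeholder, and the real filenames and keys are defined in the data loading modules above.

import h5py

# "path/to/dataset.hdf5" is a placeholder; see lib/*/data_loading.py for the
# files that are actually downloaded and the keys they contain.
with h5py.File("path/to/dataset.hdf5", "r") as f:
    f.visit(print)                      # print every group/dataset name
    # data = f["some/dataset/key"][:]   # read a dataset into a NumPy array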

Training

The naming convention for training scripts is as follows: train_{task}_{model_type}.py.

All of the training scripts provide a command-line interface for configuring experiment details and hyperparameters. The --help flag will summarize these settings and their default values. For example, to see the available options for factor graph training on the disk task, try:

> python train_disk_fg.py --help

Factor graph training script for disk task.

optional arguments:
  -h, --help            show this help message and exit
  --experiment-identifier EXPERIMENT_IDENTIFIER
                        (default: disk/fg/default_experiment/fold_{dataset_fold})
  --random-seed RANDOM_SEED
                        (default: 94305)
  --dataset-fold {0,1,2,3,4,5,6,7,8,9}
                        (default: 0)
  --batch-size BATCH_SIZE
                        (default: 32)
  --train-sequence-length TRAIN_SEQUENCE_LENGTH
                        (default: 20)
  --num-epochs NUM_EPOCHS
                        (default: 30)
  --learning-rate LEARNING_RATE
                        (default: 0.0001)
  --warmup-steps WARMUP_STEPS
                        (default: 50)
  --max-gradient-norm MAX_GRADIENT_NORM
                        (default: 10.0)
  --noise-model {CONSTANT,HETEROSCEDASTIC}
                        (default: CONSTANT)
  --loss {JOINT_NLL,SURROGATE_LOSS}
                        (default: SURROGATE_LOSS)
  --pretrained-virtual-sensor-identifier PRETRAINED_VIRTUAL_SENSOR_IDENTIFIER
                        (default: disk/pretrain_virtual_sensor/fold_{dataset_fold})

When run, training scripts serialize experiment configurations to an experiment_config.yaml file. The experiments/ directory contains the hyperparameters for all results presented in our paper.

Evaluation

All evaluation metrics are recorded at training time. The cross_validate.py script can be used to compute metrics across folds:

# Summarize all experiments with means and standard errors of recorded metrics.
python cross_validate.py

# Include statistics for every fold -- this is much more data!
python cross_validate.py --disaggregate

# We can also glob for a partial set of experiments; for example, all of the
# disk experiments.
# Note that the ** wildcard may fail in bash; see above for a fix.
python cross_validate.py --experiment-paths ./experiments/disk/**/

Acknowledgements

We'd like to thank Rika Antonova, Kevin Zakka, Nick Heppert, Angelina Wang, and Philipp Wu for discussions and feedback on both our paper and codebase. Our software design also benefits from ideas from several open-source projects, including Sophus, GTSAM, Ceres Solver, minisam, and SwiftFusion.

This work is partially supported by the Toyota Research Institute (TRI) and Google. This article solely reflects the opinions and conclusions of its authors and not TRI, Google, or any entity associated with TRI or Google.
