Official PyTorch Implementation of Unsupervised Learning of Scene Flow Estimation Fusing with Local Rigidity

Overview

UnRigidFlow

This is the official PyTorch implementation of UnRigidFlow (IJCAI 2019).

Here are two sample results (each GIF is roughly 10 MB) of our unsupervised models:

[Sample GIF: KITTI 2015]    [Sample GIF: Cityscapes]

If you find this repo useful in your research, please consider citing:

@inproceedings{Liu:2019:unrigid,
  title     = {Unsupervised Learning of Scene Flow Estimation Fusing with Local Rigidity},
  author    = {Liang Liu and Guangyao Zhai and Wenlong Ye and Yong Liu},
  booktitle = {International Joint Conference on Artificial Intelligence, IJCAI},
  year      = {2019}
}

Requirements

This codebase was developed and tested with Python 3.5, PyTorch >= 0.4.1, OpenCV 3.4, CUDA 9.0, and Ubuntu 16.04.

Most of the Python packages can be installed with:

pip3 install -r requirements.txt

In addition, the optimized correlation layer with its CUDA kernel must be compiled manually:

cd <correlation_package>
python3 setup.py install

Then add <correlation_package> to your $PYTHONPATH.
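
For example, a minimal sketch (replace <correlation_package> with the actual directory of the compiled package in your checkout):

export PYTHONPATH=$PYTHONPATH:<correlation_package>

Adding this line to your shell profile (e.g. ~/.bashrc) keeps the setting across sessions.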

Note that if you use PyTorch >= 1.0, you need to make a few changes to the correlation package source; see NVIDIA/flownet2-pytorch#98. In short:

  • replace #include <torch/torch.h> with #include <torch/extension.h>;
  • add #include <ATen/cuda/CUDAContext.h>;
  • replace every occurrence of at::globalContext().getCurrentCUDAStream() with at::cuda::getCurrentCUDAStream().

Training and Evaluation

We mainly focus on the KITTI benchmark. To train the models, you will need to download all of the KITTI raw data along with the calibration files. You will also need the training sets of KITTI 2012 and KITTI 2015, with their calibration files [1], [2], to validate the models.

The complete training consists of three steps:

  1. Train the flow model separately:

    python3 train.py -c configs/KITTI_flow.json
    
  2. Train the depth model separately:

    python3 train.py -c configs/KITTI_depth_stereo.json
    
  3. Train the flow and depth models jointly:

    python3 train.py -c configs/KITTI_rigid_flow_stereo.json
    

For evaluation, just add the --e option to the above commands and modify the corresponding model path.
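
For example, evaluating the separately trained flow model could look like the following (a sketch based on the training commands above; we assume the path to the trained checkpoint is set inside configs/KITTI_flow.json, so edit that entry first):

python3 train.py -c configs/KITTI_flow.json --e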

Pre-trained Models

You can download our pre-trained models; we provide the following:

  • KITTI_flow: The separately trained optical flow network on KITTI raw data (from scratch)
  • KITTI_stereo_depth: The stereo depth network on KITTI raw data.
  • KITTI_flow_joint: The optical flow network jointly trained with stereo depth on KITTI raw data.

Acknowledgement

This repository borrows some snippets from several great works, including PWC-Net, monodepth, UnFlow, UnDepthFlow, and DF-Net. Although most of them are TensorFlow implementations, we are grateful to the authors for sharing their work, which saved us a lot of time.
