Real-time Joint Semantic Reasoning for Autonomous Driving

Overview

MultiNet

MultiNet is able to jointly perform road segmentation, car detection and street classification. The model achieves real-time speed and state-of-the-art performance in segmentation. Check out our paper for a detailed model description.

MultiNet is optimized to perform well at real-time speed. It has two components: KittiSeg, which sets a new state-of-the-art in road segmentation, and KittiBox, which improves over the baseline Faster-RCNN in both inference speed and detection performance.

The model is designed as an encoder-decoder architecture. It utilizes one shared VGG encoder and an independent decoder for each task. This repository contains generic code that combines several TensorFlow models in one network. The code for the individual tasks is provided by the KittiSeg, KittiBox, and KittiClass repositories, which are utilized as submodules in this project. This project is built to be compatible with the TensorVision backend, which allows experiments to be organized in a very clean way.
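The shared-encoder design can be illustrated with a minimal TensorFlow sketch (illustrative only; the layer sizes and decoder heads below are simplified placeholders, not the actual MultiNet architecture):

    import tensorflow as tf

    def encoder(images):
        # Shared VGG-style feature extractor. A minimal stand-in for the
        # VGG16 encoder used by MultiNet; filter counts are placeholders.
        net = images
        for i, filters in enumerate([64, 128, 256]):
            net = tf.layers.conv2d(net, filters, 3, padding='same',
                                   activation=tf.nn.relu, name='conv%d' % i)
            net = tf.layers.max_pooling2d(net, 2, 2, name='pool%d' % i)
        return net

    def segmentation_decoder(features, num_classes=2):
        # FCN-style head: 1x1 score layer followed by bilinear upsampling.
        logits = tf.layers.conv2d(features, num_classes, 1, name='seg_score')
        shape = tf.shape(logits)
        return tf.image.resize_images(logits, [shape[1] * 8, shape[2] * 8])

    def detection_decoder(features):
        # Hypothetical regression head predicting box parameters per cell.
        return tf.layers.conv2d(features, 6, 1, name='det_score')

    images = tf.placeholder(tf.float32, [None, 384, 1248, 3])
    features = encoder(images)            # the encoder is computed only once
    seg = segmentation_decoder(features)  # each decoder attaches to the
    det = detection_decoder(features)     # same shared feature map

Because the expensive encoder runs only once per image, adding a decoder for an additional task costs little extra inference time.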

Requirements

The code requires Python 2.7 and TensorFlow 1.0, as well as the following Python libraries:

  • matplotlib
  • numpy
  • Pillow
  • scipy
  • runcython
  • commentjson

These modules can be installed with pip install numpy scipy pillow matplotlib runcython commentjson or with pip install -r requirements.txt.

Setup

  1. Clone this repository: git clone https://github.com/MarvinTeichmann/MultiNet.git
  2. Initialize all submodules: git submodule update --init --recursive
  3. Run cd submodules/KittiBox/submodules/utils/ && make to build the Cython code
  4. [Optional] Download Kitti Road Data:
    1. Retrieve kitti data url here: http://www.cvlibs.net/download.php?file=data_road.zip
    2. Call python download_data.py --kitti_url URL_YOU_RETRIEVED
  5. [Optional] Run cd submodules/KittiBox/submodules/KittiObjective2/ && make to build the Kitti evaluation code (see submodules/KittiBox/submodules/KittiObjective2/README.md for more information)

Running the model using demo.py only requires you to perform steps 1-3. Steps 4 and 5 are only required if you want to train your own model using train.py. Note that I recommend using download_data.py instead of downloading the data yourself; the script will also extract and prepare the data (see the sketch below). See Section Manage Data Storage if you would like to control where the data is stored.
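The core of what download_data.py does can be sketched as follows (a simplified, hypothetical sketch; the actual script also prepares the data for the individual submodels):

    import argparse
    import os
    import urllib   # Python 2.7, matching the requirements above
    import zipfile

    def maybe_download_and_extract(kitti_url, data_dir):
        # Fetch the Kitti road archive once and unpack it into data_dir.
        if not os.path.exists(data_dir):
            os.makedirs(data_dir)
        zip_file = os.path.join(data_dir, 'data_road.zip')
        if not os.path.exists(zip_file):
            urllib.urlretrieve(kitti_url, zip_file)
        zipfile.ZipFile(zip_file, 'r').extractall(data_dir)

    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument('--kitti_url', required=True)
        args = parser.parse_args()
        # $TV_DIR_DATA keeps data outside the repository, see below.
        maybe_download_and_extract(args.kitti_url,
                                   os.environ.get('TV_DIR_DATA', 'DATA'))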

To update MultiNet do:
  1. Pull all patches: git pull
  2. Update all submodules: git submodule update --init --recursive

If you forget the second step, you might end up with an inconsistent repository state: you will already have the new MultiNet code but run it with old submodule code. This can work, but I do not run any tests to verify this.

Tutorial

Getting started

Run: python demo.py --gpus 0 --input data/demo/um_000005.png to obtain a prediction using data/demo/um_000005.png as input.

Run: python evaluate.py to evaluate a trained model.

Run: python train.py --hypes hypes/multinet2.json to train MultiNet2.

If you would like to understand the code, I recommend looking at demo.py first; I have documented each step in this file as thoroughly as possible.
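Conceptually, the demo performs a single forward pass through the joint graph. The following sketch shows the idea with plain TensorFlow; the tensor and checkpoint names here are assumptions for illustration, not the names used by the actual code:

    import numpy as np
    import scipy.misc
    import tensorflow as tf

    # Load the demo image (shape: height x width x 3).
    image = scipy.misc.imread('data/demo/um_000005.png')

    sess = tf.Session()
    # Rebuild the joint graph and restore the trained weights.
    saver = tf.train.import_meta_graph('RUNS/MultiNet/model.ckpt.meta')
    saver.restore(sess, 'RUNS/MultiNet/model.ckpt')

    graph = tf.get_default_graph()
    image_pl = graph.get_tensor_by_name('image:0')
    seg_op = graph.get_tensor_by_name('segmentation/softmax:0')
    det_op = graph.get_tensor_by_name('detection/boxes:0')

    # One sess.run evaluates all decoders on the shared encoder output.
    seg, det = sess.run([seg_op, det_op],
                        feed_dict={image_pl: np.expand_dims(image, 0)})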

Only training of MultiNet3 (joint detection and segmentation) is supported out of the box. The data used to train the classification model is not public, so it cannot be used to train the full MultiNet3 (detection, segmentation and classification). The full code is provided here, so you can still train the full MultiNet3 if you have your own data.

Manage Data Storage

MultiNet allows you to separate data storage from code. This is very useful in many server environments. By default, the data is stored in the folder MultiNet/DATA and the output of runs in MultiNet/RUNS. This behaviour can be changed by setting the bash environment variables $TV_DIR_DATA and $TV_DIR_RUNS.

Include export TV_DIR_DATA="/MY/LARGE/HDD/DATA" in your .profile and all data will be downloaded to /MY/LARGE/HDD/DATA/. Include export TV_DIR_RUNS="/MY/LARGE/HDD/RUNS" in your .profile and all runs will be saved to /MY/LARGE/HDD/RUNS/MultiNet.
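In code, this convention amounts to an environment lookup with an in-repository fallback (a minimal sketch of the convention, not the exact TensorVision source):

    import os

    # Fall back to the in-repository folders when the variables are unset.
    data_dir = os.environ.get('TV_DIR_DATA', 'DATA')  # MultiNet/DATA
    runs_dir = os.environ.get('TV_DIR_RUNS', 'RUNS')  # MultiNet/RUNS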

Modifying Model & Train on your own data

The model is controlled by the file hypes/multinet3.json. This file points the code to the implementation of the submodels, as in the snippet below. The MultiNet code then loads all models provided and integrates the decoders into one neural network. To train on your own data, it should be enough to modify the hypes files of the submodels. A good starting point is the KittiSeg model, which is very well documented.

    "models": {
        "segmentation" : "../submodules/KittiSeg/hypes/KittiSeg.json",
        "detection" : "../submodules/KittiBox/hypes/kittiBox.json",
        "road" : "../submodules/KittiClass/hypes/KittiClass.json"
    },
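The loading step can be sketched as follows. This is hypothetical code: the actual project uses commentjson (listed in the requirements) rather than plain json, and the model/architecture_file key is an assumption based on the TensorVision conventions:

    import imp
    import json
    import os

    def load_submodels(hypes):
        # Load each submodel's hypes file and its architecture module so
        # the decoders can be attached to the shared encoder.
        models = {}
        for task, hypes_path in hypes['models'].items():
            with open(hypes_path) as f:
                sub_hypes = json.load(f)
            base_dir = os.path.dirname(hypes_path)
            arch_file = os.path.join(base_dir,
                                     sub_hypes['model']['architecture_file'])
            models[task] = imp.load_source('arch_%s' % task, arch_file)
        return models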

RUNDIR and Experiment Organization

MultiNet helps you to organize a large number of experiments. To do so, the output of each run is stored in its own rundir. Each rundir contains:

  • output.log: a copy of the training output which was printed to your screen
  • tensorflow events: TensorBoard can be run in rundir
  • tensorflow checkpoints: the trained model can be loaded from rundir
  • [dir] images: a folder containing example output images; image_iter controls how often the whole validation set is dumped
  • [dir] model_files: a copy of all source code needed to build the model. This can be very useful if you have many versions of the model.

To keep track of all the experiments, you can give each rundir a unique name with the --name flag. The --project flag will store the run in a separate subfolder, allowing you to run different series of experiments. As an example, python train.py --project batch_size_bench --name size_5 will use the following dir as rundir: $TV_DIR_RUNS/KittiSeg/batch_size_bench/size_5_KittiSeg_2017_02_08_13.12.
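The resulting directory name can be reproduced with a few lines (a sketch of the naming scheme shown above; the exact format string is TensorVision's, not reproduced verbatim here):

    import datetime
    import os

    def build_rundir(runs_dir, project, name, model='KittiSeg'):
        # Compose <runs_dir>/<project>/<name>_<model>_<timestamp>,
        # matching the example above.
        date = datetime.datetime.now().strftime('%Y_%m_%d_%H.%M')
        return os.path.join(runs_dir, project,
                            '%s_%s_%s' % (name, model, date))

    print(build_rundir('RUNS/KittiSeg', 'batch_size_bench', 'size_5'))
    # e.g. RUNS/KittiSeg/batch_size_bench/size_5_KittiSeg_2017_02_08_13.12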

The flag --nosave is very useful to avoid cluttering your rundir with debug runs.

Useful Flags & Variables

Here are some flags that will be useful when working with KittiSeg and TensorVision. All flags are available across all scripts; a sketch of how they are declared follows the list below.

--hypes : specify which hype-file to use
--logdir : specify which logdir to use
--gpus : specify on which GPUs to run the code
--name : assign a name to the run
--project : assign a project to the run
--nosave : debug run, logdir will be set to debug
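With TensorFlow 1.x such flags are typically declared via tf.app.flags, roughly like this (a sketch; the default values here are illustrative, not the actual definitions in the MultiNet scripts):

    import tensorflow as tf

    flags = tf.app.flags
    flags.DEFINE_string('hypes', 'hypes/multinet2.json',
                        'File storing model parameters.')
    flags.DEFINE_string('logdir', None, 'Path to logdir.')
    flags.DEFINE_string('gpus', None, 'Which GPUs to run the code on.')
    flags.DEFINE_string('name', None, 'Append a name tag to the run.')
    flags.DEFINE_string('project', None, 'Put the run into a project folder.')
    flags.DEFINE_boolean('nosave', False, 'Debug run, logdir is set to debug.')
    FLAGS = flags.FLAGS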

In addition, the following TensorVision environment variables will be useful:

$TV_DIR_DATA: specify meta directory for data
$TV_DIR_RUNS: specify meta directory for output
$TV_USE_GPUS: specify default GPU behaviour.

On a cluster it is useful to set $TV_USE_GPUS=force. This will make the flag --gpus mandatory and ensure that runs are executed on the right GPU.
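A sketch of how the force behaviour can be implemented (illustrative, not the exact TensorVision code):

    import os
    import sys

    def set_gpus_to_use(gpus_flag):
        # Refuse to start without an explicit --gpus when forced, then pin
        # the visible devices so the run lands on the requested GPU.
        if os.environ.get('TV_USE_GPUS', '') == 'force' and gpus_flag is None:
            sys.exit('Error: --gpus is mandatory when TV_USE_GPUS=force.')
        if gpus_flag is not None:
            os.environ['CUDA_VISIBLE_DEVICES'] = gpus_flag

    set_gpus_to_use('0')  # value of the --gpus flag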

Citation

If you benefit from this code, please cite our paper:

@article{teichmann2016multinet,
  title={MultiNet: Real-time Joint Semantic Reasoning for Autonomous Driving},
  author={Teichmann, Marvin and Weber, Michael and Zoellner, Marius and Cipolla, Roberto and Urtasun, Raquel},
  journal={arXiv preprint arXiv:1612.07695},
  year={2016}
}