Multi-View Radar Semantic Segmentation

Related tags

Deep Learning, MVRSS
Overview

Paper

[Figure: teaser schema]

Multi-View Radar Semantic Segmentation, ICCV 2021.

Arthur Ouaknine, Alasdair Newson, Patrick Pérez, Florence Tupin, Julien Rebut

This repository groups the implementations of the MV-Net and TMVA-Net architectures proposed in the paper by Ouaknine et al.

The models are trained and tested on the CARRADA dataset.

The CARRADA dataset is available on Arthur Ouaknine's personal web page at this link: https://arthurouaknine.github.io/codeanddata/carrada.

If you find this code useful for your research, please cite our paper:

@misc{ouaknine2021multiview,
      title={Multi-View Radar Semantic Segmentation},
      author={Arthur Ouaknine and Alasdair Newson and Patrick Pérez and Florence Tupin and Julien Rebut},
      year={2021},
      eprint={2103.16214},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Installation with Docker

It is strongly recommended to use Docker with the provided Dockerfile, which contains all the dependencies.

  1. Clone the repo:
$ git clone https://github.com/ArthurOuaknine/MVRSS.git
  2. Create the Docker image:
$ cd MVRSS/
$ docker build . -t "mvrss:Dockerfile"

Note: by default, the CARRADA dataset used for training and testing is assumed to be already downloaded. If that is not the case, you can uncomment the corresponding command lines in the Dockerfile or follow the guidelines of the dedicated repository.

  3. Run a container and join an interactive session. The option -v /host_path:/local_path mounts a volume (a shared memory space) between the host machine and the Docker container, so that data (logs and datasets) are not copied. You will be able to run the code in this session:
$ docker run -d --ipc=host -it -v /host_machine_path/datasets:/home/datasets_local -v /host_machine_path/logs:/home/logs --name mvrss --gpus all mvrss:Dockerfile sleep infinity
$ docker exec -it mvrss bash

Installation without Docker

You can either use Docker with the provided Dockerfile containing all the dependencies, or follow these steps.

  1. Clone the repo:
$ git clone https://github.com/ArthurOuaknine/MVRSS.git
  2. Install this repository using pip:
$ cd MVRSS/
$ pip install -e .

With this editable install, you can edit the MVRSS code on the fly and import MVRSS functions and classes into other projects as well.
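
As a quick check that the editable install works (a minimal sketch; the top-level package name mvrss is assumed from the repository layout):

$ python -c "import mvrss; print(mvrss.__file__)"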

  3. Install all the dependencies using pip and conda. Please take a look at the Dockerfile for the list and versions of the dependencies.

  4. Optional: to uninstall this package, run:

$ pip uninstall MVRSS

You can take a look at the Dockerfile if you are uncertain about the steps needed to install this project.

Running the code

In any case, you must first specify both the path where the CARRADA dataset is located and the path where the logs and models will be stored. For example, if the Carrada folder is placed in /home/datasets_local, then /home/datasets_local is the path to specify; the same goes for the logs, e.g. /home/logs. Please run the following command lines, adapting the paths to your setup:

$ cd MVRSS/mvrss/utils/
$ python set_paths.py --carrada /home/datasets_local --logs /home/logs
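
If you want to sanity-check the paths before running set_paths.py, a small hypothetical helper such as the following can be used (it is not part of the repository; the Carrada subfolder name follows the example above):

import os

# Hypothetical check: both directories should exist before being recorded
for path in ("/home/datasets_local/Carrada", "/home/logs"):
    print(path, "OK" if os.path.isdir(path) else "MISSING")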

Training

In order to train a model, a JSON configuration file must be set. The configuration file corresponding to the selected parameters for training the TMVA-Net architecture is provided here: MVRSS/mvrss/config_files/tmvanet.json. To train the TMVA-Net architecture, please run the following command lines:

$ cd MVRSS/mvrss/
$ python train.py --cfg config_files/tmvanet.json

If you want to train the MV-Net architecture (baseline), please use the corresponding configuration file: mvnet.json.
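
Since the configuration files are plain JSON, you can also inspect or script changes to the training parameters before launching a run, for example from MVRSS/mvrss/ (a sketch; the keys inside the file are project-specific and not reproduced here):

$ python -c "import json; print(json.dumps(json.load(open('config_files/tmvanet.json')), indent=2))"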

Testing

To test a recorded model, you must specify the path to the configuration file recorded in your log folder during training. For example, if you want to test a model and your log path has been set to /home/logs, you should specify the following path: /home/logs/carrada/tmvanet/name_of_the_model/config.json. Then execute the following command lines:

$ cd MVRSS/mvrss/
$ python test.py --cfg /home/logs/carrada/tmvanet/name_of_the_model/config.json

Note: the current implementation of this script generates qualitative results in your log folder. You can disable this behavior by setting get_quali=False in the parameters of the predict() method of the Tester() class.
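
For reference, the change described in this note amounts to editing the call site in test.py along these lines (a sketch only; the surrounding arguments passed to predict() are hypothetical placeholders, not the actual signature):

# Hypothetical call site in mvrss/test.py
tester = Tester(cfg)                               # cfg: the recorded config.json, loaded
tester.predict(net, test_loader, get_quali=False)  # disable qualitative results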

Acknowledgements

License

The MVRSS repo is released under the Apache 2.0 license.

Comments
  • Sensor set up

    Hi, in Section 2.1 of the paper ("Automotive radar sensing"), you say that:

    "With conventional FMCW radars the RAD tensor is usually not available as it is too computing intensive to estimate."

    So what is the difference between a conventional FMCW radar and other FMCW radars?

    In addition, what camera and radar sensor setup was used for the CARRADA dataset? And is the network's processing time (ms) low enough for online, on-road use?

    Thank you, I hope you can give me some advice.

    opened by enting8696 1
  • metrics calculation on some frames without foreground pixels

    Hi, I have a question about the calculation of some metrics, including IoU, Dice, precision, and recall. In your code, I think you add all the frames' confusion matrices together to obtain the metrics. But I found that the dataset contains some frames without any foreground pixels, for example:

    [Screenshot: a frame without any foreground pixels]

    A frame without foreground pixels gives a value of 0 for the above metrics, so I am afraid the performance of the model is actually underestimated. I wonder if it would be more reasonable to exclude frames without foreground pixels? (See the sketch after this comment.)

    opened by james20141606 1
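
    For context, a minimal generic sketch of how per-class IoU and Dice follow from an accumulated confusion matrix, illustrating the zero-value issue raised in this comment (this is an illustration, not the actual mvrss metric code):

    import numpy as np

    def iou_dice_from_confusion(cm):
        """Per-class IoU and Dice from a (num_classes x num_classes) confusion matrix."""
        tp = np.diag(cm).astype(float)
        fp = cm.sum(axis=0) - tp  # predicted as class c, labeled as another class
        fn = cm.sum(axis=1) - tp  # labeled as class c, predicted as another class
        iou = tp / np.maximum(tp + fp + fn, 1e-12)
        dice = 2 * tp / np.maximum(2 * tp + fp + fn, 1e-12)
        return iou, dice

    # A frame containing only background (class 0) has empty foreground rows and
    # columns, so its per-frame foreground IoU/Dice are 0; accumulating matrices
    # over the whole split (rather than averaging per frame) sidesteps this.
    cm = np.zeros((4, 4), dtype=int)
    cm[0, 0] = 100  # every pixel is background and predicted as background
    print(iou_dice_from_confusion(cm))
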
  • test results.

    Thanks for your great work. When I use your pretrained weights in test.py, I only get an mIoU of 58.2 in the test_result.json file, which is 12 percentage points worse than the metrics in the result.json file. Can you help me resolve this confusion?

    opened by sutiankang 0
Releases: v0.1

Owner

valeo.ai: an international team based in Paris, conducting AI research for Valeo automotive applications, in collaboration with world-class academics.