Score refinement for confidence-based 3D multi-object tracking

Our video gives a brief explanation of our method.

This is the official code for the paper:

Score refinement for confidence-based 3D multi-object tracking,
Nuri Benbarka, Jona Schröder, Andreas Zell,
arXiv technical report (arXiv 2107.04327)

@article{benbarka2021score,
    title={Score refinement for confidence-based 3D multi-object tracking},
    author={Benbarka, Nuri and Schr{\"o}der, Jona and Zell, Andreas},
    journal={arXiv preprint arXiv:2107.04327},
    year={2021}
}

It also contains the code of the B.Sc. thesis:

Learning score update functions for confidence-based MOT, Anouar Gherri,

@article{gherri2021learning,
    title = {Learning score update functions for confidence-based MOT},
    author = {Gherri, Anouar},
    year = {2021}        
}

Contact

Feel free to contact us with any questions!

Nuri Benbarka [email protected],

Jona Schröder [email protected],

Anouar Gherri [email protected],

Abstract

Multi-object tracking is a critical component in autonomous navigation, as it provides valuable information for decision-making. Many researchers have tackled the 3D multi-object tracking task by filtering frame-by-frame 3D detections; however, their focus was mainly on finding useful features or proper matching metrics. Our work focuses on a neglected part of the tracking system: score refinement and tracklet termination. We show that manipulating scores depending on time consistency while terminating tracklets depending on the tracklet score improves tracking results. We do this by increasing the scores of matched tracklets with score update functions and decreasing the scores of unmatched tracklets. Compared to count-based methods, our method consistently produces better AMOTA and MOTA scores with various detectors and filtering algorithms on different datasets, with improvements of up to 1.83 points in AMOTA and 2.96 points in MOTA. We also used our method as a late-fusion ensembling method, and it outperformed voting-based ensemble methods by a solid margin. It achieved an AMOTA score of 67.6 on the nuScenes test evaluation, which is comparable to other state-of-the-art trackers.
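
To make the idea concrete, below is a minimal Python sketch of score refinement with a multiplicative update and score-based termination. It is illustrative only and not the repository's implementation; the decay rate and termination threshold are assumed values.

# Minimal sketch of confidence-based score refinement and termination.
# Illustrative only: the decay rate and termination threshold are assumptions,
# not the values used in the CBMOT code.

def update_matched(track_score, det_score):
    # Multiplicative update: the track becomes more confident when matched.
    return 1.0 - (1.0 - track_score) * (1.0 - det_score)

def update_unmatched(track_score, decay=0.1):
    # Decrease the score of a track that found no match in this frame.
    return max(track_score - decay, 0.0)

def step(track_scores, matches, det_scores, termination_threshold=0.05):
    # One tracking step: refine scores, then terminate low-score tracks
    # instead of counting missed frames.
    for track_id, det_id in matches.items():
        track_scores[track_id] = update_matched(track_scores[track_id], det_scores[det_id])
    for track_id in set(track_scores) - set(matches):
        track_scores[track_id] = update_unmatched(track_scores[track_id])
    return {tid: s for tid, s in track_scores.items() if s >= termination_threshold}

# Example: track 1 is matched to a 0.7-confidence detection, track 2 is unmatched.
print(step({1: 0.6, 2: 0.08}, matches={1: 0}, det_scores=[0.7]))
# {1: 0.88} (approximately); track 2 fell below the threshold and was terminated.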

Results

NuScenes

Detector | Split | Update function | Modality | AMOTA | AMOTP | MOTA
CenterPoint | Val | - | LiDAR | 67.3 | 57.4 | 57.3
CenterTrack | Val | - | Camera | 17.8 | 158.0 | 15.0
CenterPoint | Val | Multiplication | LiDAR | 68.8 | 58.9 | 60.2
CenterPoint + CenterTrack | Val | Multiplication | Fusion | 72.1 | 53.3 | 58.5
CenterPoint + CenterTrack | Val | Neural network | Fusion | 72.0 | 48.7 | 58.2

The results differ from those reported in the paper because we optimized the NUSCENE_CLS_VELOCITY_ERROR thresholds and used the newer detection results from CenterPoint.
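
For reference, NUSCENE_CLS_VELOCITY_ERROR is a per-class distance threshold (in meters) used when matching detections to tracklets, as in CenterPoint's tracking code. The values below are only placeholders to show the shape of this mapping; the tuned values live in the CBMOT code.

# Illustrative per-class matching thresholds (meters); placeholder values only.
NUSCENE_CLS_VELOCITY_ERROR = {
    'car': 4.0,
    'truck': 4.0,
    'bus': 5.5,
    'trailer': 3.0,
    'pedestrian': 1.0,
    'motorcycle': 13.0,
    'bicycle': 3.0,
}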

Installation

# basic python libraries
conda create --name CBMOT python=3.7
conda activate CBMOT
git clone https://github.com/cogsys-tuebingen/CBMOT.git
cd CBMOT
pip install -r requirements.txt

Create a folder called data to place the dataset in. Download the nuScenes dataset and prepare it as instructed in the nuScenes devkit. Then create a symbolic link that points to the prepared dataset.

mkdir data
cd data
ln -s  LINK_TO_NUSCENES_DATA_SET ./nuScenes
cd ..

Create a folder named resources.

mkdir resources

Download the detections/tracklets and place them in the resources folder. We used CenterPoint detections (LiDAR) and CenterTrack tracklets (camera). If you don't want to run CenterTrack yourself, we provide the tracklets here. For the experiment with the learned score update function, please download the network's weights from here.

Usage

We provide a bash script, Results.sh, that reproduces the results table above. Running it takes approximately 4 hours.

bash Results.sh

Learning the score update function model

In the directory learning_score_update_function:

  • open lsuf_train
  • set CMOT_path to the path of your CBMOT project
  • run the file to generate the model from the best results (a rough sketch of such a model is shown below)
  • feel free to experiment with different parameters yourself
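
The following is a rough sketch, assuming PyTorch, of what a learned score update function can look like: a tiny network that maps the current track score and the matched detection score to a refined score. The class name and layer sizes are illustrative assumptions; see lsuf_train for the actual model and training setup.

import torch
import torch.nn as nn

class ScoreUpdateMLP(nn.Module):
    # Tiny MLP that replaces a hand-crafted update function:
    # it maps (track_score, detection_score) to a refined score in [0, 1].
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # keep the refined score in [0, 1]
        )

    def forward(self, track_score, det_score):
        x = torch.stack([track_score, det_score], dim=-1)
        return self.net(x).squeeze(-1)

# Usage: refine the score of a matched track.
model = ScoreUpdateMLP()
refined = model(torch.tensor([0.6]), torch.tensor([0.7]))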

Acknowledgment

This project would not be possible without several great open-source codebases. We list some notable examples below.

CBMOT is deeply influenced by the following projects. Please consider citing the relevant papers.

@article{zhu2019classbalanced,
  title={Class-balanced Grouping and Sampling for Point Cloud 3D Object Detection},
  author={Zhu, Benjin and Jiang, Zhengkai and Zhou, Xiangxin and Li, Zeming and Yu, Gang},
  journal={arXiv:1908.09492},
  year={2019}
}

@article{lang2019pillar,
   title={PointPillars: Fast Encoders for Object Detection From Point Clouds},
   journal={CVPR},
   author={Lang, Alex H. and Vora, Sourabh and Caesar, Holger and Zhou, Lubing and Yang, Jiong and Beijbom, Oscar},
   year={2019},
}

@inproceedings{yin2021center,
  title={Center-based 3d object detection and tracking},
  author={Yin, Tianwei and Zhou, Xingyi and Krahenbuhl, Philipp},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11784--11793},
  year={2021}
}

@article{zhou2020tracking,
  title={Tracking Objects as Points},
  author={Zhou, Xingyi and Koltun, Vladlen and Kr{\"a}henb{\"u}hl, Philipp},
  journal={arXiv:2004.01177},
  year={2020}
}

@inproceedings{weng20203d,
  title={3d multi-object tracking: A baseline and new evaluation metrics},
  author={Weng, Xinshuo and Wang, Jianren and Held, David and Kitani, Kris},
  booktitle={2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={10359--10366},
  year={2020},
  organization={IEEE}
}

@article{chiu2020probabilistic,
  title={Probabilistic 3D Multi-Object Tracking for Autonomous Driving},
  author={Chiu, Hsu-kuang and Prioletti, Antonio and Li, Jie and Bohg, Jeannette},
  journal={arXiv preprint arXiv:2001.05673},
  year={2020}
}
