
Online Multiple Object Tracking with Cross-Task Synergy

This repository is the implementation of the CVPR 2021 paper "Online Multiple Object Tracking with Cross-Task Synergy".

[Figure: Structure of TADAM]

Installation

Tested on python=3.8 with torch=1.8.1 and torchvision=0.9.1.

It should also be compatible with python>=3.6, torch>=1.4.0 and torchvision>=0.4.0. Not tested on lower versions.

1. Clone the repository

git clone https://github.com/songguocode/TADAM.git

2. Create conda env and activate

conda create -n TADAM python=3.8
conda activate TADAM

3. Install required packages

pip install torch torchvision scipy opencv-python yacs

All models are set to run on GPU, so make sure the graphics card driver and CUDA are properly installed.

To check whether torch is running with CUDA, run in Python:

import torch
torch.cuda.is_available()

CUDA is available if True is returned.

See PyTorch Official Site if torch is not installed or not working properly.

4. Clone MOTChallenge benchmark evaluation code

git clone https://github.com/JonathonLuiten/TrackEval.git

By now there should be two folders: TADAM and TrackEval.

Refer to MOTChallenge-Official for instructions.

Download the provided data.zip, unzip it as a folder named data, and copy it into TrackEval as TrackEval/data.

Move into the TADAM folder:

cd TADAM

5. Prepare MOTChallenge data

Download MOT16, MOT17, MOT17Det, and MOT20 and place them inside a datasets folder.
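After downloading, the datasets folder should look like this (a sketch assuming the default folder names):

datasets/
├── MOT16/
├── MOT17/
├── MOT17Det/
└── MOT20/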

There are two options to provide the datasets' location for training/testing:

  • a. Add a symbolic link inside the TADAM folder by ln -s path_of_datasets datasets
  • b. In TADAM/configs/config.py, assign __C.PATHS.DATASET_ROOT with path_of_datasets (see the sketch after this list)
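For option (b), a minimal sketch, assuming config.py exposes its options through yacs-style attribute assignment (the path below is hypothetical):

# In TADAM/configs/config.py
__C.PATHS.DATASET_ROOT = "/path/to/datasets"  # hypothetical path; point this at your datasets folder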

6. Download Models

The training base of TADAM is a detector pretrained on COCO. The base model coco_checkpoint.pth is provided in Google Drive.

Trained models are also provided for reference:

  • TADAM_MOT16.pth
  • TADAM_MOT17.pth
  • TADAM_MOT20.pth

Create a folder output/models and place all models inside.

Train

  1. Training on a single GPU, with MOT17 as an example:
python -m lib.training.train TADAM_MOT17 --config TADAM_MOT17

The first TADAM_MOT17 specifies the output name of the trained model, which can be changed as preferred.

The second TADAM_MOT17 refers to the config file lib/configs/TADAM_MOT17.yaml, which loads training parameters. Switch configs to train on the respective datasets (see the example below). Config files are located in lib/configs.
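For example, to train on MOT16 with a custom output name (assuming a TADAM_MOT16.yaml config is present in lib/configs, like the MOT17 one):

python -m lib.training.train my_TADAM_MOT16 --config TADAM_MOT16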

  2. Training on multiple GPUs with Distributed Data Parallel:
OMP_NUM_THREADS=1 python -m torch.distributed.launch --nproc_per_node=2 --use_env -m lib.training.train TADAM_MOT17 --config TADAM_MOT17

The argument --nproc_per_node=2 specifies how many GPUs to use for training; here 2 cards are used.
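For example, keeping everything else identical, training on 4 GPUs only changes the process count:

OMP_NUM_THREADS=1 python -m torch.distributed.launch --nproc_per_node=4 --use_env -m lib.training.train TADAM_MOT17 --config TADAM_MOT17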

The trained model will be stored inside output/models under the specified output name.

Evaluate

python -m lib.tracking.test_tracker --result-name xxx --config TADAM_MOT17 --evaluation

Change xxx to the preferred result name. --evaluation toggles evaluation on right after obtaining tracking results; remove it to produce results without evaluation (see the example below). Evaluation requires results for all sequences of the specified dataset.
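For example, to generate tracking results only, without evaluation:

python -m lib.tracking.test_tracker --result-name xxx --config TADAM_MOT17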

Either run evaluation after training, or download and test the provided trained models.

Note that if the output name of the trained model is changed, it must also be specified in the corresponding .yaml config file, i.e. replace the value of MODEL: TADAM_MOT17.pth with the expected model file name, as in the sketch below.
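A sketch of the relevant line in the .yaml config, assuming a trained model saved as my_TADAM_MOT17.pth (a hypothetical name):

MODEL: my_TADAM_MOT17.pth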

Code from TrackEval is used for evaluation, and it is set to run on multiple cores (8 cores) by default.

To run an evaluation after obtaining tracking results (with all sequence result files), run:

python -m lib.utils.official_benchmark --result-name xxx --config TADAM_MOT17

Replace xxx with the result name, and choose config accordingly.

Tracking results can be found in output/results under the respective dataset name folders. The detailed result is stored in an xxx_detailed.csv file, while the summary is given in an xxx_summary.txt file.

Results for reference

The evaluation results on the train sets are given here for reference. See the paper for the reported test set results.

  • MOT16
MOTA	MOTP	MODA	CLR_Re	CLR_Pr	MTR	PTR	MLR	CLR_TP	CLR_FN
63.7	91.6	63.9	64.5	99.0	35.6	40.8	23.6	71242	39165
CLR_FP	IDSW	MT	PT	ML	Frag	sMOTA	IDF1	IDR	IDP
689	186	184	211	122	316	58.3	68.0	56.2	86.2
IDTP	IDFN	IDFP	Dets	GT_Dets	IDs	GT_IDs
62013	48394	9918	71931	110407	446	517
  • MOT17
MOTA	MOTP	MODA	CLR_Re	CLR_Pr	MTR	PTR	MLR	CLR_TP	CLR_FN
68.0	91.3	68.2	69.0	98.8	43.5	37.5	19.0	232600	104291
CLR_FP	IDSW	MT	PT	ML	Frag	sMOTA	IDF1	IDR	IDP
2845	742	712	615	311	1182	62.0	71.6	60.8	87.0
IDTP	IDFN	IDFP	Dets	GT_Dets	IDs	GT_IDs
204819	132072	30626	235445	336891	1455	1638
  • MOT20
MOTA	MOTP	MODA	CLR_Re	CLR_Pr	MTR	PTR	MLR	CLR_TP	CLR_FN
80.2	87.0	80.4	82.2	97.9	64.0	28.8	7.18	932899	201715
CLR_FP	IDSW	MT	PT	ML	Frag	sMOTA	IDF1	IDR	IDP
20355	2275	1418	638	159	2737	69.5	72.3	66.5	79.2
IDTP	IDFN	IDFP	Dets	GT_Dets	IDs	GT_IDs
754621	379993	198633	953254	1134614	2953	2215

Reproduced results could differ slightly; small variations are acceptable.

Visualization

A visualization tool is provided to preview datasets' ground-truths, provided detections, and generated tracking results.

python -m lib.utils.visualization --config TADAM_MOT17 --which-set train --sequence 02 --public-detection FRCNN --result xxx --start-frame 1 --scale 0.8

Specify config files, train/test split, and sequence with --config, --which-set, --sequence respectively. --public-detection should only be specified for MOT17.

Replace --result xxx with the tracking result name. --start-frame 1 means viewing from frame 1, while --scale 0.8 resizes the viewing window by the given ratio.
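For example, to preview a MOT16 train sequence, drop --public-detection (it only applies to MOT17):

python -m lib.utils.visualization --config TADAM_MOT16 --which-set train --sequence 02 --result xxx --start-frame 1 --scale 0.8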

Commands in visualization window:

  • "<": previous frame
  • ">": next frame
  • "t": toggle between viewing ground_truths, provided detections, and tracking results
  • "s": save current frame with all rendered elements
  • "h": hide frame information on window's top-left corner
  • "i": hide identity index on bounding boxes' top-left corner
  • "Esc" or "q": exit program

Pretrain detector on COCO

The base detector is pretrained on the COCO dataset before training on MOT. A Faster R-CNN with FPN and a ResNet-101 backbone is adopted in this code; it can be replaced by other similar detectors with code modifications.
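For orientation, a minimal torchvision sketch of the detector architecture named above (an illustration, not the repository's exact pretraining code):

import torch
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# ResNet-101 backbone with a Feature Pyramid Network
backbone = resnet_fpn_backbone("resnet101", pretrained=True)
# 91 classes matches the COCO detection setup (including background)
model = FasterRCNN(backbone, num_classes=91)

# Inference on a dummy image: returns a list of dicts with boxes, labels, scores
model.eval()
with torch.no_grad():
    predictions = model([torch.rand(3, 480, 640)])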

Refer to the Object detection reference training scripts for how to train a PyTorch-based detector.

See Tracking without bells and whistles for a hands-on Jupyter notebook, which is also based on the aforementioned reference scripts.

Publication

If you use the code in your research, please cite:

@InProceedings{TADAM_2021_CVPR,
    author = {Guo, Song and Wang, Jingya and Wang, Xinchao and Tao, Dacheng},
    title = {Online Multiple Object Tracking With Cross-Task Synergy},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2021},
}