Traffic4D: Single View Reconstruction of Repetitious Activity Using Longitudinal Self-Supervision

Overview

Project | PDF | Poster
Fangyu Li, N. Dinesh Reddy, Xudong Chen and Srinivasa G. Narasimhan
Proceedings of IEEE Intelligent Vehicles Symposium (IV'21)
Best Paper Award

Following the instructions below, you will obtain 2D keypoints, reconstructed vehicle trajectories, and vehicular activity clustering results.

Set up

The setup process can be skipped if you use Docker; see the "Docker" section.

Python

Python 3.6.9 is used. The required Python packages are listed in requirements.txt.

git clone https://github.com/Emrys-Lee/Traffic4D-Release.git
sudo apt-get install python3.6
sudo apt-get install python3-pip
cd Traffic4D-Release
pip3 install -r requirements.txt

C++

Traffic4D uses the C++ libraries ceres and pybind11 for efficient optimization. pybind11 requires the clang compiler, so Traffic4D is built with clang.

Install clang compiler

sudo apt-get install clang++-6.0

Install prerequisites for ceres

# CMake
sudo apt-get install cmake
# google-glog + gflags
sudo apt-get install libgoogle-glog-dev libgflags-dev
# BLAS & LAPACK
sudo apt-get install libatlas-base-dev
# Eigen3
sudo apt-get install libeigen3-dev
# SuiteSparse and CXSparse (optional)
sudo apt-get install libsuitesparse-dev

Download and install ceres

wget https://github.com/ceres-solver/ceres-solver/archive/1.12.0.zip
unzip 1.12.0.zip
cd ceres-solver-1.12.0/
mkdir build
cd build
cmake ..
make
sudo make install

Download and install pybind

git clone https://github.com/pybind/pybind11
cd pybind11
cmake .
make
sudo make install

Build Traffic4D optimization library

cd Traffic4D-Release/src/ceres
make

ceres_reconstruct.so and ceres_spline.so are generated under Traffic4D-Release/src/ceres/.
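
To verify that the build produced importable Python modules, a minimal check like the one below can be run from the repository root. This is only a sketch: it assumes the generated .so files are pybind11 extension modules importable by their file names, and it calls no functions inside them.

# verify that the optimization libraries built above can be loaded from Python
import importlib
import sys

sys.path.append("src/ceres")  # run from the Traffic4D-Release root

for name in ("ceres_reconstruct", "ceres_spline"):
    module = importlib.import_module(name)
    print("loaded", name, "from", module.__file__)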

Dataset

Download the dataset and pre-generated results from here, and put them under Traffic4D-Release/.

cd Traffic4D-Release
mv Data-Traffic4D.zip ./
unzip Data-Traffic4D.zip

The directory structure should look like this:

Traffic4D-Release/
    Data-Traffic4D/
    └───fifth_morewood/
        └───fifth_morewood_init.vd
        └───top_view.png
        └───images/
                00001.jpg
                00002.jpg
                ...
                06288.jpg
    └───arterial_kennedy/
        └───arterial_kennedy_init.vd
        └───top_view.png
        └───images/
                <put AI City Challenge frames here>
        ...
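
To confirm the dataset is unpacked as shown above, a small check such as the following can be run from the repository root. This is a sketch only; the intersection and file names are the ones listed in the tree.

# verify the dataset layout described above (run from Traffic4D-Release/)
import os

for name in ("fifth_morewood", "arterial_kennedy"):
    base = os.path.join("Data-Traffic4D", name)
    for item in (name + "_init.vd", "top_view.png", "images"):
        path = os.path.join(base, item)
        print(path, "ok" if os.path.exists(path) else "MISSING")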

The input and output paths can be modified in config/*.yml.
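
To see exactly which paths a given configuration defines, a quick inspection sketch like the one below can help. It assumes only that the .yml files are standard YAML and that PyYAML is available; the keys printed are whatever the file itself contains.

# print the settings defined in a Traffic4D config file
import sys
import yaml

with open(sys.argv[1]) as f:  # e.g. python3 inspect_config.py config/fifth_morewood.yml
    cfg = yaml.safe_load(f)

for key, value in cfg.items():
    print(key, ":", value)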

Explanation

1. Input videos

Sample videos used in Traffic4D are provided. Note that arterial_kennedy and dodge_century are from the Nvidia AI City Challenge City-Scale Multi-Camera Vehicle Tracking Challenge Track. Please request access to the dataset here. Once you have the data, run

ffmpeg -i <mtmc-dir>/train/S01/c001/vdo.avi Traffic4D-Release/Data-Traffic4D/arterial_kennedy/images/%05d.jpg
ffmpeg -i <mtmc-dir>/test/S02/c007/vdo.avi Traffic4D-Release/Data-Traffic4D/dodge_century/images/%05d.jpg

to extract frames into images/.
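
If ffmpeg is not available, a rough OpenCV-based equivalent is sketched below. It mirrors the %05d.jpg naming used above; the video path is a placeholder for the AI City Challenge file, and cv2 is assumed to be installed via requirements.txt.

# extract video frames into images/ with OpenCV instead of ffmpeg
import os
import cv2

video_path = "vdo.avi"  # placeholder, e.g. <mtmc-dir>/train/S01/c001/vdo.avi
out_dir = "Data-Traffic4D/arterial_kennedy/images"
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
index = 1
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(os.path.join(out_dir, "%05d.jpg" % index), frame)  # 00001.jpg, 00002.jpg, ...
    index += 1
cap.release()
print("wrote", index - 1, "frames to", out_dir)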

2. Pre-Generated 2D results

Detected 2D bounding boxes, keypoints, and tracking IDs are stored in *_init.vd. See the Occlusion-Net implementation for keypoint detection and V-IOU for multi-object tracking.
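
The .vd serialization itself is not documented here, but each entry carries the information listed above. The record below is a purely illustrative sketch of that per-detection content, not the actual Traffic4D file format or class names.

# purely illustrative: the kind of per-detection record stored in *_init.vd
from typing import List, NamedTuple, Tuple

class Detection2D(NamedTuple):
    frame_id: int                             # frame index, e.g. 00001.jpg -> 1
    track_id: int                             # vehicle ID from V-IOU tracking
    bbox: Tuple[float, float, float, float]   # 2D bounding box (x1, y1, x2, y2)
    keypoints: List[Tuple[float, float]]      # 2D keypoints from Occlusion-Net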

3. Output folder

The folder Traffic4D-Release/Result/ will be created by default.

Experiments

Run python3 exp/traffic4d.py config/<intersection_name>.yml <action>. YAML configuration files for multiple intersections are provided under the config/ folder. <action> should be reconstruction or clustering; run reconstruction first and then clustering to perform longitudinal reconstruction and activity clustering sequentially. For example, the commands below process the Fifth and Morewood intersection.

cd Traffic4D-Release
python3 exp/traffic4d.py config/fifth_morewood.yml reconstruction
python3 exp/traffic4d.py config/fifth_morewood.yml clustering

Results

Find these results in the output folder:

  1. 2D keypoints: Once 3D reconstruction has finished, the reprojected 2D keypoints are plotted in Traffic4D-Release/Result/<intersection_name>_keypoints/.
  2. 3D reconstructed trajectories and clusters: The clustered 3D trajectories are plotted on the top-view map as Traffic4D-Release/Result/<intersection_name>_top_view.jpg (see the viewing sketch below).
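
A minimal sketch for displaying these outputs with OpenCV and matplotlib (both already project dependencies) is shown below; the fifth_morewood intersection is used as an example.

# display the clustered top-view result for one intersection
import cv2
import matplotlib.pyplot as plt

img = cv2.imread("Result/fifth_morewood_top_view.jpg")  # run from Traffic4D-Release/
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))  # OpenCV loads BGR; matplotlib expects RGB
plt.axis("off")
plt.show()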

Docker

We provide a Docker image with all dependencies already set up, so the steps in "Set up" can be skipped. You still need to clone the repo, download the dataset, and put it under Traffic4D-Release/.

git clone https://github.com/Emrys-Lee/Traffic4D-Release.git

Pull the Traffic4D Docker image.

docker pull emrysli/traffic4d-release:latest

Then create a container and map the cloned repo into it so the dataset is accessible inside the container. For example, if the cloned repo is located at the host directory /home/xxx/Traffic4D-Release, <path_to_repo> should be /home/xxx. If <path_in_container> is /home/yyy, then /home/xxx/Traffic4D-Release will be mapped to /home/yyy/Traffic4D-Release inside the container.

docker run -it -v <path_to_repo>/Traffic4D-Release:<path_in_container>/Traffic4D-Release emrysli/traffic4d-release:latest /bin/bash

Inside the container, compile Traffic4D again.

# inside container
cd <path_in_container>/Traffic4D-Release/src/ceres
make

Run experiments.

cd <path_in_container>/Traffic4D-Release
python3 exp/traffic4d.py config/fifth_morewood.yml reconstruction
python3 exp/traffic4d.py config/fifth_morewood.yml clustering

Troubleshooting

  1. tkinter module is missing
File "/usr/local/lib/python3.6/dist-packages/matplotlib/backends/_backend_tk.py", line 5, in <module>
    import tkinter as Tk
ModuleNotFoundError: No module named 'tkinter'

Solution: install tkinter.

sudo apt-get install python3-tk

  2. OpenCV import error such as
File "/usr/local/lib/python3.6/dist-packages/cv2/__init__.py", line 3, in <module>
    from .cv2 import *
ImportError: libSM.so.6: cannot open shared object file: No such file or directory

Solution: install the missing libraries.

sudo apt-get install libsm6 libxrender1 libfontconfig1 libxext6

Citation

Traffic4D

@conference{Li-2021-127410,
author = {Fangyu Li and N. Dinesh Reddy and Xudong Chen and Srinivasa G. Narasimhan},
title = {Traffic4D: Single View Reconstruction of Repetitious Activity Using Longitudinal Self-Supervision},
booktitle = {Proceedings of IEEE Intelligent Vehicles Symposium (IV '21)},
year = {2021},
month = {July},
publisher = {IEEE},
keywords = {Self-Supervision, Vehicle Detection, 4D Reconstruction, 3D Reconstruction, Pose Estimation},
}

Occlusion-Net

@inproceedings{onet_cvpr19,
author = {Reddy, N. Dinesh and Vo, Minh and Narasimhan, Srinivasa G.},
title = {Occlusion-Net: 2D/3D Occluded Keypoint Localization Using Graph Networks},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
pages = {7326--7335},
year = {2019}
}