Immortal tracker

Prerequisites

Our code is tested with Python 3.6.
To install the required libraries:

pip install -r requirements.txt
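As an optional sanity check (not part of the original setup steps), the following sketch verifies the interpreter version and that the pinned requirements resolve; it assumes requirements.txt sits in the repository root and contains plain requirement lines.

# check_env.py -- minimal sketch, assuming requirements.txt is in the repo root
import sys
import pkg_resources

assert sys.version_info >= (3, 6), "this code is tested with Python 3.6"

with open("requirements.txt") as f:
    reqs = [line.strip() for line in f if line.strip() and not line.startswith("#")]

# raises DistributionNotFound / VersionConflict if anything is missing or mismatched
pkg_resources.require(reqs)
print("Python", sys.version.split()[0], "-", len(reqs), "requirements satisfied")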

Waymo Open Dataset

Prepare dataset & off-the-shelf detections

Download WOD perception dataset:

# Waymo Dataset
└── waymo
       ├── training (not required)
       ├── validation
       └── testing
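As an optional check (not part of the original instructions), a short sketch that confirms this layout and counts the downloaded segments; the dataset root below is a placeholder.

# minimal sketch: verify the WOD folder layout (the root path is a placeholder)
from pathlib import Path

root = Path("/path/to/waymo")               # adjust to where you extracted the dataset
for split in ("validation", "testing"):     # "training" is not required by this repo
    segments = list((root / split).glob("*.tfrecord"))
    print(split, ":", len(segments), ".tfrecord segments")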

To extract timestamp and ego-pose information from the .tfrecord files, run the following:

bash preparedata/waymo/waymo_preparedata.sh /<path to WOD>/waymo
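For reference, the information this step gathers can also be read directly with the WOD devkit; a minimal sketch, independent of the provided script, assuming the tensorflow and waymo-open-dataset packages are installed and using a placeholder segment path:

# minimal sketch: read per-frame timestamps and ego poses from one WOD segment
import numpy as np
import tensorflow as tf
from waymo_open_dataset import dataset_pb2

segment = "/path/to/waymo/validation/segment-xxxx.tfrecord"   # placeholder path
for record in tf.data.TFRecordDataset(segment, compression_type=""):
    frame = dataset_pb2.Frame()
    frame.ParseFromString(record.numpy())
    timestamp = frame.timestamp_micros                         # microseconds
    ego_pose = np.array(frame.pose.transform).reshape(4, 4)    # vehicle-to-global transform
    print(timestamp, ego_pose[:3, 3])                          # ego translation per frame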

   

Run the following to convert detection results into .npz files. The detection results should be in the official WOD submission format (.bin).
We recommend using the CenterPoint (two-frame model for tracking) detection results to reproduce our results. Please follow https://github.com/tianweiy/CenterPoint or email its author for the CenterPoint detection results.

bash preparedata/waymo/waymo_convert_detection.sh <path to detection results>/detection_result.bin cp
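To sanity-check a detection file before converting it, here is a minimal sketch that parses the official WOD submission format with the devkit protos (waymo-open-dataset assumed installed; the path is a placeholder):

# minimal sketch: inspect a .bin detection file in official WOD submission format
from collections import Counter
from waymo_open_dataset.protos import metrics_pb2

objects = metrics_pb2.Objects()
with open("/path/to/detection_result.bin", "rb") as f:
    objects.ParseFromString(f.read())

frames = Counter((o.context_name, o.frame_timestamp_micros) for o in objects.objects)
print(len(objects.objects), "boxes over", len(frames), "frames")
print("first box:", objects.objects[0].object.box, "score:", objects.objects[0].score)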

#you can also use other detections:
#bash preparedata/waymo/waymo_convert_detection.sh <path to detection results>/detection_result.bin <detection name>

Inference

Use the following command to run inference on WOD. The validation set is used by default.

python main_waymo.py --name immortal --det_name cp --config_path configs/waymo_configs/immortal.yaml --process 8

Evaluation with WOD official devkit:

Follow https://github.com/waymo-research/waymo-open-dataset to build the evaluation tools, then run the following commands for evaluation:

#Convert the tracking results into .bin file
python evaluation/waymo/pred_bin.py --name immortal
#For evaluation
<path to WOD devkit>/bazel-bin/waymo_open_dataset/metrics/tools/compute_tracking_metrics_main mot_results/waymo/validation/immortal/bin/pred.bin <path to validation ground truth>/validation_gt.bin

nuScenes Dataset

Prepare dataset & off-the-shelf detections

Download the nuScenes perception dataset:

# For nuScenes Dataset
└── NUSCENES_DATASET_ROOT
       ├── samples
       ├── sweeps
       ├── maps
       ├── v1.0-trainval
       └── v1.0-test
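As an optional check (not part of the original instructions), the layout can be verified by loading it with the official devkit; a minimal sketch with a placeholder dataroot:

# minimal sketch: confirm the nuScenes tables load from this layout
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version="v1.0-trainval", dataroot="/path/to/NUSCENES_DATASET_ROOT", verbose=True)
print(len(nusc.scene), "scenes,", len(nusc.sample), "keyframe samples")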

To extract timestamp and ego-pose information, run the following:

bash preparedata/nuscenes/nu_preparedata.sh <path to nuScenes>/nuscenes
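For reference, the same kind of information can also be read directly with the nuscenes-devkit; a minimal sketch, independent of the provided script (the dataroot is a placeholder):

# minimal sketch: per-sample timestamp and ego pose via the nuscenes-devkit
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version="v1.0-trainval", dataroot="/path/to/nuscenes", verbose=False)
sample = nusc.sample[0]
lidar = nusc.get("sample_data", sample["data"]["LIDAR_TOP"])
ego_pose = nusc.get("ego_pose", lidar["ego_pose_token"])
print(sample["timestamp"], ego_pose["translation"], ego_pose["rotation"])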

   

Run the following to convert detection results into .npz files. The detection results should be in the official nuScenes submission format (.json).
We recommend using the CenterPoint (two-frame model for tracking) detection results to reproduce our results.

bash preparedata/nuscenes/nu_convert_detection.sh <path to detection results>/detection_result.json cp
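To sanity-check a detection file before converting it, here is a minimal sketch that reads the official nuScenes submission format (the path is a placeholder; field names follow the nuScenes detection submission schema):

# minimal sketch: peek at a .json detection file in official nuScenes submission format
import json

with open("/path/to/detection_result.json") as f:
    submission = json.load(f)

results = submission["results"]                 # sample_token -> list of detected boxes
print(len(results), "samples,", sum(len(v) for v in results.values()), "boxes")
box = next(iter(results.values()))[0]
print(box["detection_name"], box["detection_score"], box["translation"], box["size"])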

#you can also use other detections:
#bash preparedata/nuscenes/nu_convert_detection.sh <path to detection results>/detection_result.json <detection name>

Inference

Use the following command to run inference on nuScenes. The validation set is used by default.

python main_nuscenes.py --name immortal --det_name cp --config_path configs/nu_configs/immortal.yaml --process 8

Evaluation with nuScenes official devkit:

Follow https://github.com/nutonomy/nuscenes-devkit to build the official evaluation tools for nuScenes, then run the following commands for evaluation:

#To convert tracking results into .json format
bash evaluation/nuscenes/pipeline.sh immortal
#To evaluate
python <path to nuscenes-devkit>/nuscenes-devkit/python-sdk/nuscenes/eval/tracking/evaluate.py \
"./mot_results/nuscenes/validation_2hz/immortal/results/results.json" \
--output_dir "./mot_results/nuscenes/validation_2hz/immortal/results" \
--eval_set "val" \
--dataroot <path to nuScenes>/nuscenes
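After the evaluator finishes, it writes a metrics summary into the --output_dir used above; a minimal sketch for reading it back (the file and field names are assumptions about the standard devkit output, not confirmed by this README):

# minimal sketch: read back the summary written by the nuScenes tracking evaluator
# file name and keys ("amota", "amotp") are assumptions about the devkit output
import json

with open("./mot_results/nuscenes/validation_2hz/immortal/results/metrics_summary.json") as f:
    metrics = json.load(f)
print("AMOTA:", metrics.get("amota"), "AMOTP:", metrics.get("amotp"))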

    
   
"Projelerle Yapay Zeka Ve Bilgisayarlı Görü" Kitabımın projeleri

"Projelerle Yapay Zeka Ve Bilgisayarlı Görü" Kitabımın projeleri Bu Github Reposundaki tüm projeler; kaleme almış olduğum "Projelerle Yapay Zekâ ve Bi

Ümit Aksoylu 4 Aug 03, 2022
Implementation of hyperparameter optimization/tuning methods for machine learning & deep learning models

Hyperparameter Optimization of Machine Learning Algorithms This code provides a hyper-parameter optimization implementation for machine learning algor

Li Yang 1.1k Dec 19, 2022
U-Net Implementation: Convolutional Networks for Biomedical Image Segmentation" using the Carvana Image Masking Dataset in PyTorch

U-Net Implementation By Christopher Ley This is my interpretation and implementation of the famous paper "U-Net: Convolutional Networks for Biomedical

Christopher Ley 1 Jan 06, 2022
[IROS2021] NYU-VPR: Long-Term Visual Place Recognition Benchmark with View Direction and Data Anonymization Influences

NYU-VPR This repository provides the experiment code for the paper Long-Term Visual Place Recognition Benchmark with View Direction and Data Anonymiza

Automation and Intelligence for Civil Engineering (AI4CE) Lab @ NYU 22 Sep 28, 2022
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning

Awesome production machine learning This repository contains a curated list of awesome open source libraries that will help you deploy, monitor, versi

The Institute for Ethical Machine Learning 12.9k Jan 04, 2023
Stacked Recurrent Hourglass Network for Stereo Matching

SRH-Net: Stacked Recurrent Hourglass Introduction This repository is supplementary material of our RA-L submission, which helps reviewers to understan

28 Jan 03, 2023
Code and models for "Rethinking Deep Image Prior for Denoising" (ICCV 2021)

DIP-denosing This is a code repo for Rethinking Deep Image Prior for Denoising (ICCV 2021). Addressing the relationship between Deep image prior and e

Computer Vision Lab. @ GIST 36 Dec 29, 2022
Official implementation of Unfolded Deep Kernel Estimation for Blind Image Super-resolution.

Unfolded Deep Kernel Estimation for Blind Image Super-resolution Hongyi Zheng, Hongwei Yong, Lei Zhang, "Unfolded Deep Kernel Estimation for Blind Ima

Z80 15 Dec 26, 2022
🚀 An end-to-end ML applications using PyTorch, W&B, FastAPI, Docker, Streamlit and Heroku

🚀 An end-to-end ML applications using PyTorch, W&B, FastAPI, Docker, Streamlit and Heroku

Made With ML 82 Jun 26, 2022
YOLOX is a high-performance anchor-free YOLO, exceeding yolov3~v5 with ONNX, TensorRT, ncnn, and OpenVINO supported.

Introduction YOLOX is an anchor-free version of YOLO, with a simpler design but better performance! It aims to bridge the gap between research and ind

7.7k Jan 03, 2023
A multi-scale unsupervised learning for deformable image registration

A multi-scale unsupervised learning for deformable image registration Shuwei Shao, Zhongcai Pei, Weihai Chen, Wentao Zhu, Xingming Wu and Baochang Zha

ShuweiShao 2 Apr 13, 2022
Hooks for VCOCO

Verbs in COCO (V-COCO) Dataset This repository hosts the Verbs in COCO (V-COCO) dataset and associated code to evaluate models for the Visual Semantic

Saurabh Gupta 131 Nov 24, 2022
Demo project for real time anomaly detection using kafka and python

kafkaml-anomaly-detection Project for real time anomaly detection using kafka and python It's assumed that zookeeper and kafka are running in the loca

Rodrigo Arenas 36 Dec 12, 2022
SwinIR: Image Restoration Using Swin Transformer

SwinIR: Image Restoration Using Swin Transformer This repository is the official PyTorch implementation of SwinIR: Image Restoration Using Shifted Win

Jingyun Liang 2.4k Jan 05, 2023
Over-the-Air Ensemble Inference with Model Privacy

Over-the-Air Ensemble Inference with Model Privacy This repository contains simulations for our private ensemble inference method. Installation Instal

Selim Firat Yilmaz 1 Jun 29, 2022
A universal memory dumper using Frida

Fridump Fridump (v0.1) is an open source memory dumping tool, primarily aimed to penetration testers and developers. Fridump is using the Frida framew

551 Jan 07, 2023
Neighborhood Reconstructing Autoencoders

Neighborhood Reconstructing Autoencoders The official repository for Neighborhood Reconstructing Autoencoders (Lee, Kwon, and Park, NeurIPS 2021). T

Yonghyeon Lee 24 Dec 14, 2022
Sequential GCN for Active Learning

Sequential GCN for Active Learning Please cite if using the code: Link to paper. Requirements: python 3.6+ torch 1.0+ pip libraries: tqdm, sklearn, sc

45 Dec 26, 2022
Multi-Scale Aligned Distillation for Low-Resolution Detection (CVPR2021)

MSAD Multi-Scale Aligned Distillation for Low-Resolution Detection Lu Qi*, Jason Kuen*, Jiuxiang Gu, Zhe Lin, Yi Wang, Yukang Chen, Yanwei Li, Jiaya J

Jia Research Lab 115 Dec 23, 2022