PyTorch implementation of HDN (Homography Decomposition Networks) for planar object tracking

Overview

Homography Decomposition Networks for Planar Object Tracking

This project is the official PyTorch implementation of HDN (Homography Decomposition Networks) for planar object tracking (AAAI 2022, accepted).

Project Page | Paper

@misc{zhan2021homography,
      title={Homography Decomposition Networks for Planar Object Tracking}, 
      author={Xinrui Zhan and Yueran Liu and Jianke Zhu and Yang Li},
      year={2021},
      eprint={2112.07909},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Installation

Please find installation instructions in INSTALL.md.

Quick Start: Using HDN

Add HDN to your PYTHONPATH

vim ~/.bashrc
# add home of project to PYTHONPATH
export PYTHONPATH=/path/to/HDN:/path/to/HDN/homo_estimator/Deep_homography/Oneline_DLTv1:$PYTHONPATH
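To check that the paths took effect, open a new shell and try importing the config module; this quick check assumes the repository exposes hdn as an importable package:

source ~/.bashrc
python -c "import hdn.core.config; print('HDN found on PYTHONPATH')"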

Download models

Google Drive or Baidu Netdisk (key: 8uhq)

Base Setting

The global parameter settings file is hdn/core/config.py. You first need to set the base paths:

__C.BASE.PROJ_PATH = '/xxx/xxx/project_root/'  # e.g. /home/Kay/SOT/server_86/HDN/   (path_to_hdn)
__C.BASE.BASE_PATH = '/xxx/xxx/'               # e.g. /home/Kay/SOT/                 (base_path_to_workspace)
__C.BASE.DATA_PATH = '/xxx/xxx/data/POT'       # e.g. /home/Kay/data/POT             (path to the POT dataset)
__C.BASE.DATA_ROOT = '/xxx/xxx'                # e.g. /home/Kay/Data/Dataset/        (path to other datasets)
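It can help to verify these paths from a Python shell before going further; the minimal sketch below assumes config.py exposes the usual cfg object (as in pysot-style codebases) and only checks that each directory exists:

import os
from hdn.core.config import cfg   # assumed export; adapt if your local config differs

for name in ("PROJ_PATH", "BASE_PATH", "DATA_PATH", "DATA_ROOT"):
    path = getattr(cfg.BASE, name)
    print(name, path, "ok" if os.path.exists(path) else "MISSING")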

Demo

For planar object tracking and its applications, we provide 4 modes:

  • tracking: tracking a planar object specified by no fewer than 4 points on the object.
  • img_replace: replacing the planar object with an image (see the OpenCV sketch after the example command below).
  • video_replace: replacing the planar object with a video.
  • mosiac: adding a mosaic effect to the planar object.
python tools/demo.py
    --snapshot model/hdn-simi-sup-hm-unsup.pth
    --config experiments/tracker_homo_config/proj_e2e_GOT_unconstrained_v2.yaml
    --video demo/door.mp4
    --mode img_replace
    --img_insert demo/coke2.jpg                    # required in mode 'img_replace'
    --video_insert demo/t5_videos/replace-video/   # required in mode 'video_replace'
    --save                                         # whether to save the results

e.g.

python tools/demo.py  --snapshot model/hdn-simi-sup-hm-unsup.pth  --config experiments/tracker_homo_config/proj_e2e_GOT_unconstrained_v2.yaml --video demo/door.mp4 --mode img_replace --img_insert demo/coke2.jpg --save
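For context on the img_replace mode: replacing a tracked planar region boils down to warping the insert image by the homography implied by the tracked corners. The sketch below is an illustration with OpenCV, not the project's demo code; replace_planar_region and its corners input are hypothetical.

import cv2
import numpy as np

def replace_planar_region(frame, insert, corners):
    # corners: 4x2 array of the tracked object's corners in the frame,
    # ordered top-left, top-right, bottom-right, bottom-left (hypothetical input).
    h, w = insert.shape[:2]
    src = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    H = cv2.getPerspectiveTransform(src, np.float32(corners))      # 4-point homography
    warped = cv2.warpPerspective(insert, H, (frame.shape[1], frame.shape[0]))
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H,
                               (frame.shape[1], frame.shape[0]))   # region covered by the insert
    out = frame.copy()
    out[mask > 0] = warped[mask > 0]                               # paste the warped insert
    return out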

We provide some real-world videos here.

Download testing datasets

POT

For the POT dataset, download the videos from POT280 and the annotations from here.

1. Unzip POT_v.zip and POT_annotation.zip and put them in your cfg.BASE.DATA_PATH:
   cd POT_v
   unzip "*.zip"
   cd ..

2. Convert the videos to images and link the dataset into testing_dataset:
   mkdir POT
   mkdir path_to_hdn/testing_dataset
   python path_to_hdn/toolkit/benchmarks/POT/pot_video_to_pic.py   # video to images
   ln -s path_to_data/POT  path_to_hdn/testing_dataset/POT         # link to testing_dataset

3. Generate the json annotations for POT:
   python path_to_hdn/toolkit/benchmarks/POT/generate_json_for_POT.py --dataset POT210
   python path_to_hdn/toolkit/benchmarks/POT/generate_json_for_POT.py --dataset POT280
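If everything went through, the layout under testing_dataset should look roughly like the sketch below. This is illustrative only: the per-sequence folder names come from pot_video_to_pic.py and the json names/locations from generate_json_for_POT.py, so check them against the scripts.

testing_dataset/
  POT/                     # symlink to path_to_data/POT
    <sequence folders>/    # image sequences produced by pot_video_to_pic.py
    POT210.json            # hypothetical name, written by generate_json_for_POT.py
    POT280.json            # hypothetical name, written by generate_json_for_POT.py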

UCSB & POIC

Download from here and put them in your cfg.BASE.DATA_PATH:

ln -s path_to_data/UCSB  path_to_hdn/testing_dataset/UCSB   # link to testing_dataset

Generate the json annotations:

  python path_to_hdn/toolkit/benchmarks/POIC/generate_json_for_poic.py   # generate json annotation for POIC
  python path_to_hdn/toolkit/benchmarks/UCSB/generate_json_for_ucsb.py   # generate json annotation for UCSB

Other datasets:

Download the datasets and put them into the testing_dataset directory. Jsons of commonly used datasets can be downloaded from here. If you want to test the tracker on a new dataset, please refer to pysot-toolkit for how to set up testing_dataset.
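For reference, pysot-style testing jsons map each sequence name to its image list, initial box, and per-frame ground truth, roughly as sketched below; the exact fields for a new dataset should be checked against pysot-toolkit.

{
  "sequence_name": {
    "video_dir": "sequence_name",
    "init_rect": [x, y, w, h],
    "img_names": ["sequence_name/0001.jpg", "..."],
    "gt_rect": [[x, y, w, h], "..."]
  }
}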

Test tracker

  • test POT
cd experiments/tracker_homo_config
python -u ../../tools/test.py \
	--snapshot ../../model/hdn-simi-sup-hm-unsup.pth \
	--dataset POT210 \
	--config proj_e2e_GOT_unconstrained_v2.yaml \
	--vis

where --snapshot is the model path, --dataset the dataset name, --config the config file, and --vis displays the video.

The testing results will be saved in the current directory (./results/dataset/model_name/).

Eval tracker

For POT evaluation:

1. Use tools/change_pot_results_name.py to convert the result names (you need to set the path in the file).

2. Use tools/convert2Homography.py to generate the homography files (you need to set the corresponding paths in the file).

3. Use the POT toolkit to evaluate the results. Our modified version of the toolkit can be found here, or use the official one for other trackers.

For the other datasets:

For POIC, UCSB, or POT evaluation on centroid precision, success rate, robustness, etc. (assuming you are still in experiments/tracker_homo_config):

python ../../tools/eval.py \
	--tracker_path ./results \
	--dataset POIC \
	--num 1 \
	--tracker_prefix 'model'

where --tracker_path is the result path, --dataset the dataset name, --num the number of evaluation threads, and --tracker_prefix the tracker name.

The raw results can be downloaded at Google Drive or Baidu Netdisk (key:d98h)

Training 🔧

We use COCO14 and GOT10K as our training datasets. See TRAIN.md for detailed instructions.

Acknowledgement

This work is supported by the National Natural Science Foundation of China under Grants (61831015 and 62102152) and sponsored by CAAI-Huawei MindSpore Open Fund.

Our code is based on SiamBAN and DeepHomography.

License

This project is released under the Apache 2.0 license.
