An Implementation of SiameseRPN with Feature Pyramid Networks

Overview

SiameseRPN with FPN

This project is mainly based on HelloRicky123/Siamese-RPN. What I've done is add a Feature Pyramid Network (FPN) on top of the original AlexNet backbone.

For more details about SiameseRPN, please refer to the paper: High Performance Visual Tracking with Siamese Region Proposal Network by Bo Li, Junjie Yan, Wei Wu, Zheng Zhu, and Xiaolin Hu.

For more details about Feature Pyramid Networks, please refer to the paper: Feature Pyramid Networks for Object Detection by Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie.

Networks

  • Siamese Region Proposal Networks

    (figure: Siamese Region Proposal Network architecture)

  • Feature Pyramid Networks

    (figure: Feature Pyramid Network architecture)

  • SiameseRPN+FPN

    • Template Branch

      (figure: template branch)

    • Detection Branch

      (figure: detection branch)
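
To make the fusion concrete, here is a minimal PyTorch sketch of how FPN-style lateral connections could combine two stages of an AlexNet-like backbone (the "2+5" combination from the tables below). The channel widths, strides, and module names are illustrative assumptions, not the repository's actual code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AlexNetFPNSketch(nn.Module):
    """Illustrative only: a truncated AlexNet-style backbone with one
    FPN-style top-down merge between conv2 and conv5 features."""
    def __init__(self, out_channels=256):
        super().__init__()
        # Backbone stages (kernel sizes and channel widths chosen for illustration).
        self.conv1 = nn.Sequential(nn.Conv2d(3, 96, 11, stride=2), nn.ReLU(), nn.MaxPool2d(3, 2))
        self.conv2 = nn.Sequential(nn.Conv2d(96, 256, 5), nn.ReLU(), nn.MaxPool2d(3, 2))
        self.conv3 = nn.Sequential(nn.Conv2d(256, 384, 3), nn.ReLU())
        self.conv4 = nn.Sequential(nn.Conv2d(384, 384, 3), nn.ReLU())
        self.conv5 = nn.Sequential(nn.Conv2d(384, 256, 3), nn.ReLU())
        # 1x1 lateral convolutions project the selected stages to a common width.
        self.lateral2 = nn.Conv2d(256, out_channels, 1)
        self.lateral5 = nn.Conv2d(256, out_channels, 1)
        # 3x3 smoothing convolution applied after the top-down merge.
        self.smooth = nn.Conv2d(out_channels, out_channels, 3, padding=1)

    def forward(self, x):
        c2 = self.conv2(self.conv1(x))
        c5 = self.conv5(self.conv4(self.conv3(c2)))
        # Top-down pathway: upsample the deep map to the shallow map's size
        # and add it to the shallow map's lateral projection ("2+5").
        p5 = self.lateral5(c5)
        p2 = self.lateral2(c2) + F.interpolate(p5, size=c2.shape[-2:], mode="nearest")
        return self.smooth(p2)

if __name__ == "__main__":
    z = torch.randn(1, 3, 127, 127)        # 127x127 template patch, as in SiamRPN
    print(AlexNetFPNSketch()(z).shape)     # fused feature map fed to the RPN heads

In this sketch the fused map would replace the single conv5 feature that the baseline feeds to the template and detection branches shown above.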

Results

This project achieves an AUC of 0.618 on OTB100, an overall improvement of about 1.3% over the baseline Siamese-RPN. The ablation study also shows that the tracker performs consistently across different operating systems and GPUs.
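
The 1.3% figure can be checked against the baseline AUC of 0.610 reported in the Optimization tables below:

# Relative AUC improvement of the best FPN variant (layers 2+5) over the
# baseline on OTB100, using the numbers from the Optimization section.
baseline_auc, fpn_auc = 0.610, 0.618
print(f"{(fpn_auc - baseline_auc) / baseline_auc:.1%}")  # prints 1.3%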

Data preparation

I only used pre-trained models for my experiments, so here I only provide the testing dataset, OTB100, which can be obtained from http://cvlab.hanyang.ac.kr/tracker_benchmark/

If you don't want to download it from the website above, you can use this link instead: https://pan.baidu.com/s/1vWIn8ovCGKmlgIdHdt_MkA (key: p8u4)

For more details about OTB100, please refer to the paper: Object Tracking Benchmark by Yi Wu, Jongwoo Lim, and Ming-Hsuan Yang.
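
After extracting the dataset, a quick sanity check like the one below can confirm the layout that OTB tooling typically expects (one folder per sequence, each with an img/ directory and a groundtruth_rect.txt file). The root path is a placeholder, and the expected layout is an assumption based on the standard OTB benchmark format rather than on this repository's code.

from pathlib import Path

otb_root = Path('/PATH/TO/OTB100')  # placeholder: point this at your extracted dataset
for seq in sorted(p for p in otb_root.iterdir() if p.is_dir()):
    # Each OTB sequence normally ships with image frames and a ground-truth file.
    missing = [name for name in ('img', 'groundtruth_rect.txt') if not (seq / name).exists()]
    if missing:
        print(f"{seq.name}: missing {', '.join(missing)}")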

Train phase

I didn't do any training myself, but the baseline training pipeline is kept in this project. If you have the ILSVRC VID dataset or the YouTube-BB dataset, the training steps are as follows.

Create dataset:

python bin/create_dataset_ytbid.py --vid-dir /PATH/TO/ILSVRC2015 --ytb-dir /PATH/TO/YT-BB --output-dir /PATH/TO/SAVE_DATA --num_threads 6

Create lmdb:

python bin/create_lmdb.py --data-dir /PATH/TO/SAVE_DATA --output-dir /PATH/TO/RESULT.lmdb --num_threads 12

Train:

python bin/train_siamrpn.py --data_dir /PATH/TO/SAVE_DATA

Test phase

If you want to test the tracker, first add the project path:

sys.path.append('[your_project_path]')

Then choose one of the layer combinations provided in net/network.py.
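
As an illustration of these two steps (the variable name below is hypothetical; the actual switches live in net/network.py and may look different):

import sys
sys.path.append('/path/to/your/project')  # the sys.path.append step described above

# Hypothetical example of picking one of the provided layer combinations
# (baseline, 2+5, 2+3+5, 2+3+4+5); check net/network.py for the real option.
FUSED_LAYERS = (2, 5)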

Finally, pass your model path and dataset path and run:

python bin/test_OTB.py -ms [your_model_path] -v tb100 -d [your_dataset_path]

Environment

I've exported my Anaconda and pip environments to env/conda_env.yaml and env/pip_requirements.txt.

If you want to reproduce the environment, run the corresponding command below.

For Anaconda:

conda env create -n [your_env_name] -f env/conda_env.yaml

For pip:

pip install -r env/pip_requirements.txt

Model Download

Model used by the baseline: https://pan.baidu.com/s/1vSvTqxaFwgmZdS00U3YIzQ (key: v91k)

Model after training for 50 epochs: https://pan.baidu.com/s/1m9ISra0B04jcmjW1n73fxg (key: 0s03)

Experimental Environment

(1) DELL Precision 7530

OS: Ubuntu 18.04 LTS

CPU: Intel(R) Core(TM) i7-8750H @ 2.20GHz

Memory: 2*8G DDR4 2666MHz

GPU: Nvidia Quadro P1000

(2) HP OMEN

OS: Windows 10 Home Edition

CPU: Intel(R) Core(TM) i7-9750H @ 2.6GHz

Memory: 2*8G DDR4 2666MHz

GPU: Nvidia GeForce RTX 2060

Optimization

On Ubuntu and Quadro P1000

  • AUCs with model siamrpn_38.pth

    Layers      Results (AUC)
    baseline    0.610
    2+5         0.618
    2+3+5       0.607
    2+3+4+5     0.611

  • AUCs with model siamrpn_50.pth

    Layers      Results (AUC)
    baseline    0.600
    2+5         0.605
    2+3+5       0.594
    2+3+4+5     0.605

On Windows 10 and Nvidia GeForce RTX 2060

  • AUCs with model siamrpn_38.pth

    Layers      Results (AUC)
    baseline    0.610
    2+5         0.617
    2+3+5       0.607
    2+3+4+5     0.612

  • AUCs with model siamrpn_50.pth

    Layers      Results (AUC)
    baseline    0.597
    2+5         0.606
    2+3+5       0.597
    2+3+4+5     0.605

Reference

[1] B. Li, J. Yan, W. Wu, Z. Zhu, X. Hu, "High Performance Visual Tracking with Siamese Region Proposal Network", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 8971-8980.

[2] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, S. Belongie, "Feature Pyramid Networks for Object Detection", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2117-2125.

[3] Y. Wu, J. Lim, M.-H. Yang, "Object Tracking Benchmark", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, pp. 1834-1848.
