This repo is customized for VisDrone.

Overview

Object Detection for VisDrone (object detection on drone-captured aerial images)

My environment

1、Windows 10 (Linux also works)
2、tensorflow >= 1.12.0
3、Python 3.6 (Anaconda)
4、opencv-python (cv2)
5、ensemble-boxes (pip install ensemble-boxes)

Datasets (XML format for training set)

(1). The dataset is available at https://github.com/VisDrone/VisDrone-Dataset
(2). Please download the XML annotations from Baidu Yun (extraction code: ia3f) or Google Drive, and configure the path in ./core/config/cfgs.py
(3). You can also use ./data/visdrone2xml.py to generate the VisDrone XML files yourself (modify the path information in the script; a rough conversion sketch follows the directory layout below).

training-set format:

├── VisDrone2019-DET-train
│     ├── Annotation (XML format)
│     ├── JPEGImages
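For reference only, the snippet below sketches how VisDrone's raw annotation lines (bbox_left, bbox_top, bbox_width, bbox_height, score, object_category, truncation, occlusion) can be turned into Pascal VOC style XML. It is not the actual ./data/visdrone2xml.py script; the category list, paths, and function name are assumptions.

```python
# Hypothetical sketch of a VisDrone -> Pascal VOC XML conversion (not the repo's actual script).
# Each VisDrone annotation line looks like:
#   <bbox_left>,<bbox_top>,<bbox_width>,<bbox_height>,<score>,<object_category>,<truncation>,<occlusion>
import xml.etree.ElementTree as ET

# Assumed category order; index 0 ("ignored regions") is skipped below.
CLASSES = ['ignored', 'pedestrian', 'people', 'bicycle', 'car', 'van',
           'truck', 'tricycle', 'awning-tricycle', 'bus', 'motor', 'others']

def convert_one(ann_path, xml_path, img_name, img_w, img_h):
    root = ET.Element('annotation')
    ET.SubElement(root, 'filename').text = img_name
    size = ET.SubElement(root, 'size')
    ET.SubElement(size, 'width').text = str(img_w)
    ET.SubElement(size, 'height').text = str(img_h)
    ET.SubElement(size, 'depth').text = '3'

    with open(ann_path) as f:
        for line in f:
            parts = line.strip().strip(',').split(',')
            if len(parts) < 6:
                continue
            x, y, w, h = map(int, parts[:4])
            cat = int(parts[5])
            if cat == 0 or cat >= len(CLASSES):  # drop ignored regions
                continue
            obj = ET.SubElement(root, 'object')
            ET.SubElement(obj, 'name').text = CLASSES[cat]
            box = ET.SubElement(obj, 'bndbox')
            ET.SubElement(box, 'xmin').text = str(x)
            ET.SubElement(box, 'ymin').text = str(y)
            ET.SubElement(box, 'xmax').text = str(x + w)
            ET.SubElement(box, 'ymax').text = str(y + h)

    ET.ElementTree(root).write(xml_path)
```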

Pretrained Models (ResNet50-vd, ResNet101-vd)

Please download the pretrained models from Baidu Yun (extraction code: krce) or Google Drive, then put them into ./data/pretrained_weights

Train

Modify the parameters in ./core/config/cfgs.py
python train_step.py
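The exact options live in ./core/config/cfgs.py; the snippet below only illustrates the kind of fields typically adjusted before training (dataset paths, backbone, image scale). All names and values here are hypothetical placeholders, not the repo's real configuration.

```python
# Hypothetical illustration of the settings usually edited in a cfgs.py-style config;
# the real option names in ./core/config/cfgs.py may differ.
DATASET_DIR = './data/VisDrone2019-DET-train'      # folder containing Annotation/ and JPEGImages/
PRETRAINED_WEIGHTS = './data/pretrained_weights/resnet50_vd.ckpt'
NET_NAME = 'resnet50_vd'                           # or 'resnet101_vd'
IMG_SHORT_SIDE_LEN = 800                           # single scale, or a list for multi-scale training
BATCH_SIZE = 1
MAX_ITERATION = 300000
```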

Eval

Modify the parameters in ./core/config/cfgs.py
python eval_visdrone.py: writes the detection results as TXT files; then use the official MATLAB toolkit to evaluate the final results.
python eval_model_ensemble.py: evaluates an ensemble of models. Before running it, set NORMALIZED_RESULTS_FOR_MODEL_ENSEMBLE=True in cfgs.py and run eval_visdrone.py to obtain normalized TXT results; the sketch below shows how such normalized boxes are typically fused.
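For reference, the pip-installed ensemble-boxes package (listed in the environment above) exposes weighted boxes fusion roughly as sketched below; it expects box coordinates normalized to [0, 1], which is presumably why normalized TXT results are required. The wrapper function and input layout are assumptions; eval_model_ensemble.py may wire this up differently.

```python
# Minimal weighted-boxes-fusion sketch using the ensemble-boxes package.
# Only the weighted_boxes_fusion call reflects the library's real API;
# the surrounding wrapper is a hypothetical placeholder.
from ensemble_boxes import weighted_boxes_fusion

def fuse(per_model_predictions, iou_thr=0.55, skip_box_thr=0.0001):
    """per_model_predictions: list of (boxes, scores, labels) tuples, one per model,
    where boxes are [x1, y1, x2, y2] normalized to [0, 1]."""
    boxes_list = [p[0] for p in per_model_predictions]
    scores_list = [p[1] for p in per_model_predictions]
    labels_list = [p[2] for p in per_model_predictions]
    boxes, scores, labels = weighted_boxes_fusion(
        boxes_list, scores_list, labels_list,
        weights=[1.0] * len(per_model_predictions),
        iou_thr=iou_thr, skip_box_thr=skip_box_thr)
    return boxes, scores, labels
```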

Visualization

Modify the parameters in ./core/config/cfgs.py
python image_demo.py: generates the visualized detection results.
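image_demo.py handles the visualization; the snippet below is only a generic OpenCV sketch of drawing detection boxes, assuming detections arrive as (xmin, ymin, xmax, ymax, class_name, score) tuples, which may not match the script's internal format.

```python
# Generic OpenCV sketch for drawing detections on an image (not the repo's image_demo.py).
import cv2

def draw_detections(image_path, detections, out_path, score_thr=0.3):
    """detections: iterable of (xmin, ymin, xmax, ymax, class_name, score) -- assumed format."""
    img = cv2.imread(image_path)
    for xmin, ymin, xmax, ymax, name, score in detections:
        if score < score_thr:
            continue
        cv2.rectangle(img, (int(xmin), int(ymin)), (int(xmax), int(ymax)), (0, 255, 0), 2)
        cv2.putText(img, f'{name} {score:.2f}', (int(xmin), max(int(ymin) - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imwrite(out_path, img)
```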

Visualized Result (multi-scale training + multi-scale testing)

Test Results (validation set):

1. ResNet50-vd

| Name | maxDets | Result (s / m) |
| --- | --- | --- |
| Average Precision (AP) @ IoU=0.50:0.95 | 500 | 31.26% / 35.1% |
| Average Precision (AP) @ IoU=0.50 | 500 | 56.44% / 60.29% |
| Average Precision (AP) @ IoU=0.75 | 500 | 30.13% / 35.42% |
| Average Recall (AR) @ IoU=0.50:0.95 | 1 | 0.78% / 0.58% |
| Average Recall (AR) @ IoU=0.50:0.95 | 10 | 6.62% / 6.05% |
| Average Recall (AR) @ IoU=0.50:0.95 | 100 | 38.21% / 40.99% |
| Average Recall (AR) @ IoU=0.50:0.95 | 500 | 48.41% / 53% |
"s" means single-scale training + single-scale testing; "m"means multi-scale training + multi-scale testing

2. ResNet101-vd

| Name | maxDets | Result (s / m) |
| --- | --- | --- |
| Average Precision (AP) @ IoU=0.50:0.95 | 500 | 31.7% / 35.98% |
| Average Precision (AP) @ IoU=0.50 | 500 | 56.94% / 61.64% |
| Average Precision (AP) @ IoU=0.75 | 500 | 30.59% / 36.13% |
| Average Recall (AR) @ IoU=0.50:0.95 | 1 | 0.67% / 0.61% |
| Average Recall (AR) @ IoU=0.50:0.95 | 10 | 6.29% / 6.13% |
| Average Recall (AR) @ IoU=0.50:0.95 | 100 | 38.66% / 42.33% |
| Average Recall (AR) @ IoU=0.50:0.95 | 500 | 49.29% / 53.68% |

3. Model Ensemble (ResNet101-vd + ResNet50-vd)

| Name | maxDets | Result |
| --- | --- | --- |
| Average Precision (AP) @ IoU=0.50:0.95 | 500 | 36.76% |
| Average Precision (AP) @ IoU=0.50 | 500 | 62.33% |
| Average Precision (AP) @ IoU=0.75 | 500 | 37.41% |
| Average Recall (AR) @ IoU=0.50:0.95 | 1 | 0.59% |
| Average Recall (AR) @ IoU=0.50:0.95 | 10 | 6.06% |
| Average Recall (AR) @ IoU=0.50:0.95 | 100 | 42.57% |
| Average Recall (AR) @ IoU=0.50:0.95 | 500 | 54.53% |
You can download the trained weights (ResNet50-vd, ResNet101-vd) from Baidu Yun (extraction code: 9u9m) or Google Drive, then put them into ./saved_weights

Reference

1、https://github.com/DetectionTeamUCAS/Faster-RCNN_Tensorflow
2、https://github.com/open-mmlab/mmdetection
3、https://github.com/ZFTurbo/Weighted-Boxes-Fusion
4、https://github.com/kobiso/CBAM-tensorflow-slim
5、https://github.com/SJTU-Thinklab-Det/DOTA-DOAI
6、https://github.com/Viredery/tf-eager-fasterrcnn
7、https://github.com/VisDrone/VisDrone2018-DET-toolkit
8、https://github.com/YunYang1994/tensorflow-yolov3
9、https://github.com/zhpmatrix/VisDrone2018
