CrossTeaching-SSOD

0. Introduction

Official code of "Mitigating the Mutual Error Amplification for Semi-Supervised Object Detection"

This repo covers training SSD300 and Faster-RCNN-FPN on the Pascal VOC benchmark. The SSD300 training scripts are based on ssd.pytorch (https://github.com/amdegroot/ssd.pytorch/), and the Faster-RCNN-FPN training scripts are based on the official Detectron2 repo (https://github.com/facebookresearch/detectron2/).

1. Environment

Python = 3.6.8

CUDA Version = 10.1

PyTorch Version = 1.6.0

detectron2 (for Faster-RCNN-FPN)
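
A minimal setup sketch, assuming a fresh conda environment (the detectron2 wheel must match your CUDA/PyTorch build; if no pre-built wheel matches, install detectron2 from source instead):

# create and activate a Python 3.6.8 environment
conda create -n crossteaching python=3.6.8 -y
conda activate crossteaching
# PyTorch 1.6.0 built against CUDA 10.1
pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# pre-built detectron2 wheel for torch 1.6 / cu101
python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.6/index.html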

2. Prepare Dataset

Download and extract the Pascal VOC dataset.

For SSD300, set the VOC_ROOT variable in data/voc0712.py and data/voc07_consistency.py to your dataset path, e.g. /home/username/dataset/VOCdevkit/

For Faster-RCNN-FPN, set the environment variable: export DETECTRON2_DATASETS=/home/username/dataset/VOCdevkit/
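
For reference, the standard VOC archives can be fetched and unpacked like this (a sketch; /home/username/dataset is a placeholder path):

cd /home/username/dataset
# VOC2007 trainval + test, and VOC2012 trainval
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
# all three extract into a common VOCdevkit/ directory
tar -xf VOCtrainval_06-Nov-2007.tar
tar -xf VOCtest_06-Nov-2007.tar
tar -xf VOCtrainval_11-May-2012.tar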

3. Instructions

3.1 Reproduce Table 1

Go into the SSD300 directory, then run the following scripts. Run the supervised baseline first: with --save_interval 12000 it writes weights/ssd300_12000.pth, which the self-labeling scripts resume from.

supervised training (VOC 07 labeled, without extra augmentation):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_ssd.py --save_interval 12000

self-labeling (VOC 07 labeled + VOC 12 unlabeled, without extra augmentation):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo39.py --resume weights/ssd300_12000.pth --ramp --save_interval 12000

supervised training (VOC 0712 labeled, without extra augmentation):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_ssd0712.py --save_interval 12000

supervised training (VOC 07 labeled, with horizontal flip):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_csd_sup2.py --save_interval 12000

self-labeling (VOC 07 labeled + VOC 12 unlabeled, with horizontal flip):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_csd.py --save_interval 12000

supervised training (VOC 0712 labeled, with horizontal flip):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_csd_sup_0712.py --save_interval 12000

supervised training (VOC 07 labeled, with mix-up augmentation):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_isd_sup2.py --save_interval 12000

self-labeling (VOC 07 labeled + VOC 12 unlabeled, with mix-up augmentation):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_only_isd.py --save_interval 12000

supervised training (VOC 0712 labeled, with mix-up augmentation):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_isd_sup_0712.py --save_interval 12000

3.2 Reproduce Table 2

Go into the SSD300 directory, then run the following scripts.

supervised training (VOC 07 labeled, without augmentation):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_ssd.py --save_interval 12000

self-labeling (VOC 07 labeled + VOC 12 unlabeled, confidence threshold=0.5):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo39.py --resume weights/ssd300_12000.pth --ramp --save_interval 12000

self-labeling (VOC 07 labeled + VOC 12 unlabeled, confidence threshold=0.8):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo39-0.8.py --resume weights/ssd300_12000.pth --ramp --save_interval 12000

self-labeling (random FP label, confidence threshold=0.5):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo102.py --resume weights/ssd300_12000.pth --ramp --save_interval 12000

self-labeling (use only TP, confidence threshold=0.5):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo36.py --resume weights/ssd300_12000.pth --ramp --save_interval 12000

self-labeling (use only TP, confidence threshold=0.8):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo36-0.8.py --resume weights/ssd300_12000.pth --ramp --save_interval 12000

self-labeling (use true label, confidence threshold=0.5):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo32.py --resume weights/ssd300_12000.pth --ramp --save_interval 12000

Go into the detectron2 directory. Each of the following runs has its own subdirectory; the supervised run produces output/model_0005999.pth, which the self-labeling runs load via MODEL.WEIGHTS.

supervised training (VOC 07 labeled; run from the VOC07-sup-bs16 directory):

python3 train_net.py --num-gpus 8 --config configs/voc/voc07_voc12.yaml

self-labeling (VOC 07 labeled + VOC 12 unlabeled; run from the VOC07-sup-VOC12-unsup-self-teaching-0.7 directory):

python3 train_net.py --resume --num-gpus 8 --config configs/voc/voc07_voc12.yaml MODEL.WEIGHTS output/model_0005999.pth SOLVER.CHECKPOINT_PERIOD 18000

self-labeling (random FP label; run from the VOC07-sup-VOC12-unsup-self-teaching-0.7-random-wrong directory):

python3 train_net.py --resume --num-gpus 8 --config configs/voc/voc07_voc12.yaml MODEL.WEIGHTS output/model_0005999.pth SOLVER.CHECKPOINT_PERIOD 18000

self-labeling (use true label; run from the VOC07-sup-VOC12-unsup-self-teaching-0.7-only-correct directory):

python3 train_net.py --resume --num-gpus 8 --config configs/voc/voc07_voc12.yaml MODEL.WEIGHTS output/model_0005999.pth SOLVER.CHECKPOINT_PERIOD 18000

3.3 Reproduce Table 3

Go into the SSD300 directory, then run the following scripts.

cross teaching:

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo137.py --resume weights/ssd300_12000.pth --resume2 weights/default/ssd300_12000.2.pth --save_interval 12000 --ramp --ema_rate 0.99 --ema_step 10

cross teaching + mix-up augmentation:

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo151.py --resume weights/ssd300_12000.pth --resume2 weights/default/ssd300_12000.2.pth --save_interval 12000 --ramp --ema_rate 0.99 --ema_step 10

Go into the detectron2/VOC07-sup-VOC12-unsup-cross-teaching directory.

cross teaching:

python3 train_net.py --resume --num-gpus 8 --config configs/voc/voc07_voc12.yaml MODEL.WEIGHTS output/model_0005999.pth SOLVER.CHECKPOINT_PERIOD 18000
