
Boosting Co-teaching with Compression Regularization for Label Noise

Overview

Nested-Co-teaching

PyTorch implementation of the paper "Boosting Co-teaching with Compression Regularization for Label Noise" (CVPR 2021 L2ID workshop)

[PDF]

If our project is helpful for your research, please consider citing:

@inproceedings{chen2021boosting, 
	  title={Boosting Co-teaching with Compression Regularization for Label Noise}, 
	  author={Chen, Yingyi and Shen, Xi and Hu, Shell Xu and Suykens, Johan AK}, 
	  booktitle={CVPR Learning from Limited and Imperfect Data (L2ID) workshop}, 
	  year={2021} 
	}

Our models can be trained on a single GeForce GTX 1080Ti GPU (12G). The code has been tested with PyTorch 1.7.1.

Table of Contents

1. Toy Results
2. Results on Clothing1M and ANIMAL-10N
3. Datasets
4. Train
5. Evaluation

1. Toy Results

The nested regularization allows us to learn an ordered representation, which is useful for combating noisy labels. In this toy example, we aim at learning a projection from X to Y from noisy pairs. With the nested regularization added, the most informative part of the reconstruction is stored in the first few channels (a minimal code sketch of this mechanism follows the figures below).

Toy example animations (GIFs): Baseline (same MLP) | Nested200, 1st channel | Nested200, first 10 channels | Nested200, first 100 channels.
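
To make the mechanism concrete, below is a minimal PyTorch sketch of a nested (compression) regularization layer: a truncation index is sampled at every forward pass and all channels beyond it are zeroed, so low-index channels are trained far more often and end up carrying the most informative part of the reconstruction. The module name and the geometric-style distribution over indices are assumptions made for this illustration, not the repository's exact implementation.

```python
import torch
import torch.nn as nn


class NestedChannelMask(nn.Module):
    """Illustrative nested-dropout mask (assumed layer, not the repo's code):
    keep only the first k channels, with k resampled at every forward pass.
    Low-index channels are trained most often, inducing an ordered representation."""

    def __init__(self, num_channels, rho=0.05):
        super().__init__()
        self.num_channels = num_channels
        # Geometric-style distribution over the truncation index (assumption).
        idx = torch.arange(num_channels, dtype=torch.float32)
        probs = rho * (1.0 - rho) ** idx
        self.register_buffer("probs", probs / probs.sum())

    def forward(self, x):
        # x: (batch, C) or (batch, C, H, W); no masking at evaluation time.
        if not self.training:
            return x
        k = int(torch.multinomial(self.probs, 1)) + 1  # keep channels [0, k)
        mask = torch.zeros(self.num_channels, device=x.device)
        mask[:k] = 1.0
        return x * mask.view((1, -1) + (1,) * (x.dim() - 2))
```

In a setup like the toy example, such a layer would sit between the encoder and the reconstruction head; keeping only the first 1, 10, or 100 channels at test time then gives the progressively better reconstructions shown above.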

2. Results on Clothing1M and ANIMAL-10N

Clothing1M [Xiao et al., 2015]

  • We report the average accuracy and standard deviation over three runs (%) on the test set of Clothing1M [Xiao et al., 2015]. Results marked with "*" use either a balanced subset or a balanced loss.
| Methods | Accuracy (%) | result_ref/download |
| --- | --- | --- |
| CE | 67.2 | [Wei et al., 2020] |
| F-correction [Patrini et al., 2017] | 68.9 | [Wei et al., 2020] |
| Decoupling [Malach and Shalev-Shwartz, 2017] | 68.5 | [Wei et al., 2020] |
| Co-teaching [Han et al., 2018] | 69.2 | [Wei et al., 2020] |
| Co-teaching+ [Yu et al., 2019] | 59.3 | [Wei et al., 2020] |
| JoCoR [Wei et al., 2020] | 70.3 | -- |
| JO [Tanaka et al., 2018] | 72.2 | -- |
| Dropout* [Srivastava et al., 2014] | 72.8 | -- |
| PENCIL* [Yi and Wu, 2019] | 73.5 | -- |
| MLNT [Li et al., 2019] | 73.5 | -- |
| PLC* [Zhang et al., 2021] | 74.0 | -- |
| DivideMix* [Li et al., 2020] | 74.8 | -- |
| Nested* (Ours) | 73.1 ± 0.3 | model |
| Nested + Co-teaching* (Ours) | 74.9 ± 0.2 | model |

ANIMAL-10N [Song et al., 2019]

  • We report the average test set accuracy and standard deviation over three runs (%) on ANIMAL-10N [Song et al., 2019].
| Methods | Accuracy (%) | result_ref/download |
| --- | --- | --- |
| CE | 79.4 ± 0.1 | [Song et al., 2019] |
| Dropout [Srivastava et al., 2014] | 81.3 ± 0.3 | -- |
| SELFIE [Song et al., 2019] | 81.8 ± 0.1 | -- |
| PLC [Zhang et al., 2021] | 83.4 ± 0.4 | -- |
| Nested (Ours) | 81.3 ± 0.6 | model |
| Nested + Co-teaching (Ours) | 84.1 ± 0.1 | model |

3. Datasets

Clothing1M

To download the Clothing1M dataset [Xiao et al., 2015], please refer to here. Once it is downloaded, put it into ./data/. The directory structure should be:

./data/Clothing1M
├── noisy_train
├── clean_val
└── clean_test

After unzipping, generate two random noisy Clothing1M subsets for training:

cd data/
# generate two random subsets for training
python3 clothing1M_rand_subset.py --name noisy_rand_subtrain1 --data-dir ./Clothing1M/ --seed 123

python3 clothing1M_rand_subset.py --name noisy_rand_subtrain2 --data-dir ./Clothing1M/ --seed 321

Please refer to data/gen_data.sh for more details.
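
For intuition only, drawing a seeded random subset from an image-folder layout can be sketched as follows. This is just an illustration of the idea; the actual options and output format are those of data/clothing1M_rand_subset.py, and the function name, fraction, and extension filter below are assumptions.

```python
import random
import shutil
from pathlib import Path


def make_random_subset(src_dir, dst_dir, fraction=0.5, seed=123):
    """Copy a random fraction of the images in each class folder of src_dir
    into dst_dir, reproducibly for a given seed (illustrative sketch only)."""
    rng = random.Random(seed)
    for class_dir in sorted(Path(src_dir).iterdir()):
        if not class_dir.is_dir():
            continue
        images = sorted(class_dir.glob("*.jpg"))
        rng.shuffle(images)
        out_dir = Path(dst_dir) / class_dir.name
        out_dir.mkdir(parents=True, exist_ok=True)
        for img in images[: int(len(images) * fraction)]:
            shutil.copy(img, out_dir / img.name)
```

Using two different seeds (123 and 321 in the commands above) yields two different random training subsets, one per network.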

ANIMAL-10N

To download the ANIMAL-10N dataset [Song et al., 2019], please refer to here. It includes one training set and one test set. Once it is downloaded, put it into ./data/. The directory structure should be:

./data/Animal10N/
├── train
└── test

4. Train

4.1. Stage One : Training Nested Dropout Networks

We first train two Nested Dropout networks separately to provide reliable base networks for the subsequent stage. You can run the training of this stage as follows:

  • For training networks on Clothing1M (ResNet-18); you can also train baseline/dropout networks for comparison. More details are provided in nested/run_clothing1m.sh.
cd nested/ 
# train one Nested network
python3 train_resnet.py --train-dir ../data/Clothing1M/noisy_rand_subtrain1/ --val-dir ../data/Clothing1M/clean_val/ --dataset Clothing1M --arch resnet18 --lrSchedule 5 --lr 0.02 --nbEpoch 30 --batchsize 448 --nested 100 --pretrained --freeze-bn --out-dir ./checkpoints/Cloth1M_nested100_lr2e-2_bs448_freezeBN_imgnet_model1 --gpu 0
  • For training networks on ANIMAL-10N (VGG-19+BN); you can also train baseline/dropout networks for comparison. More details are provided in nested/run_animal10n.sh.
cd nested/ 
python3 train_vgg.py --train-dir ../data/Animal10N/train/ --val-dir ../data/Animal10N/test/ --dataset Animal10N --arch vgg19-bn --lr-gamma 0.2 --batchsize 128 --warmUpIter 6000 --nested1 100 --nested2 100 --alter-train --out-dir ./checkpoints_animal10n/Animal10N_alter_nested100_100_vgg19bn_lr0.1_warm6000_bs128_model1 --gpu 0

4.2. Stage Two : Fine-tuning with Co-teaching

In this stage, the two networks trained in stage one are further fine-tuned with Co-teaching (a sketch of the co-teaching update step is given at the end of this subsection). You can run the training of this stage as follows:

  • For fine-tuning with Co-teaching on Clothing1M (ResNet-18):
cd co_teaching_resnet/ 
python3 main.py --train-dir ../data/Clothing1M/noisy_rand_subtrain1/ --val-dir ../data/Clothing1M/clean_val/ --dataset Clothing1M --lrSchedule 5 --nGradual 0 --lr 0.002 --nbEpoch 30 --warmUpIter 0 --batchsize 448 --freeze-bn --forgetRate 0.3 --out-dir ./finetune_ckpt/Cloth1M_nested100_lr2e-3_bs448_freezeBN_fgr0.3_pre_nested100_100 --resumePthList ../nested/checkpoints/Cloth1M_nested100_lr2e-2_bs448_imgnet_freezeBN_model1_Acc0.735_K12 ../nested/checkpoints/Cloth1M_nested100_lr2e-2_bs448_imgnet_freezeBN_model2_Acc0.733_K15 --nested 100 --gpu 0

The two Nested ResNet-18 networks trained in stage one can be downloaded here: ckpt1, ckpt2. We also provide commands for training Co-teaching from scratch for comparisons in co_teaching_resnet/run_clothing1m.sh.

  • For fine-tuning with Co-teaching on ANIMAL-10N (VGG-19+BN):
cd co_teaching_vgg/ 
python3 main.py --train-dir ../data/Animal10N/train/ --val-dir ../data/Animal10N/test/ --dataset Animal10N --arch vgg19-bn --lrSchedule 5 --nGradual 0 --lr 0.004 --nbEpoch 30 --warmUpIter 0 --batchsize 128 --freeze-bn --forgetRate 0.2 --out-dir ./finetune_ckpt/Animal10N_alter_nested100_lr4e-3_bs128_freezeBN_fgr0.2_pre_nested100_100_nested100_100 --resumePthList ../nested/checkpoints_animal10n/new_code_nested/Animal10N_alter_nested100_100_vgg19bn_lr0.1_warm6000_bs128_model1_Acc0.803_K14 ../nested/checkpoints_animal10n/new_code_nested/Animal10N_alter_nested100_100_vgg19bn_lr0.1_warm6000_bs128_model2_Acc0.811_K14 --nested1 100 --nested2 100 --alter-train --gpu 0

The two Nested VGG-19+BN networks trained in stage one can be downloaded here: ckpt1, ckpt2. We also provide commands for training Co-teaching from scratch for comparisons in co_teaching_vgg/run_animal10n.sh.
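
For reference, the heart of Co-teaching fine-tuning is the small-loss exchange between the two networks: each network ranks the current batch by its own per-sample loss, keeps the (1 − forget rate) fraction with the smallest losses, and those samples are used to update its peer. The sketch below shows one such update step under these assumptions; the function name and update order are illustrative, not the actual code in co_teaching_resnet/ or co_teaching_vgg/.

```python
import torch
import torch.nn.functional as F


def co_teaching_step(net1, net2, opt1, opt2, images, labels, forget_rate=0.3):
    """One illustrative co-teaching update: each network selects its small-loss
    samples and the *other* network is trained on them."""
    num_keep = int((1.0 - forget_rate) * labels.size(0))

    with torch.no_grad():
        loss1 = F.cross_entropy(net1(images), labels, reduction="none")
        loss2 = F.cross_entropy(net2(images), labels, reduction="none")
        idx1 = torch.argsort(loss1)[:num_keep]  # samples net1 trusts
        idx2 = torch.argsort(loss2)[:num_keep]  # samples net2 trusts

    # net1 learns from the samples selected by net2, and vice versa.
    opt1.zero_grad()
    F.cross_entropy(net1(images[idx2]), labels[idx2]).backward()
    opt1.step()

    opt2.zero_grad()
    F.cross_entropy(net2(images[idx1]), labels[idx1]).backward()
    opt2.step()
```

The --forgetRate flag in the commands above plays the role of forget_rate here (0.3 for Clothing1M, 0.2 for ANIMAL-10N).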

5. Evaluation

To evaluate the models' ability to combat label noise, we compute classification accuracy on the provided clean test set.

5.1. Stage One : Nested Dropout Networks

Evaluation of the networks from stage one is run as follows:

cd nested/ 
# for networks trained on Clothing1M
python3 test.py --test-dir ../data/Clothing1M/clean_test/ --dataset Clothing1M --arch resnet18 --resumePthList ./checkpoints/Cloth1M_nested100_lr2e-2_bs448_imgnet_freezeBN_model1_Acc0.735_K12 --KList 12 --gpu 0

More details can be found in nested/run_test.sh. Note that "_K12" in the model's name denotes the index of the optimal K; the optimal number of channels for that model is actually 13 (number of optimal channels = channel index + 1).
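
Concretely, evaluating at the optimal K amounts to keeping only the first K + 1 feature channels before the classifier and zeroing the rest. A tiny hypothetical helper illustrating this (not the repository's test.py):

```python
import torch


def truncate_features(features, k_index):
    """Keep channels 0..k_index (inclusive) of a feature tensor and zero the
    rest; e.g. k_index=12 keeps the first 13 channels (illustration only)."""
    mask = torch.zeros(features.size(1), device=features.device)
    mask[: k_index + 1] = 1.0
    return features * mask.view((1, -1) + (1,) * (features.dim() - 2))
```

With --KList 12, the evaluation would correspond to k_index=12, i.e. the 13 channels mentioned above.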

5.2. Stage Two : Fine-tuning Co-teaching Networks

Evaluation of the networks from stage two is run as follows.

  • Networks trained on Clothing1M:
cd co_teaching_resnet/ 
python3 test.py --test-dir ../data/Clothing1M/clean_test/ --dataset Clothing1M --arch resnet18 --resumePthList ./finetune_ckpt/Cloth1M_nested100_lr2e-3_bs448_freezeBN_fgr0.3_pre_nested100_100_model2_Acc0.749_K24 --KList 24 --gpu 0

More details can be found in co_teaching_resnet/run_test.sh.

  • Networks trained on ANIMAL-10N:
cd co_teaching_vgg/ 
python3 test.py --test-dir ../data/Animal10N/test/ --dataset Animal10N --resumePthList ./finetune_ckpt/Animal10N_nested100_lr4e-3_bs128_freezeBN_fgr0.2_pre_nested100_100_nested100_100_model1_Acc0.842_K12 --KList 12 --gpu 0

More details can be found in co_teaching_vgg/run_test.sh.
