Random Erasing Data Augmentation
================================

Experiments on CIFAR10, CIFAR100 and Fashion-MNIST

Examples

[Figure: sample images erased with black, white, and random pixel values]

This repository contains the source code for the paper "Random Erasing Data Augmentation".
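The operation itself is simple: with probability p, pick a rectangle whose area and aspect ratio are sampled from fixed ranges and overwrite its pixels with random values. Below is a minimal NumPy sketch of this idea; the function name and defaults are illustrative rather than the exact code in this repository, with sl/sh/r1 following the values reported in the paper.

```python
import math
import random

import numpy as np

def random_erasing(img, p=0.5, sl=0.02, sh=0.4, r1=0.3, max_tries=100):
    """Erase a random rectangle of img (H x W x C float array in [0, 1]) with random values."""
    if random.random() > p:
        return img                                    # skip with probability 1 - p
    h, w = img.shape[:2]
    area = h * w
    for _ in range(max_tries):
        target_area = random.uniform(sl, sh) * area   # erased area as a fraction of the image
        aspect_ratio = random.uniform(r1, 1.0 / r1)   # aspect ratio sampled in [r1, 1/r1]
        eh = int(round(math.sqrt(target_area * aspect_ratio)))
        ew = int(round(math.sqrt(target_area / aspect_ratio)))
        if 0 < eh < h and 0 < ew < w:
            top = random.randint(0, h - eh)
            left = random.randint(0, w - ew)
            # fill the selected rectangle with uniform random pixel values
            img[top:top + eh, left:left + ew] = np.random.rand(eh, ew, *img.shape[2:])
            return img
    return img  # no valid rectangle found within max_tries; leave the image unchanged
```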

If you find this code useful in your research, please consider citing:

@inproceedings{zhong2020random,
title={Random Erasing Data Augmentation},
author={Zhong, Zhun and Zheng, Liang and Kang, Guoliang and Li, Shaozi and Yang, Yi},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)},
year={2020}
}

Other re-implementations

[Official Torchvision in Transform] (a usage sketch follows this list)

[Pytorch: Random Erasing for ImageNet]

[Python Augmentor]

[Person_reID CamStyle]

[Person_reID_baseline + Random Erasing + Re-ranking]

[Keras re-implementation]
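As a quick illustration of the official torchvision transform listed above, here is a minimal usage sketch for a CIFAR-style training pipeline. The scale/ratio values are torchvision's defaults and the normalization constants are the commonly used CIFAR10 statistics, not necessarily those used in this repository.

```python
from torchvision import transforms

# RandomErasing operates on tensors, so it must come after ToTensor()
transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.33), ratio=(0.3, 3.3), value='random'),
])
```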

Installation

Requirements: PyTorch (see the PyTorch installation instructions).
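A minimal setup, assuming a standard pip environment (choose the build that matches your CUDA version per the official instructions):

```
pip install torch torchvision
```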

Examples:

CIFAR10

ResNet-20 baseline on CIFAR10: python cifar.py --dataset cifar10 --arch resnet --depth 20

ResNet-20 + Random Erasing on CIFAR10 (the --p flag sets the probability of applying Random Erasing): python cifar.py --dataset cifar10 --arch resnet --depth 20 --p 0.5

CIFAR100

ResNet-20 baseline on CIFAR100: python cifar.py --dataset cifar100 --arch resnet --depth 20

ResNet-20 + Random Erasing on CIFAR100: python cifar.py --dataset cifar100 --arch resnet --depth 20 --p 0.5

Fashion-MNIST

ResNet-20 baseline on Fashion-MNIST: python fashionmnist.py --dataset fashionmnist --arch resnet --depth 20

ResNet-20 + Random Erasing on Fashion-MNIST: python fashionmnist.py --dataset fashionmnist --arch resnet --depth 20 --p 0.5

Other architectures

For ResNet: --arch resnet --depth (20, 32, 44, 56, 110)

For WRN: --arch wrn --depth 28 --widen-factor 10
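For example, combining the options above (assuming the same cifar.py interface shown in the examples), WRN-28-10 with Random Erasing on CIFAR10 would be:

```
python cifar.py --dataset cifar10 --arch wrn --depth 28 --widen-factor 10 --p 0.5
```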

Our results

You can reproduce the results in our paper (test error rate, %):

| Model      | CIFAR10 Base. | CIFAR10 +RE | CIFAR100 Base. | CIFAR100 +RE | Fashion-MNIST Base. | Fashion-MNIST +RE |
|------------|---------------|-------------|----------------|--------------|---------------------|-------------------|
| ResNet-20  | 7.21          | 6.73        | 30.84          | 29.97        | 4.39                | 4.02              |
| ResNet-32  | 6.41          | 5.66        | 28.50          | 27.18        | 4.16                | 3.80              |
| ResNet-44  | 5.53          | 5.13        | 25.27          | 24.29        | 4.41                | 4.01              |
| ResNet-56  | 5.31          | 4.89        | 24.82          | 23.69        | 4.39                | 4.13              |
| ResNet-110 | 5.10          | 4.61        | 23.73          | 22.10        | 4.40                | 4.01              |
| WRN-28-10  | 3.80          | 3.08        | 18.49          | 17.73        | 4.01                | 3.65              |

NOTE: if you use the latest release of Fashion-MNIST, the performance of both the baseline and Random Erasing will be slightly lower than the results reported in our paper. Please refer to the issue.

If you have any questions about this code, please do not hesitate to contact us.

Zhun Zhong

Liang Zheng
