Semi-Supervised Learning for Fine-Grained Classification

Overview

This repo contains the code for the following papers:

  • A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification, Jong-Chyi Su, Zezhou Cheng, and Subhransu Maji, CVPR 2021. [paper, poster, slides]
  • Semi-Supervised Learning with Taxonomic Labels, Jong-Chyi Su and Subhransu Maji, BMVC 2021. [paper, slides]

Preparing Datasets and Splits

We used the following datasets in the paper: Semi-Aves, Semi-Fungi, and Semi-CUB.

In addition, the repository covers the new Semi-iNat dataset, corresponding to the FGVC8 semi-supervised challenge:

  • Semi-iNat: This is a new dataset for the Semi-iNat Challenge at the FGVC8 workshop at CVPR 2021. Different from Semi-Aves, Semi-iNat has more species from different kingdoms, and does not include in-domain or out-of-domain labels. For more details please see the challenge website.

The splits of each of these datasets can be found under data/${dataset}/${split}.txt corresponding to:

  • l_train -- labeled in-domain data
  • u_train_in -- unlabeled in-domain data
  • u_train_out -- unlabeled out-of-domain data
  • u_train -- unlabeled data (combines u_train_in and u_train_out)
  • val -- validation set
  • l_train_val -- labeled data plus validation set (combines l_train and val)
  • test -- test set

Each line in the text file has a filename and the corresponding class label.
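
For example, a minimal sketch of parsing one of these split files in Python (the path below is just one instance of data/${dataset}/${split}.txt):

# Parse a split file into (filename, label) pairs.
# Minimal sketch; adjust the path to your dataset and split.
split_path = "data/semi_aves/l_train.txt"

samples = []
with open(split_path) as f:
    for line in f:
        if not line.strip():
            continue
        filename, label = line.strip().split()
        samples.append((filename, int(label)))

print(len(samples), "samples; first entry:", samples[0])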

Please download the datasets from the corresponding websites. For Semi-Aves, put the data under data/semi_aves. For Semi-Fungi and Semi-CUB, download the images and put them under data/semi_fungi/images and data/cub/images.

Note 1: For the Semi-Fungi experiments reported in the paper, the images are resized to a maximum of 300px on each side (see the resizing sketch after these notes).
Note 2: We report the results on another split of Semi-Aves in the appendix (for cross-validation), but we do not release its labels because doing so would leak the labels of the unlabeled data.
Note 3: We also provide the species names of Semi-Aves under data/semi_aves_species_names.txt, as well as the species names of Semi-Fungi. These names were not shared during the competition.
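
The following is a minimal sketch of the resizing described in Note 1, using Pillow; the file paths are placeholders, not part of the repo:

# Resize an image so that its longer side is at most 300px (Note 1).
# Minimal sketch; paths are placeholders.
from PIL import Image

def resize_max_side(in_path, out_path, max_side=300):
    img = Image.open(in_path)
    # thumbnail keeps the aspect ratio and only ever shrinks the image
    img.thumbnail((max_side, max_side), Image.BILINEAR)
    img.save(out_path)

resize_max_side("data/semi_fungi/images/example.jpg", "data/semi_fungi/images/example.jpg")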

Training and Evaluation (CVPR paper)

We provide the code for all the methods included in the paper, except FixMatch and MoCo. This includes supervised training, self-training, pseudo-labeling (PL), and curriculum pseudo-labeling (CPL). The code is developed based on this PyTorch implementation.

For FixMatch, we used the official TensorFlow code and an unofficial PyTorch implementation to reproduce the results. For MoCo, we used this PyContrast implementation.

To train the model, use the following command:

CUDA_VISIBLE_DEVICES=0 python run_train.py --task ${task} --init ${init} --alg ${alg} --unlabel ${unlabel} --num_iter ${num_iter} --warmup ${warmup} --lr ${lr} --wd ${wd} --batch_size ${batch_size} --exp_dir ${exp_dir} --MoCo ${MoCo} --alpha ${alpha} --kd_T ${kd_T} --trainval

For example, to train a supervised model on the Semi-Aves dataset with in-domain unlabeled data only, initialized from an iNaturalist pre-trained model, run:

CUDA_VISIBLE_DEVICES=0 python run_train.py --task semi_aves --init inat --alg supervised --unlabel in --num_iter 10000 --lr 1e-3 --wd 1e-4 --exp_dir semi_aves_supervised_in --MoCo false --trainval

Note that for the Semi-Aves and Semi-Fungi experiments in the paper, we combined the training and validation sets for training (use the --trainval flag).
For all the hyper-parameters, please see the following shell scripts:

  • exp_sup.sh for supervised training
  • exp_PL.sh for pseudo-labeling
  • exp_CPL.sh for curriculum pseudo-labeling
  • exp_MoCo.sh for MoCo + supervised training
  • exp_distill.sh for self-training and MoCo + self-training

Training and Evaluation (BMVC paper)

In our BMVC paper, we add hierarchical supervision from coarse taxonomic labels on top of semi-supervised learning.
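
As an illustration only (not necessarily the exact formulation used in the paper), coarse-label supervision can be added as an auxiliary cross-entropy term by marginalizing the species probabilities over a species-to-coarse mapping; the mapping tensor and loss weight below are hypothetical:

# Illustrative sketch of adding coarse-label (e.g. genus) supervision
# on top of a species classifier. The mapping and weight are assumptions,
# not the paper's exact formulation.
import torch
import torch.nn.functional as F

def coarse_label_loss(species_logits, coarse_labels, species_to_coarse, eps=1e-8):
    # species_logits:    (B, num_species)
    # coarse_labels:     (B,) coarse-class indices, e.g. genus
    # species_to_coarse: (num_species,) LongTensor mapping each species to its coarse class
    num_coarse = int(species_to_coarse.max().item()) + 1
    probs = F.softmax(species_logits, dim=1)
    coarse_probs = torch.zeros(probs.size(0), num_coarse, device=probs.device)
    coarse_probs.index_add_(1, species_to_coarse, probs)  # marginalize species -> coarse
    return F.nll_loss(torch.log(coarse_probs + eps), coarse_labels)

# Combined objective (w is a hypothetical weight):
# loss = F.cross_entropy(species_logits, species_labels) + w * coarse_label_loss(...)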

To train the model, use the following command:

CUDA_VISIBLE_DEVICES=0 python run_train_hierarchy.py --task ${task} --init ${init} --alg ${alg} --unlabel ${unlabel} --num_iter ${num_iter} --warmup ${warmup} --lr ${lr} --wd ${wd} --batch_size ${batch_size} --exp_dir ${exp_dir} --MoCo ${MoCo} --alpha ${alpha} --kd_T ${kd_T} --level ${level}

The arguments that differ from the command above are:

  • ${level}: choose from {genus, kingdom, phylum, class, order, family, species}
  • ${alg}: choose from {hierarchy, PL_hierarchy, distill_hierarchy}

For the settings and hyper-parameters, please see exp_hierarchy.sh.

Pre-Trained Models

We provide supervised training models, MoCo pre-trained models, and MoCo + supervised training models for both the Semi-Aves and Semi-Fungi datasets. The models can be downloaded from the following URL pattern (a download sketch is given after the list below):

http://vis-www.cs.umass.edu/semi-inat-2021/ssl_evaluation/models/${method}/${dataset}_${initialization}_${unlabel}.pth.tar

  • ${method}: choose from {supervised, MoCo_init, MoCo_supervised}
  • ${dataset}: choose from {semi_aves, semi_fungi}
  • ${initialization}: choose from {scratch, imagenet, inat}
  • ${unlabel}: choose from {in, inout}
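
For example, a minimal sketch of downloading one checkpoint by filling in the URL pattern above (the chosen values are just one valid combination):

# Download a pre-trained checkpoint by filling in the URL pattern above.
# Minimal sketch; pick any valid combination of method/dataset/init/unlabel.
import urllib.request

base = "http://vis-www.cs.umass.edu/semi-inat-2021/ssl_evaluation/models"
method, dataset, init, unlabel = "supervised", "semi_aves", "inat", "in"

url = f"{base}/{method}/{dataset}_{init}_{unlabel}.pth.tar"
urllib.request.urlretrieve(url, f"{dataset}_{init}_{unlabel}.pth.tar")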

You need these models for the self-training methods. For example, the teacher model for self-training is initialized from model/supervised. For MoCo + self-training, the teacher model is initialized from model/MoCo_supervised and the student model from model/MoCo_init.

We also provide a ResNet-50 model pre-trained on iNaturalist-18. This model was trained using this GitHub code.
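
A minimal sketch of loading one of the downloaded checkpoints into a torchvision ResNet-50 follows; the 'state_dict' key, the 'module.' prefix handling, and the number of classes are assumptions and may need adjusting:

# Load a downloaded checkpoint into a torchvision ResNet-50.
# Minimal sketch; the checkpoint key layout is an assumption.
import torch
from torchvision.models import resnet50

checkpoint = torch.load("semi_aves_inat_in.pth.tar", map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)
# strip a possible "module." prefix left over from DataParallel training
state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}

model = resnet50(num_classes=200)  # number of classes is an assumption; match your dataset
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing:", missing, "unexpected:", unexpected)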

Related Challenges

  • Semi-Aves Challenge at the FGVC7 workshop, CVPR 2020
  • Semi-iNat Challenge at the FGVC8 workshop, CVPR 2021

Citation

@inproceedings{su2021realistic,
  author    = {Jong{-}Chyi Su and Zezhou Cheng and Subhransu Maji},
  title     = {A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}

@inproceedings{su2021taxonomic,
  author    = {Jong{-}Chyi Su and Subhransu Maji},
  title     = {Semi-Supervised Learning with Taxonomic Labels},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2021}
}

@article{su2021semi_iNat,
  author    = {Jong{-}Chyi Su and Subhransu Maji},
  title     = {The Semi-Supervised iNaturalist Challenge at the FGVC8 Workshop},
  journal   = {arXiv preprint arXiv:2106.01364},
  year      = {2021}
}

@article{su2021semi_aves,
  author    = {Jong{-}Chyi Su and Subhransu Maji},
  title     = {The Semi-Supervised iNaturalist-Aves Challenge at FGVC7 Workshop},
  journal   = {arXiv preprint arXiv:2103.06937},
  year      = {2021}
}