Learning to Self-Train for Semi-Supervised Few-Shot Classification

This repository contains the TensorFlow implementation for the NeurIPS 2019 paper "Learning to Self-Train for Semi-Supervised Few-Shot Classification".

Check the few-shot classification leaderboard.

Summary

  • Installation
  • Project Architecture
  • Running Experiments
  • Hyperparameters and Options
  • Performance
  • Citation
  • Acknowledgements

Installation

To run the code in this repository, we recommend installing Python 2.7 or 3.5 and TensorFlow 1.3.0 with Anaconda.

You may download Anaconda and read the installation instructions on the official website: https://www.anaconda.com/download/

Create a new environment and install TensorFlow in it:

conda create --name lst-tf python=2.7
conda activate lst-tf
conda install tensorflow-gpu=1.3.0

Install other requirements:

pip install scipy tqdm opencv-python pillow matplotlib
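
To verify the environment (a quick sanity check, not part of the original setup steps), you can confirm that TensorFlow imports with the expected version and can see your GPU:

python -c "import tensorflow as tf; print(tf.__version__)"
python -c "from tensorflow.python.client import device_lib; print([d.name for d in device_lib.list_local_devices()])"

The first command should print 1.3.0; the second should list a GPU device entry if the GPU build is working.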

Clone this repository:

git clone https://github.com/xinzheli1217/learning-to-self-train.git 
cd learning-to-self-train

Project Architecture

.
├── data_generator              # dataset generator 
|   └── meta_data_generator.py  # data generator for meta-train phase
├── models                      # TensorFlow model files 
|   ├── models.py               # ResNet-12 CNN class
|   └── meta_model_LST.py       # semi-supervised meta-train model class
├── trainer                     # TensorFlow trainer files  
|   └── meta_LST.py             # semi-supervised meta-train trainer class
├── utils                       # a series of tools used in this repo
|   └── misc.py                 # miscellaneous tool functions
| 
├── data                        # the folder containing datasets for experiments
├── pretrain_weights_dir        # the folder containing MTL pre-training weights
├── weights_saving_dir          # the folder containing meta-training weights
├── test_output_dir             # the folder containing meta-testing files
├── filenames_and_labels        # the folder containing image file paths and labels for experiments
|
├── exp_train.py                # the python file with main function and parameter settings for meta-training
└── exp_test.py                 # the python file with main function and parameter settings for meta-testing

Running Experiments

First, download our processed images: miniImagenet [Download Page] or tieredImagenet [Download Page], and move the unzipped folder to ./data. Then download the pre-trained models: miniImagenet [Download Page] or tieredImagenet [Download Page], and move the unzipped folder to ./pretrain_weights_dir.
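
For example, the setup might look like this (the archive and folder names are illustrative; use the names from the download pages):

unzip miniImagenet.zip && mv miniImagenet ./data/
unzip mini_pretrain_weights.zip && mv mini_pretrain_weights ./pretrain_weights_dir/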

Training from Pre-Trained Models

Run the semi-supervised meta-train phase (e.g., miniImageNet, 1-shot):

python exp_train.py --shot_num=1 --dataset='miniImagenet' --pretrain_class_num=64 --nb_ul_samples=10 --metatrain_iterations=15000 --exp_name='LST_mini_1_shot'

Run the semi-supervised meta-test phase (e.g., miniImageNet, 1-shot):

python exp_test.py --shot_num=1 --dataset='miniImagenet' --pretrain_class_num=64 --use_distractors=False --nb_ul_samples=100 --unfiles_num=10 --test_iter=15000 --recurrent_stage_nums=6 --nums_in_folders=30 --hard_selection=20 --exp_name='LST_mini_1_shot' 
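
To evaluate in the with-distractors ("w/D") setting reported below, you would additionally enable distractor classes. A hedged sketch — the num_dis value and experiment name here are illustrative, not necessarily the paper's settings:

python exp_test.py --shot_num=1 --dataset='miniImagenet' --pretrain_class_num=64 --use_distractors=True --num_dis=3 --nb_ul_samples=100 --unfiles_num=10 --test_iter=15000 --recurrent_stage_nums=6 --nums_in_folders=30 --hard_selection=20 --exp_name='LST_mini_1_shot_wD'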

Hyperparameters and Options

The main hyperparameters used in the experiments can be edited in exp_train.py and exp_test.py for the meta-train and meta-test phases, respectively. There are two kinds of hyperparameters: (1) common hyperparameters shared by meta-train and meta-test, and (2) test-specific hyperparameters used for the recurrent self-training process in meta-test.

  • Common hyperparameters:

    • way_num number of classes
    • shot_num number of examples per class
    • dataset dataset used in the experiment (miniImagenet or tieredImagenet)
    • pretrain_class_num number of meta-train classes
    • exp_name name for the current experiment
    • meta_batch_size number of tasks sampled per meta-update in meta-train phase
    • base_lr step size alpha for inner gradient update
    • meta_lr the meta learning rate for SS and initial model parameters
    • min_meta_lr the min meta learning rate for all meta-parameters
    • swn_lr the meta learning rate for SWN
    • nb_ul_samples number of unlabeled examples per class
    • re_train_epoch_num number of re-training inner gradient updates
    • train_base_epoch_num number of total inner gradient updates during train (meta-train only)
    • test_base_epoch_num number of total inner gradient updates during test (meta-test only)
  • Test-specific hyperparameters:

    • use_distractors if using distractor classes during meta-test
    • num_dis number of distracting classes used for meta-testing
    • unfiles_num number of unlabeled-sample files used in the experiment (each file contains 10 unlabeled samples per class)
    • recurrent_stage_nums number of recurrent stages used during meta-test
    • local_update_num number of inner gradient updates used in each recurrent stage
    • nums_in_folders number of unlabeled samples (per class) used in each recurrent stage
    • hard_selection number of remaining samples (per class) after applying hard-selection

If you want to change other settings, please see the comments and descriptions in exp_train.py and exp_test.py.
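
As an illustration of combining these options, here is a hedged sketch of a 5-shot tieredImagenet meta-train run; the overridden values (meta_batch_size, base_lr) are examples, not the paper's settings:

python exp_train.py --shot_num=5 --dataset='tieredImagenet' --nb_ul_samples=10 --meta_batch_size=2 --base_lr=0.01 --metatrain_iterations=15000 --exp_name='LST_tiered_5_shot'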

Performance

(%)       mini        tiered      mini (w/D)   tiered (w/D)
1-shot    70.1 ± 1.9  77.7 ± 1.6  64.1 ± 1.9   73.5 ± 1.6
5-shot    78.7 ± 0.8  85.2 ± 0.8  77.4 ± 1.8   83.4 ± 0.8

(w/D: meta-test with distractor classes.)

Citation

Please cite our paper if it is helpful to your work:

@inproceedings{li2019lst,
  title={Learning to Self-Train for Semi-Supervised Few-Shot Classification},
  author = {Li, Xinzhe and Sun, Qianru and Liu, Yaoyao and Zhou, Qin and Zheng, Shibao and Chua, Tat-Seng and Schiele, Bernt},
  booktitle={NeurIPS},
  year={2019}
}

Acknowledgements

Our implementation builds on source code from other open-source repositories and users.
