Distance-Ratio-Based Formulation for Metric Learning

Overview

This repository contains the code for the paper "Distance-Ratio-Based Formulation for Metric Learning", which studies a distance-ratio-based (DR) formulation as an alternative to the softmax-based formulation for metric learning.

Environment

Preparing datasets

CUB

  • Change directory to /filelists/CUB
  • Run source ./download_CUB.sh

You may need to manually download the CUB data from http://www.vision.caltech.edu/visipedia-data/CUB-200-2011/CUB_200_2011.tgz.

mini-ImageNet

  • Change directory to /filelists/miniImagenet
  • Run source ./download_miniImagenet.sh (WARNING: this downloads the full 155 GB ImageNet dataset.)

To download only the miniImageNet dataset (and not the whole 155 GB ImageNet dataset):

  • Download the csv files using the commands in /filelists/miniImagenet/download_miniImagenet.sh.
  • Download the zip file from https://drive.google.com/file/d/0B3Irx3uQNoBMQ1FlNXJsZUdYWEE/view (it is from https://github.com/oscarknagg/few-shot) and unzip it at /filelists/miniImagenet.
  • Run /filelists/miniImagenet/prepare_mini_imagenet.py, which is modified from https://github.com/oscarknagg/few-shot/blob/master/scripts/prepare_mini_imagenet.py.
  • Run /filelists/miniImagenet/write_miniImagenet_filelist2.py.

Train

Run python ./train.py --dataset [DATASETNAME] --model [BACKBONENAME] --method [METHODNAME] --train_aug [--OPTIONARG]

To also save training analysis results, run, for example, python ./train.py --dataset miniImagenet --model Conv4 --method protonet_S --train_aug --n_shot 5 --train_n_way 5 --test_n_way 5 > record/miniImagenet_Conv4_proto_S_5s5w.txt

train_models.ipynb contains the code for our experiments.
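For readers unfamiliar with the few-shot options above (--train_n_way/--test_n_way classes, --n_shot support examples per class), the following is a small, self-contained sketch of what an N-way K-shot episode is. It is illustrative only and is not how this repository's data loaders are implemented; the function name and the n_query default are assumptions.

    # Illustrative only: a minimal N-way K-shot episode sampler, not the repository's loader.
    import random
    from collections import defaultdict

    def sample_episode(labels, n_way=5, n_shot=5, n_query=15):
        """Pick n_way classes; for each, n_shot support indices and n_query query indices."""
        by_class = defaultdict(list)
        for idx, y in enumerate(labels):
            by_class[y].append(idx)
        classes = random.sample(sorted(by_class), n_way)            # the N "ways"
        support, query = [], []
        for c in classes:
            chosen = random.sample(by_class[c], n_shot + n_query)   # K shots plus queries per class
            support += chosen[:n_shot]
            query += chosen[n_shot:]
        return classes, support, query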

Save features

Save the features extracted before the classification layer to speed up testing.

For instance, run python ./save_features.py --dataset miniImagenet --model Conv4 --method protonet_S --train_aug --n_shot 5 --train_n_way 5
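Conceptually, saving features means running each image through the backbone once and caching the penultimate-layer output, so that test-time evaluation only needs cheap computations on cached vectors. The following is a minimal, hypothetical sketch of that idea, not the repository's actual save_features.py; the function signature, HDF5 layout, and dataset names ("all_feats", "all_labels") are assumptions for illustration.

    # Hypothetical sketch of feature caching (not the repository's save_features.py).
    import h5py
    import torch

    def cache_features(backbone, loader, out_path="features.hdf5", device="cuda"):
        """Run the frozen backbone once over the data and store penultimate-layer features."""
        backbone.eval().to(device)
        feats, labels = [], []
        with torch.no_grad():
            for images, targets in loader:
                f = backbone(images.to(device))   # features before the classification layer
                feats.append(f.cpu())
                labels.append(targets)
        with h5py.File(out_path, "w") as h5:
            h5.create_dataset("all_feats", data=torch.cat(feats).numpy())
            h5.create_dataset("all_labels", data=torch.cat(labels).numpy())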

Test

For example, run python ./test.py --dataset miniImagenet --model Conv4 --method protonet_S --train_aug --n_shot 5 --train_n_way 5 --test_n_way 5

Analyze training

Run /record/analyze_training_1shot.ipynb and /record/analyze_training_5shot.ipynb to analyze the training results (norm ratio, con-alpha ratio, div-alpha ratio, and con-div ratio).

Results

The test results will be recorded in ./record/results.txt

Visual comparison of softmax-based and distance-ratio-based (DR) formulation

The following images visualize the confidence scores for the red class when the three points are the representative points of the red, green, and blue classes.

(Figures: softmax-based formulation and DR formulation.)
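As a rough, self-contained illustration of the difference (a minimal sketch, not the repository's implementation; the Euclidean distance and the exponent alpha below are assumptions for illustration, see the paper for the exact definitions): the softmax-based confidence applies a softmax to negative distances to the class prototypes, whereas a distance-ratio-based confidence normalizes inverse distances raised to a power, so it depends only on the ratios of distances.

    # Minimal sketch comparing a softmax-based confidence with a distance-ratio-based (DR)
    # confidence. Not the repository's implementation; distance metric and alpha are assumed.
    import torch

    def softmax_confidence(queries, prototypes):
        # Softmax over negative distances to the class prototypes.
        d = torch.cdist(queries, prototypes)            # (n_queries, n_classes)
        return torch.softmax(-d, dim=1)

    def dr_confidence(queries, prototypes, alpha=2.0, eps=1e-12):
        # Normalized inverse distances: unchanged when all distances are scaled by a constant.
        d = torch.cdist(queries, prototypes).clamp_min(eps)
        inv = d.pow(-alpha)
        return inv / inv.sum(dim=1, keepdim=True)

    # Confidence for the "red" class (column 0) at one query point,
    # with three class-representing points (red, green, blue).
    prototypes = torch.tensor([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
    query = torch.tensor([[0.5, 0.5]])
    print(softmax_confidence(query, prototypes)[:, 0])
    print(dr_confidence(query, prototypes)[:, 0])

Under this sketch, scaling all embeddings by a constant factor leaves the DR confidences unchanged but changes the softmax-based ones.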

References and licence

Our repository is forked from the original repository (https://github.com/wyharveychen/CloserLookFewShot), and the code is under the same licence (LICENSE.txt) as the original repository, except for the following.

The /filelists/miniImagenet/prepare_mini_imagenet.py file is modified from https://github.com/oscarknagg/few-shot. It is under a different licence, given in /filelists/miniImagenet/prepare_mini_imagenet.LICENSE.

Copyright and licence notices (including the copyright notice in /data/additional_transforms.py) are from the original repositories (https://github.com/wyharveychen/CloserLookFewShot and https://github.com/oscarknagg/few-shot).

Modifications

List of modified or added files (or folders) compared to the original repository (https://github.com/wyharveychen/CloserLookFewShot):

  • io_utils.py
  • backbone.py
  • configs.py
  • train.py
  • save_features.py
  • test.py
  • utils.py
  • README.md
  • train_models.ipynb
  • /methods/__init__.py
  • /methods/protonet_S.py
  • /methods/meta_template.py
  • /methods/protonet_DR.py
  • /methods/softmax_1nn.py
  • /methods/DR_1nn.py
  • /models/
  • /filelists/miniImagenet/prepare_mini_imagenet.py
  • /filelists/miniImagenet/prepare_mini_imagenet.LICENSE
  • /filelists/miniImagenet/write_miniImagenet_filelist2.py
  • /record/
  • /record/preprocessed/
  • /record/analyze_training_1shot.ipynb
  • /record/analyze_training_5shot.ipynb

My (Hyeongji Kim's) main contributions (modifications) are in /methods/meta_template.py, /methods/protonet_DR.py, /methods/softmax_1nn.py, /methods/DR_1nn.py, /record/analyze_training_1shot.ipynb, and /record/analyze_training_5shot.ipynb.
