MatchGAN: A Self-supervised Semi-supervised Conditional Generative Adversarial Network

This repository is the official implementation of MatchGAN: A Self-supervised Semi-supervised Conditional Generative Adversarial Network.


This repository is built upon the framework of StarGAN.

1. Cloning the repository

Clone the repository and navigate to it.

$ git clone https://github.com/justin941208/MatchGAN.git
$ cd MatchGAN/

2. Installing requirements

Core dependencies such as PyTorch should be installed separately by following the instructions on their respective websites.

Additional requirements can be installed by running:

$ pip install -r requirements.txt

To evaluate MatchGAN using GAN-train and GAN-test, the following files should be downloaded and unzipped directly under MatchGAN/.

3. Downloading the datasets

To download the CelebA dataset:

$ bash download.sh

In addition, the partition file list_eval_partition.txt should be downloaded from the official CelebA Google Drive and placed directly under the directory ./data/celeba/.
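
For reference, each line of list_eval_partition.txt maps one image to a split (0 = train, 1 = validation, 2 = test). The sketch below simply reads that mapping, assuming the file sits in the location described above; it is not part of the repository's code.

# Minimal sketch of reading list_eval_partition.txt. The file follows CelebA's
# official format: one "<image_name> <partition>" pair per line, where
# 0 = train, 1 = validation, 2 = test.
from collections import defaultdict

split_names = {0: 'train', 1: 'val', 2: 'test'}
splits = defaultdict(list)

with open('data/celeba/list_eval_partition.txt') as f:
    for line in f:
        filename, partition = line.split()
        splits[split_names[int(partition)]].append(filename)

print({name: len(files) for name, files in splits.items()})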

To download the RaFD dataset, one must request access to the dataset from the Radboud Faces Database website. Once all the image files are obtained, they need to be placed under the subdirectory ./data/RaFD/data. To preprocess the dataset, run the following command:

$ python preprocess_rafd.py

This will crop all images to 256x256 (centred on face) and split the data into 90% for training and 10% for testing.
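
The script handles all of this automatically. Purely as an illustration of the two steps, the sketch below performs a plain centred square crop resized to 256x256 followed by a 90/10 split; the output layout and the crop heuristic are assumptions, not the script's actual behaviour (which centres the crop on the face).

# Illustrative sketch only -- preprocess_rafd.py performs the real preprocessing.
# The input directory follows the instructions above; the output directories and
# the simple centre-crop logic are assumptions.
import os, random
from PIL import Image

src_dir = 'data/RaFD/data'
images = sorted(f for f in os.listdir(src_dir) if f.lower().endswith(('.jpg', '.png')))
random.seed(0)
random.shuffle(images)

split = int(0.9 * len(images))            # 90% train / 10% test
subsets = {'train': images[:split], 'test': images[split:]}

for subset, files in subsets.items():
    out_dir = os.path.join('data/RaFD', subset)
    os.makedirs(out_dir, exist_ok=True)
    for name in files:
        img = Image.open(os.path.join(src_dir, name))
        w, h = img.size
        side = min(w, h)                  # largest centred square
        left, top = (w - side) // 2, (h - side) // 2
        img = img.crop((left, top, left + side, top + side)).resize((256, 256))
        img.save(os.path.join(out_dir, name))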

4. Training

The command format for training MatchGAN is given by:

$ ./run [dataset] [mode] [labelled percentage] [device]

For example, to train MatchGAN on CelebA with 5% of the training examples labelled on GPU 0, run the following command:

$ ./run celeba train 5 0

To train on RaFD, simply replace "celeba" with "rafd".
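
The [labelled percentage] argument determines how much of the training set keeps its labels, with the remainder treated as unlabelled. The sketch below only illustrates that idea; the repository's own sampling logic may differ.

# Illustrative sketch of drawing a labelled subset for semi-supervised training.
# The actual split used by MatchGAN is handled inside the training code.
import random

def split_labelled(indices, labelled_percentage, seed=0):
    """Return (labelled, unlabelled) index lists for the given percentage."""
    rng = random.Random(seed)
    indices = list(indices)
    rng.shuffle(indices)
    n_labelled = int(len(indices) * labelled_percentage / 100)
    return indices[:n_labelled], indices[n_labelled:]

# e.g. roughly 200k CelebA training images with 5% of them labelled
labelled, unlabelled = split_labelled(range(200000), 5)
print(len(labelled), len(unlabelled))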

5. Testing and evaluating

Continuing the CelebA example above, test MatchGAN by running the command

$ ./run celeba test 5 0

This will generate synthetic images from the test set and save them to the directory ./matchgan_celeba/results.

To evaluate the model using Frechet Inception Distance (FID), Inception Score (IS), and GAN-test, run the following command:

$ ./run celeba eval 5 0
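
For reference, FID measures the distance between the Inception-feature statistics of real and generated images: FID = ||mu_r - mu_g||^2 + Tr(Sigma_r + Sigma_g - 2(Sigma_r Sigma_g)^(1/2)). Below is a minimal sketch of that computation, assuming the activations have already been extracted with an Inception network; it is not the repository's evaluation code.

# Minimal sketch of the Frechet Inception Distance between two sets of
# activations with shape [n_samples, feature_dim]. Extracting the activations
# with an Inception network is omitted here.
import numpy as np
from scipy import linalg

def frechet_distance(act_real, act_fake):
    mu1, mu2 = act_real.mean(axis=0), act_fake.mean(axis=0)
    sigma1 = np.cov(act_real, rowvar=False)
    sigma2 = np.cov(act_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real                 # discard tiny imaginary parts
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean)

# Example call with random features (64-dim here; Inception pool features are 2048-dim):
rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(1000, 64)), rng.normal(size=(1000, 64))))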

The following commands train an external classifier using the synthetic images generated by MatchGAN and then evaluate GAN-train:

$ ./run celeba synth 5 0
$ ./run celeba synth_test 5 0
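
GAN-train and GAN-test swap the roles of real and synthetic data: GAN-train trains a classifier on MatchGAN's synthetic images and measures its accuracy on real test images, whereas GAN-test trains on real images and measures accuracy on synthetic ones. The schematic sketch below only makes the two protocols explicit; the helper names are hypothetical placeholders, and the synth / synth_test modes above implement the real pipeline.

# Schematic sketch of the GAN-train and GAN-test protocols. The data tuples and
# the train_classifier callable are hypothetical placeholders.

def accuracy(classifier, images, labels):
    """Fraction of images the classifier labels correctly."""
    return sum(classifier(x) == y for x, y in zip(images, labels)) / len(labels)

def gan_train_score(train_classifier, synthetic_train, real_test):
    # GAN-train: train on synthetic images, evaluate on real test images.
    clf = train_classifier(*synthetic_train)
    return accuracy(clf, *real_test)

def gan_test_score(train_classifier, real_train, synthetic_test):
    # GAN-test: train on real images, evaluate on synthetic images.
    clf = train_classifier(*real_train)
    return accuracy(clf, *synthetic_test)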

6. Pretrained model

Pretrained models of MatchGAN (generator only) can be downloaded from this link. To test or evaluate these models, the checkpoint file 200000-G.ckpt should be placed under the directory ./matchgan_celeba/models (for CelebA) or ./matchgan_rafd/models (for RaFD) before running the relevant commands detailed above.
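
The test and eval commands above load the checkpoint automatically. If you want to inspect it manually, the sketch below assumes the file is a plain PyTorch state dict (the usual way StarGAN-style code saves its generator).

# Minimal sketch of inspecting the pretrained generator checkpoint.
import torch

state = torch.load('./matchgan_celeba/models/200000-G.ckpt',
                   map_location=torch.device('cpu'))
print(type(state), len(state))            # typically an OrderedDict of parameter tensors
for name, tensor in list(state.items())[:5]:
    print(name, tuple(tensor.shape))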

7. Results

Here are some results obtained with the pretrained models from the previous section.

FID

Percentage of training data labelled   1%      5%      10%     20%     50%     100%
CelebA                                 12.31   9.34    8.81    6.34    -       5.58
RaFD                                   -       -       22.75   9.94    6.65    5.06

IS

Percentage of training data labelled   1%      5%      10%     20%     50%     100%
CelebA                                 2.95    2.95    2.99    3.03    -       3.07
RaFD                                   -       -       1.64    1.61    1.59    1.58

GAN-train and GAN-test

These numbers were obtained under the 100% labelled setup.

         GAN-train   GAN-test
CelebA   87.43%      82.26%
RaFD     97.78%      75.95%