PyTorch inference for "Progressive Growing of GANs" with CelebA snapshot

Description

This is an inference sample for "Progressive Growing of GANs", written in PyTorch and based on the authors' original Theano/Lasagne code.

I recreated the network as described in the paper by Karras et al. Since some of the required layers are not available in PyTorch, these were implemented as well. The network and the layers can be found in model.py.
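
One of these custom layers is the pixelwise feature vector normalization used throughout the generator. A minimal sketch of such a layer is shown below; the class name and the epsilon value are illustrative and may differ from the actual code in model.py.

import torch
import torch.nn as nn

class PixelNorm(nn.Module):
    # Normalizes each pixel's feature vector to unit length across the channel
    # dimension, as described by Karras et al. (pixelwise feature vector normalization).
    def __init__(self, eps=1e-8):
        super(PixelNorm, self).__init__()
        self.eps = eps

    def forward(self, x):
        # x has shape (N, C, H, W); average the squared activations over C
        return x / torch.sqrt(torch.mean(x ** 2, dim=1, keepdim=True) + self.eps)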

For the demo, the 100-celeb-hq-1024x1024-ours snapshot was used, which was made publicly available by the authors. Since I couldn't find a model converter between Theano/Lasagne and PyTorch, I used a quick-and-dirty script to transfer the weights between the models (transfer_weights.py).

This repo does not provide the code for training the networks.

Simple inference

To run the demo, simply execute predict.py. You can specify other weights with the --weights flag.
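
For example: python predict.py --weights path/to/state_dict.pth. If you prefer to call the generator from your own code, the following minimal sketch shows the idea; the Generator class name, the 512-dimensional latent size and the output scaling are assumptions and may differ from what predict.py actually does.

import torch
from torch.autograd import Variable   # Variable is needed for the PyTorch 0.2-era API
from torchvision.utils import save_image

from model import Generator           # assumed class name exported by model.py

model = Generator()
model.load_state_dict(torch.load('path/to/state_dict.pth'))  # transferred CelebA-HQ weights
model.eval()

latent = Variable(torch.randn(1, 512))            # 512-dim latent vector (paper default)
image = model(latent)                             # expected shape: (1, 3, 1024, 1024)
image = (image.data.clamp(-1, 1) + 1) / 2         # assume output in [-1, 1]; rescale to [0, 1]
save_image(image, 'sample.png')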

Example image:

[example generated image]

Latent space interpolation

To try the latent space interpolation, use latent_interp.py. All output images will be saved in ./interp.

Using the --type argument, you can choose between the "gaussian interpolation" introduced in the original paper and the "slerp interpolation" introduced by Tom White in his paper Sampling Generative Networks.

Use --filter to change the gaussian filter size for the gaussian interpolation and --interp to set the number of interpolation steps for the slerp interpolation.
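
Conceptually, the gaussian interpolation draws a sequence of random latents and smooths them with a Gaussian filter along the time axis, while the slerp interpolation walks along the great circle between consecutive random latents. A minimal sketch of both, assuming 512-dimensional latents (the helper names are illustrative and not taken from latent_interp.py):

import numpy as np
from scipy.ndimage import gaussian_filter1d

def gaussian_latents(nb_latents, filter_size, dim=512):
    # Smooth random noise along the time axis so consecutive latents change gradually
    latents = np.random.randn(nb_latents, dim)
    return gaussian_filter1d(latents, sigma=filter_size, axis=0)

def slerp(val, low, high):
    # Spherical linear interpolation (Tom White, "Sampling Generative Networks")
    low_n = low / np.linalg.norm(low)
    high_n = high / np.linalg.norm(high)
    omega = np.arccos(np.clip(np.dot(low_n, high_n), -1.0, 1.0))
    so = np.sin(omega)
    if so == 0:
        return (1.0 - val) * low + val * high  # vectors are parallel; fall back to lerp
    return np.sin((1.0 - val) * omega) / so * low + np.sin(val * omega) / so * high

def slerp_latents(nb_latents, interp, dim=512):
    # interp frames between each pair of consecutive latents -> nb_latents * interp frames
    keys = np.random.randn(nb_latents + 1, dim)
    frames = [slerp(t, keys[i], keys[i + 1])
              for i in range(nb_latents)
              for t in np.linspace(0.0, 1.0, interp, endpoint=False)]
    return np.stack(frames)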

The following arguments are defined:

  • --weights - path to pretrained PyTorch state dict
  • --output - Directory for storing interpolated images
  • --batch_size - batch size for DataLoader
  • --num_workers - number of workers for DataLoader
  • --type {gauss, slerp} - interpolation type
  • --nb_latents - number of latent vectors to generate
  • --filter - gaussian filter length for interpolating latent space (gauss interpolation)
  • --interp - interpolation length between each latent vector (slerp interpolation)
  • --seed - random seed for numpy and PyTorch
  • --cuda - use GPU

The total number of generated frames depends on the chosen interpolation technique.

For gaussian interpolation, the number of generated frames equals nb_latents, while slerp interpolation generates nb_latents * interp frames.

Example interpolation:

[example interpolation frames]

Live latent space interpolation

A live demo of the latent space interpolation using PyGame is provided in pygame_interp_demo.py.

Use the --size argument to change the output window size.
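
At its core, such a live demo just converts each generated frame to a PyGame surface and blits it into the window. A minimal sketch (the helper name and the frame format are assumptions, not the exact code of pygame_interp_demo.py):

import numpy as np
import pygame

def show_frames(frame_iter, size=512):
    # frame_iter yields H x W x 3 uint8 numpy arrays (already scaled to 0-255)
    pygame.init()
    screen = pygame.display.set_mode((size, size))
    clock = pygame.time.Clock()
    running = True
    for frame in frame_iter:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        if not running:
            break
        # pygame.surfarray expects (width, height, channels), so transpose the image
        surface = pygame.surfarray.make_surface(np.transpose(frame, (1, 0, 2)))
        surface = pygame.transform.scale(surface, (size, size))
        screen.blit(surface, (0, 0))
        pygame.display.flip()
        clock.tick(30)  # cap the frame rate at 30 FPS
    pygame.quit()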

The following arguments are defined:

  • --weights - path to pretrained PyTorch state dict
  • --num_workers - number of workers for DataLoader
  • --type {gauss, slerp} - interpolation type
  • --nb_latents - number of latent vectors to generate
  • --filter - gaussian filter length for interpolating latent space (gauss interpolation)
  • --interp - interpolation length between each latent vector (slerp interpolation)
  • --size - PyGame window size
  • --seed - random seed for numpy and PyTorch
  • --cuda - use GPU

Transferring weights

The pretrained lasagne weights can be transferred to a PyTorch state dict using transfer_weights.py.

To transfer other snapshots from the paper (other than CelebA), you have to modify the model architecture accordingly and use the corresponding weights.
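
The rough idea behind such a transfer is to pull all parameter values out of the Lasagne network and copy them into the PyTorch state dict in the same order. The simplified sketch below assumes the parameter ordering and shapes already match, which is not true for every layer in practice; the real logic lives in transfer_weights.py.

import lasagne
import torch

def transfer(lasagne_output_layer, pytorch_generator, out_path='state_dict.pth'):
    # All parameter values of the Lasagne network as numpy arrays, in definition order
    lasagne_params = lasagne.layers.get_all_param_values(lasagne_output_layer)

    state_dict = pytorch_generator.state_dict()
    for (name, tensor), value in zip(state_dict.items(), lasagne_params):
        # Some weights may need reshaping/transposing; this sketch assumes matching shapes
        assert tuple(tensor.size()) == value.shape, 'shape mismatch for {}'.format(name)
        state_dict[name] = torch.from_numpy(value)

    pytorch_generator.load_state_dict(state_dict)
    torch.save(pytorch_generator.state_dict(), out_path)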

Environment

The code was tested on Ubuntu 16.04 with an NVIDIA GTX 1080 using PyTorch v.0.2.0_4.

  • transfer_weights.py needs Theano and Lasagne to load the pretrained weights.
  • pygame_interp_demo.py needs PyGame to visualize the output.

A single forward pass took approx. 0.031 seconds.

Links

  • Paper: Progressive Growing of GANs for Improved Quality, Stability, and Variation (Karras et al.) - https://arxiv.org/abs/1710.10196
  • Original Theano/Lasagne implementation - https://github.com/tkarras/progressive_growing_of_gans
  • Sampling Generative Networks (Tom White) - https://arxiv.org/abs/1609.04468

License

This code is a modified form of the original code, which is released under the CC BY-NC license with the following copyright notice:

# Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
#
# This work is licensed under the Creative Commons Attribution-NonCommercial
# 4.0 International License. To view a copy of this license, visit
# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to
# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

In accordance with Section 3 of the license, I hereby identify Tero Karras et al. and NVIDIA as the original authors of the material.
