pyradox-generative

State of the Art Neural Networks for Generative Deep Learning

Installation

pip install pyradox-generative

Usage

This library provides lightweight trainers for the following generative models:

Vanilla GAN

Just provide your generator and discriminator, and train your GAN.

Data Preparation:

from pyradox_generative import GAN
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype(np.float32) / 255  # scale pixels to [0, 1]
x_train = x_train.reshape(-1, 28, 28, 1) * 2.0 - 1.0  # rescale to [-1, 1] to match the generator's tanh output

dataset = tf.data.Dataset.from_tensor_slices(x_train)
dataset = dataset.shuffle(1024)
dataset = dataset.batch(32, drop_remainder=True).prefetch(1)

Define the generator and discriminator models:

generator = keras.models.Sequential(
    [
        keras.Input(shape=[28]),  # latent vector of size latent_dim=28
        keras.layers.Dense(7 * 7 * 3),
        keras.layers.Reshape([7, 7, 3]),
        keras.layers.BatchNormalization(),
        keras.layers.Conv2DTranspose(
            32, kernel_size=3, strides=2, padding="same", activation="selu"
        ),  # upsample 7x7 -> 14x14
        keras.layers.Conv2DTranspose(
            1, kernel_size=3, strides=2, padding="same", activation="tanh"
        ),  # upsample 14x14 -> 28x28, pixel values in [-1, 1]
    ],
    name="generator",
)

discriminator = keras.models.Sequential(
    [
        keras.layers.Conv2D(
            32,
            kernel_size=3,
            strides=2,
            padding="same",
            activation=keras.layers.LeakyReLU(0.2),
            input_shape=[28, 28, 1],
        ),
        keras.layers.Conv2D(
            3,
            kernel_size=3,
            strides=2,
            padding="same",
            activation=keras.layers.LeakyReLU(0.2),
        ),
        keras.layers.Flatten(),
        keras.layers.Dense(1, activation="sigmoid"),
    ],
    name="discriminator",
)

Plug the models into the trainer class and train them using the familiar compile and fit methods:

gan = GAN(discriminator=discriminator, generator=generator, latent_dim=28)
gan.compile(
    d_optimizer=keras.optimizers.Adam(learning_rate=0.0001),
    g_optimizer=keras.optimizers.Adam(learning_rate=0.0001),
    loss_fn=keras.losses.BinaryCrossentropy(),
)

history = gan.fit(dataset)
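
After training, new digits can be sampled by feeding random latent vectors to the generator. The generator is a plain Keras model, so this minimal sketch does not depend on the trainer:

noise = tf.random.normal(shape=[16, 28])  # 16 latent vectors of size latent_dim=28
generated = generator(noise, training=False)  # tanh output, values in [-1, 1]
images = (generated + 1.0) / 2.0  # rescale to [0, 1] for display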

Conditional GAN

Just provide your generator and discriminator, and train your GAN.

Prepare the data and calculate the input and output dimensions of the generator and discriminator:

from pyradox_generative import ConditionalGAN
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras

CODINGS_SIZE = 28
N_CHANNELS = 1
N_CLASSES = 10
G_INP_CHANNELS = CODINGS_SIZE + N_CLASSES
D_INP_CHANNELS = N_CHANNELS + N_CLASSES

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype(np.float32) / 255
x_train = x_train.reshape(-1, 28, 28, 1) * 2.0 - 1.0
y_train = keras.utils.to_categorical(y_train, 10)  # one-hot labels for conditioning

dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
dataset = dataset.shuffle(1024)
dataset = dataset.batch(32, drop_remainder=True).prefetch(1)

Define the generator and discriminator models:

generator = keras.models.Sequential(
    [
        keras.Input(shape=[G_INP_CHANNELS]),
        keras.layers.Dense(7 * 7 * 3),
        keras.layers.Reshape([7, 7, 3]),
        keras.layers.BatchNormalization(),
        keras.layers.Conv2DTranspose(
            32, kernel_size=3, strides=2, padding="same", activation="selu"
        ),
        keras.layers.Conv2DTranspose(
            1, kernel_size=3, strides=2, padding="same", activation="tanh"
        ),
    ],
    name="generator",
)

discriminator = keras.models.Sequential(
    [
        keras.layers.Conv2D(
            32,
            kernel_size=3,
            strides=2,
            padding="same",
            activation=keras.layers.LeakyReLU(0.2),
            input_shape=[28, 28, D_INP_CHANNELS],
        ),
        keras.layers.Conv2D(
            3,
            kernel_size=3,
            strides=2,
            padding="same",
            activation=keras.layers.LeakyReLU(0.2),
        ),
        keras.layers.Flatten(),
        keras.layers.Dense(1, activation="sigmoid"),
    ],
    name="discriminator",
)

Plug the models into the trainer class and train them using the familiar compile and fit methods:

gan = ConditionalGAN(
    discriminator=discriminator, generator=generator, latent_dim=CODINGS_SIZE
)
gan.compile(
    d_optimizer=keras.optimizers.Adam(learning_rate=0.0001),
    g_optimizer=keras.optimizers.Adam(learning_rate=0.0001),
    loss_fn=keras.losses.BinaryCrossentropy(),
)

history = gan.fit(dataset)
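
To sample a specific digit after training, concatenate a one-hot class label with the latent code, matching G_INP_CHANNELS = CODINGS_SIZE + N_CLASSES above. A minimal sketch, assuming the trainer concatenates the latent code first and the label second (swap the order if it is the other way around):

label = keras.utils.to_categorical([7], N_CLASSES)  # condition on the digit 7, shape [1, N_CLASSES]
noise = tf.random.normal(shape=[1, CODINGS_SIZE])
gen_input = tf.concat([noise, label], axis=1)  # shape [1, G_INP_CHANNELS]
generated = generator(gen_input, training=False)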

Wasserstein GAN with Gradient Penalty (WGAN-GP)

Just provide your generator and discriminator, and train your GAN.

Data Preparation:

from pyradox_generative import WGANGP
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype(np.float32) / 255
x_train = x_train.reshape(-1, 28, 28, 1) * 2.0 - 1.0

dataset = tf.data.Dataset.from_tensor_slices(x_train)
dataset = dataset.shuffle(1024)
dataset = dataset.batch(32, drop_remainder=True).prefetch(1)

Define the generator and discriminator models:

generator = keras.models.Sequential(
    [
        keras.Input(shape=[28]),
        keras.layers.Dense(7 * 7 * 3),
        keras.layers.Reshape([7, 7, 3]),
        keras.layers.BatchNormalization(),
        keras.layers.Conv2DTranspose(
            32, kernel_size=3, strides=2, padding="same", activation="selu"
        ),
        keras.layers.Conv2DTranspose(
            1, kernel_size=3, strides=2, padding="same", activation="tanh"
        ),
    ],
    name="generator",
)

discriminator = keras.models.Sequential(
    [
        keras.layers.Conv2D(
            32,
            kernel_size=3,
            strides=2,
            padding="same",
            activation=keras.layers.LeakyReLU(0.2),
            input_shape=[28, 28, 1],
        ),
        keras.layers.Conv2D(
            3,
            kernel_size=3,
            strides=2,
            padding="same",
            activation=keras.layers.LeakyReLU(0.2),
        ),
        keras.layers.Flatten(),
        keras.layers.Dense(1, activation="sigmoid"),
    ],
    name="discriminator",
)

Plug the models into the trainer class and train them using the familiar compile and fit methods:

gan = WGANGP(
    discriminator=discriminator,
    generator=generator,
    latent_dim=28,
    discriminator_extra_steps=1,
    gp_weight=10,
)
gan.compile(
    d_optimizer=keras.optimizers.Adam(learning_rate=0.0001),
    g_optimizer=keras.optimizers.Adam(learning_rate=0.0001),
)

history = gan.fit(dataset)
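
Here discriminator_extra_steps sets how many critic updates run per generator update, and gp_weight scales the gradient penalty. For reference, WGAN-GP computes the penalty on random interpolations between real and generated samples; an illustrative sketch of that term (not the trainer's internal code):

def gradient_penalty(discriminator, real, fake):
    # interpolate between real and generated images
    alpha = tf.random.uniform([tf.shape(real)[0], 1, 1, 1], 0.0, 1.0)
    interpolated = real + alpha * (fake - real)
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        pred = discriminator(interpolated, training=True)
    grads = tape.gradient(pred, interpolated)
    # penalize deviation of the critic's gradient norm from 1
    norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
    return tf.reduce_mean((norm - 1.0) ** 2)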

Variational Autoencoder

Just provide your encoder and decoder and train your VAE; sampling is done internally.

Data Preparation:

from pyradox_generative import VAE
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype(np.float32) / 255  # keep pixels in [0, 1] to match the decoder's sigmoid output
x_train = x_train.reshape(-1, 28, 28, 1)

dataset = tf.data.Dataset.from_tensor_slices(x_train)
dataset = dataset.shuffle(1024)
dataset = dataset.batch(32, drop_remainder=True).prefetch(1)

Define the encoder and decoder models:

encoder = keras.models.Sequential(
    [
        keras.Input(shape=(28, 28, 1)),
        keras.layers.Conv2D(32, 3, activation="relu", strides=2, padding="same"),
        keras.layers.Conv2D(64, 3, activation="relu", strides=2, padding="same"),
        keras.layers.Flatten(),
        keras.layers.Dense(16, activation="relu"),
    ],
    name="encoder",
)

decoder = keras.models.Sequential(
    [
        keras.Input(shape=(28,)),
        keras.layers.Dense(7 * 7 * 64, activation="relu"),
        keras.layers.Reshape((7, 7, 64)),
        keras.layers.Conv2DTranspose(64, 3, activation="relu", strides=2, padding="same"),
        keras.layers.Conv2DTranspose(32, 3, activation="relu", strides=2, padding="same"),
        keras.layers.Conv2DTranspose(1, 3, activation="sigmoid", padding="same"),
    ],
    name="decoder",
)

Plug the models into the trainer class and train them using the familiar compile and fit methods:

vae = VAE(encoder=encoder, decoder=decoder, latent_dim=28)
vae.compile(keras.optimizers.Adam(learning_rate=0.001))
history = vae.fit(dataset)
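
Note that the encoder ends in a 16-unit dense layer while latent_dim is 28; the trainer presumably projects the encoder output to z_mean and z_log_var of size latent_dim internally. After training, new samples can be drawn by decoding random latent vectors, since the KL term pulls the latent posterior toward a standard normal prior. The decoder is a plain Keras model, so this sketch does not depend on the trainer:

z = tf.random.normal(shape=[16, 28])  # 16 latent vectors of size latent_dim=28
samples = decoder(z, training=False)  # sigmoid output, pixel values in [0, 1]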

Style GAN

Just provide your generator and discriminator models, and train your GAN.

Data Preparation:

from pyradox_generative import StyleGAN
import numpy as np
import tensorflow as tf
from functools import partial

def resize_image(res, image):
    # only downsampling, so use nearest neighbor, which is faster to run
    image = tf.image.resize(
        image, (res, res), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR
    )
    image = tf.cast(image, tf.float32) / 127.5 - 1.0
    return image


def create_dataloader(res):
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train[:100, :, :]  # use a small subset for a quick demo
    x_train = np.pad(x_train, [(0, 0), (2, 2), (2, 2)], mode="constant")  # pad 28x28 -> 32x32
    x_train = tf.image.grayscale_to_rgb(tf.expand_dims(x_train, axis=3), name=None)
    x_train = tf.data.Dataset.from_tensor_slices(x_train)

    batch_size = 32
    dl = x_train.map(partial(resize_image, res), num_parallel_calls=tf.data.AUTOTUNE)
    dl = dl.shuffle(200).batch(batch_size, drop_remainder=True).prefetch(1).repeat()
    return dl

Define the model by providing the number of filters for each resolution (log 2):

gan = StyleGAN(
    target_res=32,
    start_res=4,
    filter_nums={0: 32, 1: 32, 2: 32, 3: 32, 4: 32, 5: 32},
)
opt_cfg = {"learning_rate": 1e-3, "beta_1": 0.0, "beta_2": 0.99, "epsilon": 1e-8}

start_res_log2 = 2  # log2 of start_res=4
target_res_log2 = 5  # log2 of target_res=32

Train the Style GAN:

for res_log2 in range(start_res_log2, target_res_log2 + 1):
    res = 2 ** res_log2
    for phase in ["TRANSITION", "STABLE"]:
        if res == 4 and phase == "TRANSITION":
            continue  # the starting resolution has no transition phase

        train_dl = create_dataloader(res)

        steps = 10

        gan.compile(
            d_optimizer=tf.keras.optimizers.Adam(**opt_cfg),
            g_optimizer=tf.keras.optimizers.Adam(**opt_cfg),
            loss_weights={"gradient_penalty": 10, "drift": 0.001},
            steps_per_epoch=steps,
            res=res,
            phase=phase,
            run_eagerly=False,
        )

        print(phase)
        history = gan.fit(train_dl, epochs=1, steps_per_epoch=steps)

Cycle GAN

Just provide your generator and discriminator models, and train your GAN.

Data Preparation:

import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow import keras
from pyradox_generative import CycleGAN

tfds.disable_progress_bar()
autotune = tf.data.AUTOTUNE
orig_img_size = (286, 286)
input_img_size = (256, 256, 3)


def normalize_img(img):
    img = tf.cast(img, dtype=tf.float32)
    return (img / 127.5) - 1.0


def preprocess_train_image(img, label):
    img = tf.image.random_flip_left_right(img)  # random mirroring
    img = tf.image.resize(img, [*orig_img_size])  # resize to 286x286
    img = tf.image.random_crop(img, size=[*input_img_size])  # random 256x256 crop
    img = normalize_img(img)  # map pixels to [-1, 1]
    return img


def preprocess_test_image(img, label):
    img = tf.image.resize(img, [input_img_size[0], input_img_size[1]])
    img = normalize_img(img)
    return img

train_horses, _ = tfds.load(
    "cycle_gan/horse2zebra", with_info=True, as_supervised=True, split="trainA[:5%]"
)
train_zebras, _ = tfds.load(
    "cycle_gan/horse2zebra", with_info=True, as_supervised=True, split="trainB[:5%]"
)

buffer_size = 256
batch_size = 1

train_horses = (
    train_horses.map(preprocess_train_image, num_parallel_calls=autotune)
    .cache()
    .shuffle(buffer_size)
    .batch(batch_size)
)
train_zebras = (
    train_zebras.map(preprocess_train_image, num_parallel_calls=autotune)
    .cache()
    .shuffle(buffer_size)
    .batch(batch_size)
)

Define the generator and discriminator models:

def build_generator(name):
    return keras.models.Sequential(
        [
            keras.layers.Input(shape=input_img_size),
            keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
            keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
            keras.layers.Conv2D(3, 3, activation="tanh", padding="same"),
        ],
        name=name,
    )


def build_discriminator(name):
    return keras.models.Sequential(
        [
            keras.layers.Input(shape=input_img_size),
            keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
            keras.layers.MaxPooling2D(pool_size=2, strides=2),
            keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
            keras.layers.MaxPooling2D(pool_size=2, strides=2),
            keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
            keras.layers.MaxPooling2D(pool_size=2, strides=2),
            keras.layers.Conv2D(1, 3, activation="relu", padding="same"),
        ],
        name=name,
    )

Plug the models into the trainer class and train them using the familiar compile and fit methods:

gan = CycleGAN(
    generator_g=build_generator("gen_G"),
    generator_f=build_generator("gen_F"),
    discriminator_x=build_discriminator("disc_X"),
    discriminator_y=build_discriminator("disc_Y"),
)

gan.compile(
    gen_g_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5),
    gen_f_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5),
    disc_x_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5),
    disc_y_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5),
)

history = gan.fit(
    tf.data.Dataset.zip((train_horses, train_zebras)),
)
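
After training, a horse image can be translated to the zebra domain and cycled back. A minimal sketch, assuming the trainer exposes the two generators as generator_g and generator_f attributes, mirroring its constructor arguments:

horse_batch = next(iter(train_horses))  # one preprocessed batch, shape (1, 256, 256, 3)
fake_zebra = gan.generator_g(horse_batch, training=False)
cycled_horse = gan.generator_f(fake_zebra, training=False)  # cycle-consistency sanity check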
