The Simplest DCGAN Implementation

Overview

DCGAN in TensorLayer

This is the TensorLayer implementation of Deep Convolutional Generative Adversarial Networks (DCGAN). Looking for Text to Image Synthesis? Click here.

  • 🆕 🔥 2019 May: We just updated this project to support TF2 and TL2. Enjoy!
  • 🆕 🔥 2019 May: This project was chosen as the default template for TL projects.

Prerequisites

  • Python 3.5 or 3.6
  • TensorFlow==2.0.0a0 (pip3 install tensorflow-gpu==2.0.0a0)
  • TensorLayer==2.1.0 (pip3 install tensorlayer==2.1.0)

Usage

First, download the aligned CelebA face images (from Google Drive or Baidu) into a data folder.

Second, train the GAN:

$ python train.py
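
For orientation, the training loop follows the standard TF2 GradientTape pattern. Below is a minimal sketch of one training step, assuming G and D are the TensorLayer 2 generator and discriminator models (already set to train mode) and g_optimizer / d_optimizer are tf.optimizers.Adam instances; it is an illustration of the technique, not the exact code in train.py.

import tensorflow as tf

def train_step(real_images, G, D, g_optimizer, d_optimizer, z_dim=100):
    """One DCGAN update: D on real + fake images, then G on fake images (sketch)."""
    batch_size = real_images.shape[0]
    z = tf.random.normal([batch_size, z_dim])
    with tf.GradientTape(persistent=True) as tape:
        fake_images = G(z)
        d_logits_real = D(real_images)
        d_logits_fake = D(fake_images)
        # standard DCGAN losses on raw logits
        d_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
            labels=tf.ones_like(d_logits_real), logits=d_logits_real))
        d_loss += tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
            labels=tf.zeros_like(d_logits_fake), logits=d_logits_fake))
        g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
            labels=tf.ones_like(d_logits_fake), logits=d_logits_fake))
    d_grad = tape.gradient(d_loss, D.trainable_weights)
    d_optimizer.apply_gradients(zip(d_grad, D.trainable_weights))
    g_grad = tape.gradient(g_loss, G.trainable_weights)
    g_optimizer.apply_gradients(zip(g_grad, G.trainable_weights))
    del tape  # release the persistent tape
    return d_loss, g_loss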

Result on CelebA

Comments
  • ValueError: Trying to share variable discriminator/d/h4/lin_sigmoid/W

    Hi, I use: TensorFlow v1.10 (CPU version), TensorLayer v1.10.1, Python 3.6, Windows 10.

    After running python main.py I get a ValueError:

    Traceback (most recent call last):
      File "c:/Users/USER/Desktop/פרוייקט תואר שני/playgroung/dcgan/main.py", line 166, in <module>
        tf.app.run()
      File "C:\Python\3.6\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
        _sys.exit(main(argv))
      File "c:/Users/USER/Desktop/פרוייקט תואר שני/playgroung/dcgan/main.py", line 59, in main
        _, d2_logits = discriminator(real_images, is_train=True, reuse=True)
      File "c:\Users\USER\Desktop\פרוייקט תואר שני\playgroung\dcgan\model.py", line 81, in discriminator
        W_init = w_init, name='d/h4/lin_sigmoid')
      File "c:\Users\USER\Desktop\פרוייקט תואר שני\playgroung\dcgan\tensorlayer\layers\core.py", line 926, in __init__
        W = tf.get_variable(name='W', shape=(n_in, n_units), initializer=W_init, dtype=LayersConfig.tf_dtype, **W_init_args)
      File "C:\Python\3.6\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1467, in get_variable
        aggregation=aggregation)
      File "C:\Python\3.6\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1217, in get_variable
        aggregation=aggregation)
      File "C:\Python\3.6\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 527, in get_variable
        aggregation=aggregation)
      File "C:\Python\3.6\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 481, in _true_getter
        aggregation=aggregation)
      File "C:\Python\3.6\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 853, in _get_single_variable
        found_var.get_shape()))
    ValueError: Trying to share variable discriminator/d/h4/lin_sigmoid/W, but specified shape (8192, 1) and found shape (2048, 1).
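
    Judging from the shapes in the message, a likely cause is that the two discriminator calls see inputs of different spatial size, so the flattened dense layer d/h4/lin_sigmoid is built with one shape and then reused with another. The TF1-style sharing pattern this code assumes is roughly the sketch below (only discriminator and real_images come from the traceback; the other names are placeholders):

      # sketch of the intended variable sharing (TF1 / TL1 style); both inputs
      # must have the same height and width so the flatten -> dense shape matches
      _, d_logits_real = discriminator(real_images, is_train=True, reuse=False)
      _, d_logits_fake = discriminator(fake_images, is_train=True, reuse=True)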

    opened by motkeg 2
  • Error during download

    Tried twice, same error.

    python download.py celebA
    ./data/img_align_celeba.zip: 25.5kB [12:15, 34.7B/s]Traceback (most recent call last):
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/urllib3/response.py", line 302, in _error_catcher
        yield
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/urllib3/response.py", line 601, in read_chunked
        chunk = self._handle_chunk(amt)
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/urllib3/response.py", line 557, in _handle_chunk
        value = self._fp._safe_read(amt)
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/http/client.py", line 614, in _safe_read
        raise IncompleteRead(b''.join(s), amt)
    http.client.IncompleteRead: IncompleteRead(5887 bytes read, 26881 more expected)
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/requests/models.py", line 745, in generate
        for chunk in self.raw.stream(chunk_size, decode_content=True):
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/urllib3/response.py", line 432, in stream
        for line in self.read_chunked(amt, decode_content=decode_content):
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/urllib3/response.py", line 626, in read_chunked
        self._original_response.close()
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/contextlib.py", line 99, in __exit__
        self.gen.throw(type, value, traceback)
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/urllib3/response.py", line 320, in _error_catcher
        raise ProtocolError('Connection broken: %r' % e, e)
    urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(5887 bytes read, 26881 more expected)', IncompleteRead(5887 bytes read, 26881 more expected))
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "download.py", line 164, in <module>
        download_celeb_a('./data')
      File "download.py", line 91, in download_celeb_a
        download_file_from_google_drive(drive_id, save_path)
      File "download.py", line 56, in download_file_from_google_drive
        save_response_content(response, destination)
      File "download.py", line 68, in save_response_content
        unit='B', unit_scale=True, desc=destination):
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/tqdm/_tqdm.py", line 962, in __iter__
        for obj in iterable:
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/requests/models.py", line 748, in generate
        raise ChunkedEncodingError(e)
    requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(5887 bytes read, 26881 more expected)', IncompleteRead(5887 bytes read, 26881 more expected))
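
    The download simply broke mid-stream (IncompleteRead). One workaround, sketched below under the assumption of a plain HTTP source, is to retry a few times and resume from the bytes already written using a Range header; the function name is hypothetical and Google Drive links would still need the confirm-token handling already present in download.py.

      import os, time, requests

      def download_with_resume(url, path, chunk_size=1 << 20, retries=5):
          """Resume an interrupted download by requesting only the remaining bytes."""
          for attempt in range(retries):
              done = os.path.getsize(path) if os.path.exists(path) else 0
              headers = {'Range': 'bytes=%d-' % done} if done else {}
              try:
                  with requests.get(url, headers=headers, stream=True, timeout=60) as r:
                      r.raise_for_status()
                      with open(path, 'ab') as f:
                          for chunk in r.iter_content(chunk_size=chunk_size):
                              f.write(chunk)
                  return path
              except requests.exceptions.RequestException:
                  time.sleep(2 ** attempt)  # back off, then retry from where we stopped
          raise IOError('download failed after %d retries: %s' % (retries, url))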
    
    opened by arisliang 2
  • Traceback error - main.py

    Trying to run main.py and I get the following error.

    traceback (most recent call last):
      File "main.py", line 23, in <module>
        flags.DEFINE_integer("train_size", np.inf, "The size of train images [np.inf]")
      File "/anaconda2/lib/python2.7/site-packages/tensorflow/python/platform/flags.py", line 58, in wrapper
        return original_function(*args, **kwargs)
      File "/anaconda2/lib/python2.7/site-packages/absl/flags/_defines.py", line 315, in DEFINE_integer
        DEFINE(parser, name, default, help, flag_values, serializer, **args)
      File "/anaconda2/lib/python2.7/site-packages/absl/flags/_defines.py", line 81, in DEFINE
        DEFINE_flag(_flag.Flag(parser, serializer, name, default, help, **args),
      File "/anaconda2/lib/python2.7/site-packages/absl/flags/_flag.py", line 107, in __init__
        self._set_default(default)
      File "/anaconda2/lib/python2.7/site-packages/absl/flags/_flag.py", line 196, in _set_default
        self.default = self._parse(value)
      File "/anaconda2/lib/python2.7/site-packages/absl/flags/_flag.py", line 169, in _parse
        'flag --%s=%s: %s' % (self.name, argument, e))
    absl.flags._exceptions.IllegalFlagValueError: flag --train_size=inf: Expect argument to be a string or int, found <type 'float'>
    

    Has anyone else encountered this?
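
    This comes from passing np.inf (a float) to flags.DEFINE_integer, which newer absl versions reject. In line with the later fix that changed the flag dtype from int to float, a sketch of the workaround is:

      # either define the flag as a float ...
      flags.DEFINE_float("train_size", np.inf, "The size of train images [np.inf]")
      # ... or keep it an integer with a very large default
      # flags.DEFINE_integer("train_size", sys.maxsize, "The size of train images")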

    opened by 1140lex 2
  • Error running train.py

    Running train.py I get the following error:

    Traceback (most recent call last):
      File "train.py", line 60, in <module>
        train()
      File "train.py", line 44, in train
        grad = tape.gradient(g_loss, G.trainable_weights)
    AttributeError: 'Model' object has no attribute 'trainable_weights'
    

    Am I running the wrong version of TensorFlow/TensorLayer (2.0.0a0 / 2.0.0)?
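
    The trainable_weights attribute appears to require the TensorLayer version pinned in the prerequisites (2.1.0), so this usually indicates an older TL install. A quick check (expected versions taken from the Prerequisites section above):

      import tensorflow as tf
      import tensorlayer as tl
      print(tf.__version__)  # expected: 2.0.0a0
      print(tl.__version__)  # expected: 2.1.0
      # if tensorlayer is older: pip3 install --upgrade tensorlayer==2.1.0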

    opened by ghost 1
  • The URL for CelebA does not work now

    The URL for the CelebA dataset in download.py does not work now because of a Dropbox problem, so it would be better to change it to a Google Drive or Baidu Drive URL.

    opened by fangpin 1
  • Minor fixes and improvements

    • Added a mini-batch generator function.
    • Renamed the generator and discriminator functions.
    • Removed logits from the generator as they were redundant.
    • Changed w_init in both models to tf.glorot_normal_initializer since it has been shown to improve model convergence.
    • Using imageio instead of scipy.misc wherever possible since the latter is deprecated.
    opened by pskrunner14 0
  • TL compatibility and code readability

    • Made code compatible with TL 1.10.1 and TF 1.10.0.
    • Removed out_size and batch_size from DeConv2d layers since they're no longer needed.
    • Replaced tl.act.lrelu with tf.nn.leaky_relu since the former is deprecated.
    • Replaced tl.layers.initialize_global_variables with tf.global_variables_initializer since the former is deprecated.
    • Changed the train_size flag dtype from int to float due to an error.
    • Improved code readability and removed redundant comments and code.
    • Removed pprint in favour of custom logging of the flags since pprint wasn't able to access flag values.
    • Fixed wildcard imports and removed unused ones.
    • Updated the main module doc.
    • Removed the tensorlayer package from the root dir since it is installed on the system.

    Note: Fixes #9 and #12.

    opened by pskrunner14 0
  • --sample_size option causes ValueError

    When I pass --sample_size 256 as a flag to main.py, I get

    ValueError: Cannot feed value of shape (256, 100) for Tensor 'z_noise:0', which has shape '(64, 100)'

    When I pass --sample_size 256 and --batch_size 256, the code went into an endless loop just saying "Sample images updated!" without running each epoch.
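
    The error message suggests the z_noise placeholder is created with a fixed first dimension equal to batch_size (64), so --sample_size must match --batch_size unless the placeholder is declared with a dynamic batch dimension, roughly as in this TF1-style sketch (z_dim=100 assumed):

      # a None batch dimension accepts any sample_size
      z = tf.placeholder(tf.float32, [None, 100], name='z_noise')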

    opened by jkraybill 0
  • Performance issue in /data.py (by P3)

    Hello! I've found a performance issue in /data.py: ds.batch(batch_size) should be called before ds.map(_map_fn, num_parallel_calls=4), which could make your program more efficient.

    Here is the TensorFlow documentation to support it.

    Besides, you need to check whether the function _map_fn called in ds.map(_map_fn, num_parallel_calls=4) is affected, to make the changed code work properly. For example, if _map_fn expects data with shape (x, y, z) as its input before the fix, it would receive data with shape (batch_size, x, y, z) after the fix.

    Looking forward to your reply. By the way, I would be glad to create a PR to fix it if you are too busy.
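
    For reference, the suggested reordering would look roughly like the sketch below; _map_fn_batched is a placeholder for a version of _map_fn adapted to operate on whole batches.

      ds = ds.batch(batch_size)                           # batch first ...
      ds = ds.map(_map_fn_batched, num_parallel_calls=4)  # ... then map over whole batches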

    opened by DLPerf 1
  • How to train on existing checkpoint

    I would like to train from an existing saved checkpoint, but it seems the GAN learns from scratch.

    Is there some way to load an existing checkpoint?

    Thanks in advance.
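
    Since train.py saves the weights as checkpoint/G.npz and checkpoint/D.npz, one way to resume (a sketch, assuming the default checkpoint directory) is to load them back before the training loop starts:

      import os
      if os.path.exists('checkpoint/G.npz'):
          G.load_weights('checkpoint/G.npz', format='npz')
      if os.path.exists('checkpoint/D.npz'):
          D.load_weights('checkpoint/D.npz', format='npz')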

    opened by Fqlox 0
  • local variable 'z' referenced before assignment

    I have this error! Can you help me, please?

    WARNING: Logging before flag parsing goes to stderr.
    I0718 14:45:17.243686 140030963623808 tl_logging.py:99] [!] checkpoint exists ...
    I0718 14:45:17.247771 140030963623808 tl_logging.py:99] [!] samples exists ...
    W0718 14:45:19.318463 140030963623808 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/dataset_ops.py:505: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
    Instructions for updating:
    tf.py_func is deprecated in TF V2. Instead, there are two options available in V2.
    - tf.py_function takes a python function which manipulates tf eager tensors instead of numpy arrays. It's easy to convert a tf eager tensor to an ndarray (just call tensor.numpy()) but having access to eager tensors means tf.py_functions can use accelerators such as GPUs as well as being differentiable using a gradient tape.
    - tf.numpy_function maintains the semantics of the deprecated tf.py_func (it is not differentiable, and manipulates numpy arrays). It drops the stateful argument making all functions stateful.

    I0718 14:45:19.464441 140030963623808 tl_logging.py:99] Input _inputlayer_1: [None, 100]
    I0718 14:45:19.974161 140030963623808 tl_logging.py:99] Dense dense_1: 8192 No Activation
    I0718 14:45:20.649731 140030963623808 tl_logging.py:99] Reshape reshape_1
    I0718 14:45:20.707117 140030963623808 tl_logging.py:99] BatchNorm batchnorm_1: decay: 0.900000 epsilon: 0.000010 act: relu is_train: False
    I0718 14:45:20.782280 140030963623808 tl_logging.py:99] DeConv2d deconv2d_1: n_filters: 256 strides: (2, 2) padding: SAME act: No Activation dilation: (1, 1)
    I0718 14:45:22.796400 140030963623808 tl_logging.py:99] BatchNorm batchnorm2d_1: decay: 0.900000 epsilon: 0.000010 act: relu is_train: False
    I0718 14:45:22.873876 140030963623808 tl_logging.py:99] DeConv2d deconv2d_2: n_filters: 128 strides: (2, 2) padding: SAME act: No Activation dilation: (1, 1)
    I0718 14:45:23.076492 140030963623808 tl_logging.py:99] BatchNorm batchnorm2d_2: decay: 0.900000 epsilon: 0.000010 act: relu is_train: False
    I0718 14:45:23.153287 140030963623808 tl_logging.py:99] DeConv2d deconv2d_3: n_filters: 64 strides: (2, 2) padding: SAME act: No Activation dilation: (1, 1)
    I0718 14:45:23.227763 140030963623808 tl_logging.py:99] BatchNorm batchnorm2d_3: decay: 0.900000 epsilon: 0.000010 act: relu is_train: False
    I0718 14:45:23.307783 140030963623808 tl_logging.py:99] DeConv2d deconv2d_4: n_filters: 3 strides: (2, 2) padding: SAME act: tanh dilation: (1, 1)
    I0718 14:45:23.390201 140030963623808 tl_logging.py:99] Input _inputlayer_2: [None, 64, 64, 3]
    I0718 14:45:23.456669 140030963623808 tl_logging.py:99] Conv2d conv2d_1: n_filter: 64 filter_size: (5, 5) strides: (2, 2) pad: SAME act:
    I0718 14:45:23.530629 140030963623808 tl_logging.py:99] Conv2d conv2d_2: n_filter: 128 filter_size: (5, 5) strides: (2, 2) pad: SAME act: No Activation
    I0718 14:45:23.608061 140030963623808 tl_logging.py:99] BatchNorm batchnorm2d_4: decay: 0.900000 epsilon: 0.000010 act: is_train: False
    I0718 14:45:23.691769 140030963623808 tl_logging.py:99] Conv2d conv2d_3: n_filter: 256 filter_size: (5, 5) strides: (2, 2) pad: SAME act: No Activation
    I0718 14:45:23.775271 140030963623808 tl_logging.py:99] BatchNorm batchnorm2d_5: decay: 0.900000 epsilon: 0.000010 act: is_train: False
    I0718 14:45:23.942049 140030963623808 tl_logging.py:99] Conv2d conv2d_4: n_filter: 512 filter_size: (5, 5) strides: (2, 2) pad: SAME act: No Activation
    I0718 14:45:24.041154 140030963623808 tl_logging.py:99] BatchNorm batchnorm2d_6: decay: 0.900000 epsilon: 0.000010 act: is_train: False
    I0718 14:45:24.114703 140030963623808 tl_logging.py:99] Flatten flatten_1:
    I0718 14:45:24.177696 140030963623808 tl_logging.py:99] Dense dense_2: 1 identity
    I0718 14:45:29.721007 140030963623808 tl_logging.py:99] [*] Saving TL weights into checkpoint/G.npz
    I0718 14:45:31.268130 140030963623808 tl_logging.py:99] [*] Saved
    I0718 14:45:31.274845 140030963623808 tl_logging.py:99] [*] Saving TL weights into checkpoint/D.npz
    I0718 14:45:32.590787 140030963623808 tl_logging.py:99] [*] Saved

    UnboundLocalError                         Traceback (most recent call last)
    in ()
         58
         59 if __name__ == '__main__':
    ---> 60     train()

    in train()
         53     D.save_weights('{}/D.npz'.format(flags.checkpoint_dir), format='npz')
         54     G.eval()
    ---> 55     result = G(z)
         56     G.train()
         57     tl.visualize.save_images(result.numpy(), [num_tiles, num_tiles], '{}/train_{:02d}.png'.format(flags.sample_dir, epoch))

    UnboundLocalError: local variable 'z' referenced before assignment
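
    From the log ("checkpoint exists", weights saved immediately, then G(z) fails), it looks as if the batch loop never ran, so z was never assigned, most often because the dataset is empty (no CelebA images found). A defensive sketch is to sample a fixed evaluation noise before the loop; num_tiles, G.eval() and G.train() follow the names visible in the traceback, while the noise dimension of 100 is an assumption.

      # sample once before the epoch/batch loop
      z_eval = tf.random.normal([num_tiles * num_tiles, 100])  # 100 = noise dim (assumed)

      # ... later, after each epoch:
      G.eval()
      result = G(z_eval)  # use the pre-sampled noise instead of the loop-local z
      G.train()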

    opened by emnab 4
  • TypeError when run main.py

    Tried both the dcgan folder version of TensorLayer (1.4.5) and the installed version of TensorLayer (1.8.1). The TensorFlow version is 1.6.0.

    $ python main.py 
    /home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
      from ._conv import register_converters as _register_converters
    Traceback (most recent call last):
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/absl/flags/_flag.py", line 166, in _parse
        return self.parser.parse(argument)
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/absl/flags/_argument_parser.py", line 152, in parse
        val = self.convert(argument)
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/absl/flags/_argument_parser.py", line 268, in convert
        type(argument)))
    TypeError: Expect argument to be a string or int, found <class 'float'>
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "main.py", line 23, in <module>
        flags.DEFINE_integer("train_size", np.inf, "The size of train images [np.inf]")
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/tensorflow/python/platform/flags.py", line 58, in wrapper
        return original_function(*args, **kwargs)
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/absl/flags/_defines.py", line 315, in DEFINE_integer
        DEFINE(parser, name, default, help, flag_values, serializer, **args)
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/absl/flags/_defines.py", line 81, in DEFINE
        DEFINE_flag(_flag.Flag(parser, serializer, name, default, help, **args),
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/absl/flags/_flag.py", line 107, in __init__
        self._set_default(default)
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/absl/flags/_flag.py", line 196, in _set_default
        self.default = self._parse(value)
      File "/home/ly/anaconda3/envs/learning/lib/python3.6/site-packages/absl/flags/_flag.py", line 169, in _parse
        'flag --%s=%s: %s' % (self.name, argument, e))
    absl.flags._exceptions.IllegalFlagValueError: flag --train_size=inf: Expect argument to be a string or int, found <class 'float'>
    
    
    opened by arisliang 5
Releases: 0.5.0

Owner
TensorLayer Community
A neutral open community to promote AI technology.