3D-aware GANs based on NeRF (arXiv).

Overview

CIPS-3D

This repository will contain the code of the paper,
CIPS-3D: A 3D-Aware Generator of GANs Based on Conditionally-Independent Pixel Synthesis.

We plan to publish the training code here in December. But if the GitHub stars reach two hundred, I will move the date up. Stay tuned 🕙.

Demo videos

demo1.mp4
demo2.mp4
demo_animal_finetuned.mp4
demo3.mp4
demo4.mp4
demo5.mp4

Mirror symmetry problem

The mirror symmetry problem refers to a sudden change in the direction of the bangs near a yaw angle of π/2. We propose using an auxiliary discriminator to solve this problem (please see the paper).

Note that in the initial stage of training, the auxiliary discriminator must dominate the generator more strongly than the main discriminator does; otherwise, the mirror symmetry problem will still occur. In practice, progressive training guarantees this. We have trained from scratch many times, and adding the auxiliary discriminator stably eliminates the mirror symmetry problem. If you find any problems with this idea, please open an issue. A sketch of the two-discriminator generator loss is shown below.
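
For illustration, here is a minimal PyTorch sketch of such a two-discriminator generator loss. The names (D_main, D_aux, full_rgb, nerf_rgb, aux_weight) are illustrative assumptions, not this repository's actual API:

```python
# Minimal sketch (illustrative, not this repository's actual code):
# the generator is trained against a main discriminator on the final
# image and an auxiliary discriminator on the low-resolution NeRF output.
import torch.nn.functional as F

def g_nonsaturating_loss(logits):
    # Standard non-saturating generator loss: -log(sigmoid(D(G(z)))).
    return F.softplus(-logits).mean()

def generator_loss(D_main, D_aux, full_rgb, nerf_rgb, aux_weight=1.0):
    loss_main = g_nonsaturating_loss(D_main(full_rgb))
    loss_aux = g_nonsaturating_loss(D_aux(nerf_rgb))
    # Early in training, aux_weight should be large enough that the
    # auxiliary discriminator dominates the generator's gradient.
    return loss_main + aux_weight * loss_aux
```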

Envs


Training


Citation

If you find our work useful in your research, please cite:


@article{zhou2021CIPS3D,
  title = {{{CIPS}}-{{3D}}: A {{3D}}-{{Aware Generator}} of {{GANs Based}} on {{Conditionally}}-{{Independent Pixel Synthesis}}},
  shorttitle = {{{CIPS}}-{{3D}}},
  author = {Zhou, Peng and Xie, Lingxi and Ni, Bingbing and Tian, Qi},
  year = {2021},
  eprint = {2110.09788},
  eprinttype = {arxiv},
  primaryclass = {cs, eess},
  archiveprefix = {arXiv}
}

Acknowledgments

Comments
  • CUDA error: out of memory


    Hi guys, there is a CUDA error: out of memory (even with batch size = 1) when I try to run the training script with this command: CUDA_VISIBLE_DEVICES=2 python -c "import sys; sys.path.append('./'); from exp.tests.test_cips3d import Testing_ffhq_exp; Testing_ffhq_exp().test_train_ffhq(debug=False)" --tl_opts batch_size 1 img_size 32 total_iters 80000

    I am running on a V100 GPU with 32 GB of memory. What should I do? By the way, I really appreciate your work; it is a great paper. 👏


    opened by longnhatne 7
  • Problem about reproducing the results


    Hi, PeterouZh,

    I'm reproducing your results at the same pace as you. Honestly speaking, this model takes about 40 hours to reach FID 15.97 at 64x64 with 8 A100 GPUs. When I change the resolution to 128x128, the FID reaches 23.58. I'm still training it, and it has only reached FID 20.03 so far.

    How can this model reach the FID of 6.XX described in the paper? Are we missing something essential? It looks like this model can only reach an FID of 10+ at 256 resolution, because performance improves very slowly once the FID reaches 16 at 64x64.

    By the way, I tried to reproduce your results a few weeks ago, but I ran into problems with moxing. Does moxing provide tricks that are very important for this work?

    opened by 0three 7
  • The quality of generated images for FFHQ


    Hello,

    Thanks for sharing your source code and pre-trained weights. I am trying to generate high-quality images with the FFHQ pre-trained model. However, the generated images are not as good as those reported in the paper; I could not reproduce the results.

    I am using the pre-trained weights from here https://github.com/PeterouZh/CIPS-3D/releases/tag/v0.0.2

    The command I tried: python exp/cips3d/scripts/sample_images.py --tl_config_file exp/cips3d/configs/ffhq_exp.yaml --tl_command sample_images

    Generated images: (sample attachments omitted)

    Do you have any idea regarding the problem?

    opened by enisimsar 6
  • How can I get an image resolution greater than 256?


    Hi! You did a great job, thanks for such a great paper and promptly published CIPS-3D code.

    I've already gotten good results with your pipeline, but for images at 64x64 resolution. Now I'm waiting for the results of generating images at 128x128, and I will then train for higher-resolution images.

    Do I understand correctly that, to get 512x512 images, I need to convert the original FFHQ dataset once again through your dataset_tool.py script, specifying a resize to 512, and then run the training pipeline with lower values for the generator and discriminator learning rates? (A sketch of the resize step is below.)

    Thanks!
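
    For the resize step itself, a minimal standalone sketch with Pillow (an assumption for illustration only, not the interface of the repository's dataset_tool.py):

    ```python
    # Illustrative sketch: resize a folder of FFHQ images to 512x512.
    # Stand-in for whatever dataset_tool.py actually does internally.
    from pathlib import Path
    from PIL import Image

    src, dst = Path("ffhq_raw"), Path("ffhq_512")
    dst.mkdir(exist_ok=True)
    for p in sorted(src.glob("*.png")):
        img = Image.open(p).convert("RGB")
        img.resize((512, 512), Image.LANCZOS).save(dst / p.name)
    ```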

    opened by gofixyourself 4
  • > I want to test some other images on your model, but I don't know how to do it. If I have an image sequence with pose data, how do I test?

    > I want to test some other images on your model, but I don't know how to do it. If I have an image sequence with pose data, how do I test?

    1. Align the images in the way StyleGAN does. You can refer to this script: align_images.py.
    2. Project the aligned images into the W space, also known as GAN inversion. Unlike common 2D inversion, you had better set an appropriate yaw/pitch/fov for the CIPS-3D generator so that the initial pose of G(w) is consistent with the image being inverted (see the sketch after this list).
    3. After you get the w of the image, you can reconstruct images of different styles using G'(w). G' can be obtained by interpolating generators of different domains.
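
    A minimal sketch of step 2, assuming a generator callable G(w, yaw=..., pitch=..., fov=...) that is differentiable in w (this signature is an assumption for illustration, not the repository's actual API):

    ```python
    # Illustrative W-space projection (GAN inversion) sketch.
    # G is assumed to map (w, camera pose) to an image, differentiably in w.
    import torch
    import torch.nn.functional as F

    def invert(G, target, yaw=0.0, pitch=0.0, fov=12.0, steps=500, lr=0.01):
        w = torch.zeros(1, 512, requires_grad=True)  # better: init from mean w
        opt = torch.optim.Adam([w], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            img = G(w, yaw=yaw, pitch=pitch, fov=fov)  # pose matched to target
            loss = F.mse_loss(img, target)  # in practice add LPIPS, w-regularizers
            loss.backward()
            opt.step()
        return w.detach()
    ```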

    Hope this helps.

    Originally posted by @PeterouZh in https://github.com/PeterouZh/CIPS-3D/issues/7#issuecomment-963163677

    opened by zhywanna 2
  • Configuration environment issues


    Hi, good job!

    I have a problem, please help me.

    Running pip install -e torch_fidelity_lib fails with: ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /media/sdb/wd/test_code/CIPS-3D/torch_fidelity_lib

    opened by Stephanie-ustc 2
  • The pretrained model can be used in finetune_photo2cartoon.sh?


    I loaded the FFHQ pre-trained model from the pre-trained checkpoints and set finetune_dir to that checkpoint path in finetune_photo2cartoon.sh, but it does not seem to work. I want to know whether the pre-trained model can be used with finetune_photo2cartoon.sh.

    opened by Benwang-chen 1
  • A few questions


    Dear Dr. Zhou, thanks for sharing your great work, and congratulations on completing your Ph.D.! I have a few questions and hope for your reply.

    1. I found a command in another issue (https://github.com/PeterouZh/CIPS-3D/issues/31#issue-1196645855): python exp/cips3d/scripts/sample_images.py --tl_config_file exp/cips3d/configs/ffhq_exp.yaml --tl_command sample_images. But I can't find those arguments in sample_images.py and am confused about how that user knew how to use them. I also found some packages imported from the tl2 library but failed to find any documentation. Are there any instructions I have missed besides the README?
    2. I see two generator files in /CIPS-3D/exp/cips3d/models, generator.py and generator_v1.py. Which one should I use?
    3. Which class in the generator files is the complete generator module? I want to do some inversion tests and am not sure whether it is the class GeneratorNerfINR. Also, are G_ema.pth and generator.pth in the checkpoint the corresponding parameters that I can load directly?
    4. What is state_dict.pth in the checkpoint used for?

    By the way, I think using Chinese would be more convenient for us. Thanks!

    opened by zhywanna 1
  • Output images with gradient during inference


    Hi there,

    I am trying to output images with gradients attached. However, with your default testing code, whole_grad_forward is called (https://github.com/PeterouZh/CIPS-3D/blob/aee40251a02c34e58d3002bcb845151c41b538f0/exp/dev/nerf_inr/models/generator_nerf_inr_v16.py#L1395), which removes the gradient. If I comment out the torch.no_grad(), it runs out of memory. Is there a way to output an image with its gradient? Thanks.
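
    A common workaround is to keep gradients but bound peak memory with gradient checkpointing over pixel chunks. A sketch under assumptions (render_chunk is a hypothetical function mapping a chunk of rays to RGB, not this repository's API):

    ```python
    # Sketch: render with gradients while limiting peak memory.
    # Checkpointing discards intermediate activations in the forward pass
    # and recomputes them during backward, trading compute for memory.
    import torch
    from torch.utils.checkpoint import checkpoint

    def render_with_grad(render_chunk, rays, chunk=4096):
        outs = []
        for i in range(0, rays.shape[0], chunk):
            outs.append(checkpoint(render_chunk, rays[i:i + chunk],
                                   use_reentrant=False))
        return torch.cat(outs, dim=0)
    ```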

    opened by lelechen63 1
  • closed


    Hi,

    Thanks for the great work. I am trying to invert images into w/z using the pretrained model. Would you release the pretrained discriminator to enable this inversion feature? Thanks.

    opened by lelechen63 1
  • Question about the input of shallow nerf network


    I know NeRF is a view-dependent synthesis method due to its view-direction input. However, in your code I find you don't use it. Why does CIPS-3D still work? Can novel-view synthesis be achieved with only the world coordinate as input? Why?
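
    For context, a minimal sketch of the kind of view-independent radiance MLP the question describes (layer sizes are illustrative, not the repository's): color and density depend on position only, so the radiance is Lambertian-like, yet rendering it from a new camera still yields pose-consistent novel views.

    ```python
    # Sketch: a radiance field conditioned on 3D position only,
    # i.e. no view-direction input; RGB and density come from xyz alone.
    import torch.nn as nn

    class ViewIndependentField(nn.Module):
        def __init__(self, hidden=256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 4),  # RGB (3) + density (1)
            )

        def forward(self, xyz):
            return self.mlp(xyz)
    ```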

    opened by shoutOutYangJie 1
  • Why not train from scratch?


    Hello, and thank you for your open-source code.

    In the README you explain that the training pipeline for high resolutions is 32->64->128->256, with each stage fine-tuned from the model of the previous resolution. Such a training strategy is indeed much easier than direct training. Have you tried training at 256 resolution directly? Could adjusting the training parameters achieve a similar result?

    opened by BlingHe 0
  • How can I view the G model's outputs? web_demo.py shows only 3 identical pictures

    How can I view the G model's outputs?

    When I run web_demo.py, the web page displays only 3 identical pictures, and 1 picture displays nothing (it is black). (Screenshots of the page and of web_demo.py omitted.)

    opened by jojoWd 0
  • Can I put my face photo into your pre-trained web demo to generate a 3D video?


    Hello, thank you for your contribution. I tried to run your web demo. I saw you say, "Thus current stylization is limited to randomly generated images. To edit a real image, we need to project the image to the latent space of the generator." So I can't import other face images to produce the effect shown in the demo videos? Thank you.

    opened by lemonsstyle 0
  • How to set the near and far plane in NeRF network?


    Thanks for your excellent work. I am curious why you set ray_near and ray_end to 0.88 and 1.12 (and similarly for other variables such as h_stddev). Were these values chosen empirically?
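
    For reference, a minimal sketch of how the near/far values bound the depth samples along each ray in a standard NeRF-style stratified sampler (generic names, not the repository's code):

    ```python
    # Sketch: stratified depth sampling between the near and far planes.
    import torch

    def sample_depths(n_rays, n_samples, near=0.88, far=1.12):
        # One uniform sample inside each of n_samples equal-width bins.
        edges = torch.linspace(near, far, n_samples + 1)
        lower, upper = edges[:-1], edges[1:]
        u = torch.rand(n_rays, n_samples)
        return lower + (upper - lower) * u
    ```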

    opened by cwchenwang 1
  • add web demo/model to Huggingface


    Hi, would you be interested in adding CIPS-3D to Hugging Face? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community.

    Examples from other organizations:
      Keras: https://huggingface.co/keras-io
      Microsoft: https://huggingface.co/microsoft
      Facebook: https://huggingface.co/facebook

    Example Spaces with repos:
      GitHub: https://github.com/salesforce/BLIP
      Spaces: https://huggingface.co/spaces/salesforce/BLIP

      GitHub: https://github.com/facebookresearch/omnivore
      Spaces: https://huggingface.co/spaces/akhaliq/omnivore

    And here are guides for adding Spaces/models/datasets to your org:
      How to add a Space: https://huggingface.co/blog/gradio-spaces
      How to add models: https://huggingface.co/docs/hub/adding-a-model
      Uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

    Please let us know if you would be interested and if you have any questions, we can also help with the technical implementation.

    opened by AK391 1