(ICCV 2021) Official code of "Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing."

Overview

Dressing in Order (DiOr)

👚 [Paper] 👖 [Webpage] 👗 [Running this code]

The official implementation of "Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing" by Aiyu Cui, Daniel McKee, and Svetlana Lazebnik (ICCV 2021).

🔔 Updates

Supported Try-on Applications

Supported Editing Applications

More results

Play with demo.ipynb!


Get Started

Please follow the installation instructions in GFLA to set up the environment.

Then run

pip install -r requirements.txt

If you only want to run inference, you can use a later version of PyTorch and do not need to install GFLA's CUDA functions. In that case, please specify --frozen_flownet.
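Conceptually, --frozen_flownet keeps the pretrained flow estimator fixed at inference time, so its custom CUDA operations never need gradients. The snippet below is only an illustrative sketch of what freezing a pretrained sub-network means in PyTorch, not the repo's actual flow-estimator code:

# Illustrative sketch only: "freezing" a pretrained sub-network in PyTorch.
# The actual flow estimator and checkpoint loading live in the repo's model code.
import torch.nn as nn

def freeze(module: nn.Module) -> nn.Module:
    """Disable gradients and switch to eval mode so the module stays fixed."""
    for p in module.parameters():
        p.requires_grad_(False)
    return module.eval()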

Dataset

We run experiments on the DeepFashion dataset. To set it up:

  1. Download and unzip img_highres.zip from the DeepFashion In-shop dataset into $DATA_ROOT.
  2. Download the train/val split and pre-processed keypoint annotations from the GFLA source or the PATN source, and put the .csv and .lst files in $DATA_ROOT.
    • If you want to extract the keypoints from scratch, run OpenPose as the pose estimator and follow the instructions from PATN to generate the keypoints in the desired format.
  3. Run python tools/generate_fashion_dataset.py to split the data. (Please specify $DATA_ROOT accordingly.)
  4. Get the human parsing maps. You can obtain them in either of the following ways:
    • Run the off-the-shelf human parser SCHP (with LIP labels) on $DATA_ROOT/train and $DATA_ROOT/test, and name the output parsing folders $DATA_ROOT/trainM_lip and $DATA_ROOT/testM_lip respectively.
    • Download the preprocessed parsing from here and put it under $DATA_ROOT.
  5. Download standard_test_anns.txt for fast visualization.

After these steps, your dataset folder should look like this:

+ $DATA_ROOT
|   + train (all training images)
|   |   - xxx.jpg
|   |     ...
|   + trainM_lip (human parse of all training images)
|   |   - xxx.png
|   |     ...
|   + test (all test images)
|   |   - xxx.jpg
|   |     ...
|   + testM_lip (human parse of all test images)
|   |   - xxx.png
|   |     ...
|   - fashion-pairs-train.csv (paired poses for training)
|   - fashion-pairs-test.csv (paired poses for test)
|   - fashion-annotation-train.csv (keypoints for training images)
|   - fashion-annotation-test.csv  (keypoints for test images)
|   - train.lst
|   - test.lst
|   - standard_test_anns.txt
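
If you want to sanity-check the layout, the short script below only verifies that the expected folders and files exist. It is an illustrative sketch, not part of the repo, and the './fashion' fallback for $DATA_ROOT is our assumption:

# Sanity-check sketch for the dataset layout above.
# Assumes $DATA_ROOT is set in the environment (falls back to a hypothetical './fashion').
import os

DATA_ROOT = os.environ.get('DATA_ROOT', './fashion')

expected_dirs = ['train', 'trainM_lip', 'test', 'testM_lip']
expected_files = [
    'fashion-pairs-train.csv', 'fashion-pairs-test.csv',
    'fashion-annotation-train.csv', 'fashion-annotation-test.csv',
    'train.lst', 'test.lst', 'standard_test_anns.txt',
]

for d in expected_dirs:
    path = os.path.join(DATA_ROOT, d)
    n = len(os.listdir(path)) if os.path.isdir(path) else 0
    print(f'{"OK " if n else "MISSING"} {d}/ ({n} files)')

for f in expected_files:
    ok = os.path.isfile(os.path.join(DATA_ROOT, f))
    print(f'{"OK " if ok else "MISSING"} {f}')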

Run Demo

Please download the pretrained weights from here and unzip at checkpoints/.

After downloading the pretrained model and setting up the data, you can try out our applications in the notebook demo.ipynb.

(The checkpoints above were reproduced, so quantitative results may differ slightly from those reported in the paper. For the original results, please check our released generated images here.)

(DIORv1_64 was trained with a minor code difference, but it may give better visual results in some applications. To try it, specify --netG diorv1.)


Training

Warmup the Global Flow Field Estimator

Note: if you do not want to warm up the Global Flow Field Estimator yourself, you can instead use the pretrained GFLA weights, available here.

Otherwise, run

sh scripts/run_pose.sh

Training

After warming up the flownet, train the pipeline by running

sh scripts/run_train.sh

Run tensorboard --logdir checkpoints/$EXP_NAME/train to monitor training in TensorBoard. Resetting the discriminators may help when training gets stuck in local minima; a sketch of one way to do this follows.
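
The snippet below sketches a generic discriminator reset using the normal initialization common in pix2pix-style code. It is an assumption-based illustration, not a built-in option of scripts/run_train.sh:

# Sketch only: re-initialize a discriminator mid-training so it restarts from scratch.
# Assumes the standard pix2pix-style normal init (gain 0.02); adapt to your own netD.
import torch.nn as nn

def reset_discriminator(netD: nn.Module, init_gain: float = 0.02) -> None:
    """Re-draw conv/linear/norm weights of the given discriminator in place."""
    def _init(m):
        if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
            nn.init.normal_(m.weight, 0.0, init_gain)
            if m.bias is not None:
                nn.init.constant_(m.bias, 0.0)
        elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d)) and m.weight is not None:
            nn.init.normal_(m.weight, 1.0, init_gain)
            nn.init.constant_(m.bias, 0.0)
    netD.apply(_init)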

Evaluations

To download our generated images (256x176, as reported in the paper): here.

SSIM, FID and LPIPS

To run evaluation (SSIM, FID, and LPIPS) on the pose transfer task:

sh scripts/run_eval.sh
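
For reference, the snippet below illustrates the SSIM and LPIPS metrics on a single image pair using the scikit-image and lpips packages. It is only a sketch with hypothetical file paths, not the repo's evaluation script; scripts/run_eval.sh runs the full evaluation over the test set:

# Sketch: SSIM and LPIPS for one (real, generated) pair. File paths are hypothetical.
# Requires: pip install lpips scikit-image pillow (channel_axis needs scikit-image >= 0.19).
import lpips
import numpy as np
import torch
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def load_rgb(path, hw=(256, 176)):
    # PIL resize takes (width, height); 256x176 (HxW) matches the resolution reported in the paper.
    return np.asarray(Image.open(path).convert('RGB').resize((hw[1], hw[0])))

real = load_rgb('results/real.jpg')   # hypothetical path
fake = load_rgb('results/fake.jpg')   # hypothetical path

# SSIM on uint8 RGB images, averaged over channels.
ssim_score = ssim(real, fake, channel_axis=-1, data_range=255)

# LPIPS expects NCHW float tensors scaled to [-1, 1].
to_tensor = lambda im: torch.from_numpy(im).permute(2, 0, 1)[None].float() / 127.5 - 1.0
lpips_fn = lpips.LPIPS(net='alex')
lpips_score = lpips_fn(to_tensor(real), to_tensor(fake)).item()

print(f'SSIM: {ssim_score:.4f}  LPIPS: {lpips_score:.4f}')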

Cite us!

If you find this work helpful, please consider starring 🌟 this repo and citing us as

@article{cui2021dressing,
  title={Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing},
  author={Cui, Aiyu and McKee, Daniel and Lazebnik, Svetlana},
  journal={arXiv preprint arXiv:2104.07021},
  year={2021}
}

Acknowledgements

This repository is built upon GFLA, pytorch-CycleGAN-and-pix2pix, PATN, and MUNIT. Please be aware of their licenses when using the code.

Many thanks to these pioneering researchers for their great work!
