CharacterGAN: Few-Shot Keypoint Character Animation and Reposing

CharacterGAN

Implementation of the paper "CharacterGAN: Few-Shot Keypoint Character Animation and Reposing" by Tobias Hinz, Matthew Fisher, Oliver Wang, Eli Shechtman, and Stefan Wermter (open the paper PDF with Adobe Acrobat or a similar viewer to see the animated visualizations).

Supplementary material can be found here.

Our model can be trained on only a few images (e.g. 10) of a given character labeled with user-chosen keypoints. The resulting model can be used to animate the character it was trained on by interpolating between poses specified by their keypoints. We can also repose characters by simply moving the keypoints to the desired positions. To train the model, all we need are a few images depicting the character in diverse poses from the same viewpoint, the keypoints for each image, a file that describes how the keypoints are connected (the character's skeleton), and a file stating which keypoints lie in the same layer.

Examples

Animation: For all examples the model was trained on 8-15 images (see first row) of the given character.

Character:        dog  maddy  ostrich  man  robot  man  cow
Training images:  12   15     9        12   15     15   8
[Animations: dog_animation, maddy_animation, ostrich_animation, man_animation, robot_animation, man_animation, cow_animation]



Frame interpolation: Example of interpolations between two poses with the start and end keypoints highlighted.

[Interpolation sequences for the man and dog characters, with start and end keypoints highlighted.]



Reposing: You can use our interactive GUI to easily repose a given character based on keypoints.

[Interactive GUI demos: dog_gui, man_gui, cow_gui]

Installation

  • python 3.8
  • pytorch 1.7.1

Install the remaining dependencies with:

pip install -r requirements.txt

Training

Training Data

All training data for a given character should be in a single folder. We used this website to label our images, but there are of course other possibilities.

The folder should contain:

  • all training images (all in the same resolution),
  • a file called keypoints.csv (containing the keypoints for each image),
  • a file called keypoints_skeleton.csv (containing skeleton information, i.e. how keypoints are connected with each other), and
  • a file called keypoints_layers.csv (containing the information about which layer each keypoint resides in).

The structure of the keypoints.csv file is (no header): keypoint_label,x_coord,y_coord,file_name. The first column describes the keypoint label (e.g. head), the next two columns give the location of the keypoint, and the final column states which training image this keypoint belongs to.

The structure of the keypoints_skeleton.csv file is (no header): keypoint,connected_keypoint,connected_keypoint,.... The first column describes which keypoint we are describing in this line, the following columns describe which keypoints are connected to that keypoint (e.g. elbow, shoulder, hand would state that the elbow keypoint should be connected to the shoulder keypoint and the hand keypoint).

The structure of the keypoints_layers.csv file is (no header): keypoint,layer. "Keypoint" is the keypoint label (same as used in the previous two files) and "layer" is an integer describing which layer the keypoint resides in.
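The three files described above can be loaded and cross-checked with a few lines of Python. This is a hedged sketch, not code from this repository; `load_keypoints` and its folder argument are illustrative names:

```python
# Sketch: load the three keypoint files described above (no headers)
# and sanity-check that every labeled keypoint has a layer assignment.
import csv
from collections import defaultdict

def load_keypoints(folder):
    # keypoints.csv: keypoint_label,x_coord,y_coord,file_name
    per_image = defaultdict(dict)
    with open(f"{folder}/keypoints.csv") as f:
        for label, x, y, fname in csv.reader(f):
            per_image[fname][label] = (float(x), float(y))

    # keypoints_skeleton.csv: keypoint,connected_keypoint,...
    skeleton = {}
    with open(f"{folder}/keypoints_skeleton.csv") as f:
        for row in csv.reader(f):
            skeleton[row[0]] = row[1:]

    # keypoints_layers.csv: keypoint,layer
    layers = {}
    with open(f"{folder}/keypoints_layers.csv") as f:
        for kp, layer in csv.reader(f):
            layers[kp] = int(layer)

    # Every keypoint that appears in an image should have a layer entry.
    for kps in per_image.values():
        for kp in kps:
            assert kp in layers, f"keypoint {kp} has no layer entry"
    return per_image, skeleton, layers
```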

See our example training data in datasets for examples of all three files.

We provide two examples (produced by Zuzana Studená) for training, located in datasets. Our other examples were trained on data from Adobe Stock or from Character Animator, and we currently have no license to distribute them. You can purchase the Stock data here:

  • Man: we used all images
  • Dog: we used all images
  • Ostrich: we used the first nine images
  • Cow: we used the first eight images

There are also several websites where you can download sprite sheets for free.

Train a Model

To train a model with the default parameters from our paper run:

python train.py --gpu_ids 0 --num_keypoints 14 --dataroot datasets/Watercolor-Man --fp16 --name Watercolor-Man

Training one model should take about 60 (FP16) to 90 (FP32) minutes on an NVIDIA GeForce RTX 2080 Ti. You can usually use fewer iterations for training and still achieve good results (see next section).

Training Parameters

You can adjust several parameters at train time to possibly improve your results.

  • --name to change the name of the folder in which the results are stored (default is CharacterGAN-Timestamp)
  • --niter 4000 and --niter_decay 4000 to adjust the number of training steps (niter_decay is the number of training steps during which the learning rate is reduced linearly; the default is 8000 for both, but you can get good results with fewer iterations)
  • --mask True --output_nc 4 to train with a mask
  • --skeleton False to train without skeleton information
  • --bkg_color 0 to set the background color of the training images to black (default is white, only important if you train with a mask)
  • --batch_size 10 to train with a different batch size (default is 5)

The file options/keypoints.py lets you modify/add/remove keypoints for your characters.

Results

The output is saved to checkpoints/ and the training process is logged with Tensorboard. To monitor the progress, go to the respective checkpoint folder and run

 tensorboard --logdir .

Testing

At test time you can either use the model to animate the character or use our interactive GUI to change the position of individual keypoints.

Animate Character

To animate a character (or create interpolations between two images):

python animate_example.py --gpu_ids 0 --model_path checkpoints/Watercolor-Man-.../ --img_animation_list datasets/Watercolor-Man/animation_list.txt --dataroot datasets/Watercolor-Man

--img_animation_list points to a file that lists the images that should be used for animation. The file should contain one file name per line pointing to an image in dataroot. The model then generates an animation by interpolating between the images in the given order. See datasets/Watercolor-Man/animation_list.txt for an example.

You can add --draw_kps to visualize the keypoints in the animation. You can specify the gif parameters by setting --num_interpolations 10 and --fps 5: num_interpolations specifies how many frames are generated between two real images (from img_animation_list), while fps determines the frames per second of the generated gif.
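The interpolation itself amounts to linearly blending the keypoint coordinates of two real poses; the model then renders a frame for each intermediate pose. A minimal sketch of this idea (function and argument names are illustrative, not the repository's API):

```python
# Generate intermediate keypoint poses between two real poses by
# linear interpolation of each keypoint's (x, y) coordinates.
def interpolate_poses(kps_a, kps_b, num_interpolations=10):
    """kps_a/kps_b: {label: (x, y)} for the start and end pose."""
    frames = []
    for i in range(1, num_interpolations + 1):
        t = i / (num_interpolations + 1)  # fraction of the way from a to b
        frames.append({
            label: ((1 - t) * kps_a[label][0] + t * kps_b[label][0],
                    (1 - t) * kps_a[label][1] + t * kps_b[label][1])
            for label in kps_a
        })
    return frames
```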

Modify Individual Keypoints

To run the interactive GUI:

python visualizer.py --gpu_ids 0 --model_path checkpoints/Watercolor-Man-.../

Set --gpu_ids -1 to run the model on a CPU. You can also scale the images during visualization, e.g. use --scale 2.

Patch-based Refinement

We use this implementation to run the patch-based refinement step on our generated images. The easiest way to do this is to merge all your training images into a single large image file and use this image file as the style and source image.
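One way to build that single large image is to paste all training images into a grid. This sketch assumes Pillow is installed and all images share the same resolution (as required for training); `merge_images` is an illustrative helper, not part of this repository:

```python
# Paste all training images into one grid image to use as the
# style/source image for patch-based refinement.
import glob
import math
from PIL import Image

def merge_images(paths, out_path="style_source.png"):
    images = [Image.open(p) for p in sorted(paths)]
    w, h = images[0].size
    cols = math.ceil(math.sqrt(len(images)))       # roughly square grid
    rows = math.ceil(len(images) / cols)
    sheet = Image.new("RGB", (cols * w, rows * h), "white")
    for i, img in enumerate(images):
        sheet.paste(img, ((i % cols) * w, (i // cols) * h))
    sheet.save(out_path)
    return sheet

# Example: merge_images(glob.glob("datasets/Watercolor-Man/*.png"))
```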

Acknowledgements

Our implementation uses code from Pix2PixHD, the TPS augmentation from DeepSIM, and the patch-based refinement code from https://ebsynth.com/ (GitHub).

We would also like to thank Zuzana Studená who produced some of the artwork used in this work.

Citation

If you found this code useful, please consider citing:

@article{hinz2021character,
    author  = {Hinz, Tobias and Fisher, Matthew and Wang, Oliver and Shechtman, Eli and Wermter, Stefan},
    title   = {CharacterGAN: Few-Shot Keypoint Character Animation and Reposing},
    journal = {arXiv preprint arXiv:2102.03141},
    year    = {2021}
}