Deep motion transfer

Overview

animation-with-keypoint-mask

Paper

https://arxiv.org/abs/2112.10457

The right-most square in each example is the final result.

[figure] Softmax mask (circles)

[figure] Heatmap mask

Installation

conda env create -f environment.yml
conda activate venv11
We use PyTorch 1.7.1 with Python 3.8.

Please obtain the pretrained keypoint module. You can do so by running

git checkout fomm-new-torch

Then follow the instructions in the README of that branch, or obtain a pre-trained checkpoint from
https://github.com/AliaksandrSiarohin/first-order-model
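
As a quick sanity check that the keypoint checkpoint loads, you can inspect it with a few lines of Python. This is only a sketch: the key names (e.g. 'kp_detector') are an assumption and may differ in the actual file.

import torch

# Load the checkpoint on CPU; the path is a placeholder.
checkpoint = torch.load('path/to/checkpoint/with/pretrained/kp', map_location='cpu')
print(list(checkpoint.keys()))  # assumption: contains a 'kp_detector' state dict
if 'kp_detector' in checkpoint:
    print(f"kp_detector: {len(checkpoint['kp_detector'])} tensors")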

Training

To train a model on a specific dataset, run:

CUDA_VISIBLE_DEVICES=0,1,2,3 python run.py --config config/dataset_name.yaml --device_ids 0,1,2,3 --checkpoint_with_kp path/to/checkpoint/with/pretrained/kp

Use, e.g., taichi-256-q.yaml for the keypoint heatmap mask model, or taichi-256-softmax-q.yaml for drawn circular keypoints (the softmax mask) instead.

The code will create a folder in the log directory (each run creates a new time-stamped directory). Checkpoints are saved to this folder. To check the loss values during training, see log.txt. You can also check training-data reconstructions in the train-vis sub-folder. By default, the batch size is tuned to run on 4 Titan X GPUs (apart from speed, this does not make much difference). You can change the batch size in train_params in the corresponding .yaml file.
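
Since each run gets its own time-stamped directory, a small helper like the following sketch can locate the newest run and print the last lines of its log.txt (the log-directory name 'log' is an assumption; adjust it to your setup):

from pathlib import Path

log_root = Path('log')  # assumption: default log directory
runs = [p for p in log_root.iterdir() if p.is_dir()]
latest = max(runs, key=lambda p: p.stat().st_mtime)  # newest time-stamped run
print('latest run:', latest)
for line in (latest / 'log.txt').read_text().splitlines()[-10:]:
    print(line)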

Evaluation on video reconstruction

To evaluate the reconstruction of the driving video from its first frame, run:

CUDA_VISIBLE_DEVICES=0 python run.py --config config/dataset_name.yaml --mode reconstruction --checkpoint path/to/checkpoint --checkpoint_with_kp path/to/checkpoint/with/pretrained/kp

You will need to specify the path to the checkpoint. A reconstruction sub-folder will be created in the checkpoint folder; the generated videos will be stored there, and also in a png sub-folder in loss-less '.png' format for evaluation. Instructions for computing the metrics from the paper can be found at https://github.com/aliaksandrsiarohin/pose-evaluation.
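
The official metrics are computed with the pose-evaluation repo above; for a quick unofficial sanity check, you can compare the loss-less frames against ground truth with a mean L1 distance. The directory layout below (parallel folders with matching file names) is an assumption:

import numpy as np
import imageio
from pathlib import Path

gen_dir = Path('path/to/checkpoint/reconstruction/png')  # assumption: generated frames
gt_dir = Path('path/to/ground/truth/png')                # assumption: matching names

scores = []
for gen_file in sorted(gen_dir.glob('*.png')):
    gen = imageio.imread(gen_file).astype(np.float32) / 255.0
    gt = imageio.imread(gt_dir / gen_file.name).astype(np.float32) / 255.0
    scores.append(np.abs(gen - gt).mean())  # per-frame mean absolute error
print('mean L1:', float(np.mean(scores)))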

Image animation

To animate a source image with the motion of a driving video, run:

CUDA_VISIBLE_DEVICES=0 python run.py --config config/dataset_name.yaml --mode animate --checkpoint path/to/checkpoint --checkpoint_with_kp path/to/checkpoint/with/pretrained/kp

You will need to specify the path to the checkpoint. An animation sub-folder will be created in the same folder as the checkpoint; you can find the generated videos there, and their loss-less versions in the png sub-folder. By default, videos from the test set are paired randomly, but you can specify "source,driving" pairs in the corresponding .csv files. The path to such a file should be set in the pairs_list entry of the corresponding .yaml file.
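
A pairs file can be produced with a short script. The 'source,driving' header is taken from the description above; verify the exact schema against the sample .csv files in the repo. The file names and output path here are hypothetical:

import csv

# Hypothetical test-set file names; replace with your own.
pairs = [('source_a.mp4', 'driving_b.mp4'),
         ('source_c.mp4', 'driving_d.mp4')]

with open('data/pairs.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['source', 'driving'])
    writer.writerows(pairs)

Then point the pairs_list setting in your .yaml config at the resulting file.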

Datasets

  1. Taichi. Follow the instructions in data/taichi-loading or the instructions from https://github.com/aliaksandrsiarohin/video-preprocessing.

Training on your own dataset

  1. Resize all the videos to the same size, e.g. 256x256. The videos can be '.gif', '.mp4', or folders with images. We recommend the latter: for each video, make a separate folder with all the frames in '.png' format. This format is loss-less and has better I/O performance (see the sketch after this list).

  2. Create a folder data/dataset_name with two sub-folders, train and test; put training videos in train and testing videos in test.

  3. Create a config config/dataset_name.yaml. In dataset_params, specify the root directory with root_dir: data/dataset_name. Also adjust the number of epochs in train_params.
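
As an illustration of step 1, the sketch below extracts frames from an '.mp4', resizes them to 256x256, and writes them to a per-video folder of '.png' files. The library choices (imageio plus Pillow; reading '.mp4' also needs imageio-ffmpeg) are mine, not the repo's:

import os
import imageio
from PIL import Image

video_path = 'raw/video001.mp4'               # hypothetical input video
out_dir = 'data/dataset_name/train/video001'  # one folder per video
os.makedirs(out_dir, exist_ok=True)

reader = imageio.get_reader(video_path)
for i, frame in enumerate(reader):
    Image.fromarray(frame).resize((256, 256)).save(
        os.path.join(out_dir, f'{i:07d}.png'))  # loss-less frames
reader.close()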

Additional notes

Citation:

@misc{toledano2021,
  author = {Or Toledano and Yanir Marmor and Dov Gertz},
  title = {Image Animation with Keypoint Mask},
  year = {2021},
  eprint = {2112.10457},
  archivePrefix = {arXiv},
  primaryClass = {cs.CV}
}

Old format (before paper):

@misc{toledano2021,
  author = {Or Toledano and Yanir Marmor and Dov Gertz},
  title = {Image Animation with Keypoint Mask},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/or-toledano/animation-with-keypoint-mask}},
  commit = {015b1f2d466658141c41ea67d7356790b5cded40}
}