Demo programs for the Talking Head Anime from a Single Image 2: More Expressive project.

Overview

Demo Code for "Talking Head Anime from a Single Image 2: More Expressive"

This repository contains demo programs for the Talking Head Anime from a Single Image 2: More Expressive project. Similar to the previous version, it has two programs:

  • The manual_poser lets you manipulate the facial expression and the head rotation of an anime character, given in a single image, through a graphical user interface. The poser is available in two forms: a standard GUI application, and a Jupyter notebook.
  • The ifacialmocap_puppeteer lets you transfer your facial motion, captured by a commercial iOS application called iFacialMocap, to an image of an anime character.

Try the Manual Poser on Google Colab

If you do not have the required hardware (discussed below) or do not want to download the code and set up an environment to run it, click this link to try running the manual poser on Google Colab.

Hardware Requirements

Both programs require a recent and powerful Nvidia GPU to run. I was personally able to run them at good speed with an Nvidia Titan RTX. However, I think recent high-end gaming GPUs such as the RTX 2080, the RTX 3080, or better should do just as well.

The ifacialmocap_puppeteer requires an iOS device that is capable of computing blend shape parameters from a video feed. This means that the device must be able to run iOS 11.0 or higher and must have a TrueDepth front-facing camera. (See this page for more info.) In other words, if you have the iPhone X or something better, you should be all set. Personally, I have used an iPhone 12 mini.

Software Requirements

Both programs were written in Python 3. To run the GUIs, the following software packages are required:

  • Python >= 3.8
  • PyTorch >= 1.7.1 with CUDA support
  • SciPy >= 1.6.0
  • wxPython >= 4.1.1
  • Matplotlib >= 3.3.4

In particular, I created the environment to run the programs with Anaconda, using the following commands:

> conda create -n talking-head-anime-2-demo python=3.8
> conda activate talking-head-anime-2-demo
> conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
> conda install scipy
> pip install wxPython
> conda install matplotlib
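
Once the packages are installed, a quick sanity check like the one below can confirm that the environment is usable. This snippet is just something I suggest for troubleshooting; it is not part of the repository.

import matplotlib
import scipy
import torch
import wx

print("PyTorch:", torch.__version__)                 # should be >= 1.7.1
print("CUDA available:", torch.cuda.is_available())  # should print True
print("SciPy:", scipy.__version__)                   # should be >= 1.6.0
print("wxPython:", wx.__version__)                    # should be >= 4.1.1
print("Matplotlib:", matplotlib.__version__)          # should be >= 3.3.4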

To run the Jupyter notebook version of the manual_poser, you also need:

  • Jupyter Notebook >= 6.2.0
  • IPyWidgets >= 7.6.3

This means that, in addition to the commands above, you also need to run:

> conda install -c conda-forge notebook
> conda install -c conda-forge ipywidgets
> jupyter nbextension enable --py widgetsnbextension
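
To confirm that the widgets extension was enabled, you can list the installed notebook extensions; the widgetsnbextension entry should be reported as enabled and validated:

> jupyter nbextension list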

Lastly, the ifacialmocap_puppeteer requires iFacialMocap, which is available in the App Store for 980 yen. You also need to install the paired desktop application on your PC or Mac. (Linux users, I'm sorry!) Your iOS device and your computer must also be on the same network. (For example, you may connect them to the same wireless router.)

Automatic Environment Construction with Anaconda

You can also use Anaconda to download and install all Python packages in one command. Open your shell, change the directory to where you cloned the repository, and run:

conda env create -f environment.yml

This will create an environment called talking-head-anime-2-demo containing all the required Python packages.
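
Remember to activate the environment before running any of the programs:

> conda activate talking-head-anime-2-demo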

Download the Model

Before running the programs, you need to download the model files from this Dropbox link and unzip them into the data folder of the repository's directory. In the end, the data folder should look like:

+ data
  + illust
    - waifu_00.png
    - waifu_01.png
    - waifu_02.png
    - waifu_03.png
    - waifu_04.png
    - waifu_05.png
    - waifu_06.png
    - waifu_06_buggy.png
  - combiner.pt
  - eyebrow_decomposer.pt
  - eyebrow_morphing_combiner.pt
  - face_morpher.pt
  - two_algo_face_rotator.pt

The model files are distributed with the Creative Commons Attribution 4.0 International License, which means that you can use them for commercial purposes. However, if you distribute them, you must, among other things, say that I am the creator.

Running the manual_poser Desktop Application

Open a shell. Change your working directory to the repository's root directory. Then, run:

> python tha2/app/manual_poser.py

Note that before running the command above, you might have to activate the Python environment that contains the required packages. If you created an environment using Anaconda as was discussed above, you need to run

> conda activate talking-head-anime-2-demo

if you have not already activated the environment.

Running the manual_poser Jupyter Notebook

Open a shell. Activate the environment. Change your working directory to the repository's root directory. Then, run:

> jupyter notebook

A browser window should open. In it, open tha2.ipynb. Once you have done so, you should see that it only has one cell. Run it. Then, scroll down to the end of the document, and you'll see the GUI there.

Running the ifacialmocap_puppeteer

First, run iFacialMocap on your iOS device. It should show you the device's IP address. Jot it down. Keep the app open.

IP address in iFacialMocap screen

Then, run the companion desktop application.

iFacialMocap desktop application

Click "Open Advanced Setting >>". The application should expand.

Click the 'Open Advanced Setting >>' button.

Click the button that says "Maya" on the right side.

Click the 'Maya' button.

Then, click "Blender."

Select 'Blender' mode in the desktop application

Next, replace the IP address on the left side with your iOS device's IP address.

Replace IP address with device's IP address.

Click "Connect to Blender."

Click 'Connect to Blender.'

Open a shell. Activate the environment. Change your working directory to the repository's root directory. Then, run:

> python tha2/app/ifacialmocap_puppeteer.py

If the programs are connected properly, the progress bars at the bottom of the ifacialmocap_puppeteer window should move when you move your face in front of the iOS device's front-facing camera.

You should see the progress bars moving.

If all is well, load a character image, and it should follow your facial movements.

Constraints on Input Images

In order for the model to work well, the input image must obey the following constraints:

  • It must be of size 256 x 256.
  • It must be of PNG format.
  • It must have an alpha channel.
  • It must contain only one humanoid anime character.
  • The character must be looking straight ahead.
  • The head of the character should be roughly contained in the middle 128 x 128 box.
  • All pixels that do not belong to the character (i.e., background pixels) should have RGBA = (0,0,0,0).

Image specification
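
The constraints about the character itself (a single character, facing forward, head roughly inside the middle 128 x 128 box) have to be checked by eye, but the mechanical ones can be verified with a few lines of Python. Below is a minimal sketch using Pillow; Pillow is not listed among the requirements above, so this assumes you have installed it separately (for example, with pip install Pillow).

from PIL import Image

def check_input_image(path):
    image = Image.open(path)
    assert image.format == "PNG", "the image must be a PNG file"
    assert image.size == (256, 256), "the image must be 256 x 256"
    assert image.mode == "RGBA", "the image must have an alpha channel"
    # Every fully transparent pixel must be exactly (0, 0, 0, 0).
    for r, g, b, a in image.getdata():
        if a == 0 and (r, g, b) != (0, 0, 0):
            raise AssertionError("background pixels must have RGBA = (0,0,0,0)")
    print(path, "passes the mechanical checks")

check_input_image("data/illust/waifu_00.png")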

FAQ: I prepared an image just like you said, why is my output so ugly?!?

This is most likely because your image does not obey the "background RGBA = (0,0,0,0)" constraint. In other words, your background pixels are (RRR,GGG,BBB,0) for some RRR, GGG, BBB > 0 rather than (0,0,0,0). This happens when you use Photoshop because it does not clear the RGB channels of transparent pixels.

Let's look at an example. When I tried to use the manual_poser with data/illust/waifu_06_buggy.png, here's what I got.

A failure case

When you look at the image, there seems to be nothing wrong with it.

waifu_06_buggy.png

However, if you inspect it with GIMP, you will see that the RGB channels have non-black backgrounds, which means that those pixels have non-zero RGB values.

In the buggy image, background pixels have colors in the RGB channels.

What you want, instead, is something like the non-buggy version: data/illust/waifu_06.png, which looks exactly the same as the buggy one to the naked eye.

waifu_06.png

However, in GIMP, all channels have black backgrounds.

In the good image, background pixels do not have colors in any channels.

Because of this, the output was clean.

A success case

A way to make sure that your image works well with the model is to prepare it with GIMP. When exporting your image to the PNG format, make sure to uncheck "Save color values from transparent pixels" before you hit "Export."

Make sure to uncheck 'Save color values from transparent pixels' before exporting!
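
If re-exporting from GIMP is inconvenient, you can also clear the color values of transparent pixels in code. Below is a small sketch using Pillow (again, not something shipped with the repository); the output file name is just an example.

from PIL import Image

def clear_transparent_pixels(input_path, output_path):
    image = Image.open(input_path).convert("RGBA")
    # Zero out the RGB values of every fully transparent pixel.
    cleaned = [
        (0, 0, 0, 0) if a == 0 else (r, g, b, a)
        for (r, g, b, a) in image.getdata()
    ]
    output = Image.new("RGBA", image.size)
    output.putdata(cleaned)
    output.save(output_path)

clear_transparent_pixels("data/illust/waifu_06_buggy.png", "data/illust/waifu_06_fixed.png")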

Disclaimer

While the author is an employee of Google Japan, this software is not Google's product and is not supported by Google.

The copyright of this software belongs to me as I have requested it using the IARC process. However, Google might claim the rights to the intellectual property of this invention.

The code is released under the MIT license. The model is released under the Creative Commons Attribution 4.0 International License.
