Learning Time-Critical Responses for Interactive Character Control

Overview

(teaser image)

Abstract

This repository implements the paper Learning Time-Critical Responses for Interactive Character Control. The system uses a teacher-student framework to learn time-critically responsive policies, which guarantee the time-to-completion between user inputs and their associated responses regardless of the size and composition of the motion databases. The code is written in Java and Python, based on TensorFlow.

Publications

Kyungho Lee, Sehee Min, Sunmin Lee, and Jehee Lee. 2021. Learning Time-Critical Responses for Interactive Character Control. ACM Trans. Graph. 40, 4, 147. (SIGGRAPH 2021)

Project page: http://mrl.snu.ac.kr/research/ProjectAgile/Agile.html

Paper: http://mrl.snu.ac.kr/research/ProjectAgile/AGILE_2021_SIGGRAPH_author.pdf

Youtube: https://www.youtube.com/watch?v=rQKuvxg5ZHc

How to install

This code is implemented in Java and Python and was developed using Eclipse on Windows. A 64-bit Windows environment is required to run it.

Requirements

Install JDK 1.8

Java SE Development Kit 8 Downloads

Install Eclipse

Install Eclipse IDE for Java Developers

Install Python 3.6

https://www.python.org/downloads/release/python-368/

Install pydev to Eclipse

https://www.pydev.org/download.html

Install CUDA 10.0 and cuDNN

CUDA Toolkit 10.0 Archive

NVIDIA cuDNN

Install Visual C++ Redistributable for VS2012

Laplacian Motion Editing (PmQmJNI.dll) is implemented in C++, and the Visual C++ 2012 runtime is required to run it.

Visual C++ Redistributable for Visual Studio 2012 Update 4

Install JEP(Java Embedded Python)

Java Embedded Python

This library requires parts of a Visual Studio installation. The exact components are unclear, but the .NET Framework 3.5 and the VC++ 2015.3 v14.00 (v140) toolset are likely needed. Installing Visual Studio 2017 or later may help.

Install TensorFlow 1.14.0

pip install tensorflow-gpu==1.14.0
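
To verify that the GPU build of TensorFlow and the CUDA 10.0 / cuDNN installation can see each other, you can run a quick check like the one below in the Python 3.6 environment (a minimal sketch; it assumes only the packages installed above):

    # Sanity check for the TensorFlow 1.14 GPU setup.
    # If is_gpu_available() prints False, TensorFlow cannot find CUDA 10.0 / cuDNN.
    import tensorflow as tf

    print(tf.__version__)              # expected: 1.14.0
    print(tf.test.is_gpu_available())  # True when the GPU build finds CUDA/cuDNN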

Install this repository

We recommend cloning the repository through Git in the Eclipse environment.

  1. Open the Git Perspective in Eclipse.
  2. Paste the repository URL and clone the repository ( 'https://git.ncsoft.net/scm/private_khlee/private-khlee-test.git' ).
  3. Select all projects in the Working Tree.
  4. Right-click, select Import Projects, and import the existing Eclipse projects.

Or you can download the repository as a Zip file, extract it, and import it using File->Import->General->Existing Projects into Workspace in Eclipse.

Install third party library

This code uses Interactive Character Animation by Learning Multi-Objective Control for learning the student policy.

Download the required third party library files (ThirdPartyDlls.zip) and extract them into the mrl.motion.critical folder.

Dataset

The full dataset used in the paper cannot be published due to copyright issues; this repository contains only a minimal motion dataset for algorithm validation. The SNU Motion Database was used for the martial arts movements, and the CMU Motion Database for locomotion.

How to run

Eclipse

All of the instructions below assume execution from Eclipse. The executable Java files are grouped in the package mrl.motion.critical.run of the project mrl.motion.critical.

  • You can open a source file directly with Ctrl+Shift+R.
  • You can run the currently open source file with Ctrl+F11.
  • You can configure program arguments in the Run->Run Configurations menu.

Pre-trained student policy

You can see the pre-trained network in action by running RuntimeMartialArtsControlModule.java. The pre-trained network file is located at mrl.python.neural\train\martial_arts_sp_da.

  • 1, 2 : walk, run
  • 3, 4, 5, 6 : martial arts actions
  • q, w, e, r, t : control the critical response time

How to train

  1. Data Annotation & Configuration
    • You can check the motion data list and annotation information by executing MAnnotationRun.java.
  2. Model Configuration
    • The action list, the critical response time of each action, the user input model, and the error metric are defined in MartialArtsConfig.java.
  3. Preprocessing
    • You can precompute the data table for pruning by executing DP_Preprocessing.java.
    • The data file will be located at mrl.motion.critical\output\dp_cache.
  4. Training the teacher policy
    • You can train the teacher policy by executing LearningTeacherPolicy.java.
    • The result will be located at mrl.motion.critical\train_rl.
  5. Generating training data for the student policy
    • You can generate the training data for the student policy by executing StudentPolicyDataGeneration.java.
    • The result will be located at mrl.python.neural\train.
  6. Training the student policy
    • You can train the student policy by executing mrl.python.neural\train_rl.py.
    • You need to set the program arguments in the Run->Run Configurations menu (see the sketch after this list).
      • Argument format example: martial_arts_sp new 0.0001
  7. Running the student policy
    • You can see the trained student policy by running RuntimeMartialArtsControlModule.java.
    • This class will load the student policy located at mrl.python.neural\train.
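
The three program arguments for train_rl.py in step 6 appear to be a run name, a new/continue flag, and a learning rate; that reading is an assumption based only on the example above, so check train_rl.py if it does not match. A minimal sketch of invoking the script from the command line with those arguments:

    # Hypothetical sketch: launching student-policy training outside Eclipse.
    # The script path and example arguments come from this README; how
    # train_rl.py interprets them is assumed, not verified.
    import subprocess
    import sys

    args = ["martial_arts_sp", "new", "0.0001"]  # assumed: run name, new-run flag, learning rate
    subprocess.run([sys.executable, r"mrl.python.neural\train_rl.py", *args], check=True)
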
Owner

Movement Research Lab

Our research group explores new ways of understanding, representing, and animating human movements.