ICON: Implicit Clothed humans Obtained from Normals (CVPR 2022)

Overview

Yuliang Xiu · Jinlong Yang · Dimitrios Tzionas · Michael J. Black

CVPR 2022


Table of Contents
  1. Who needs ICON
  2. TODO
  3. Installation
  4. Dataset Preprocess
  5. Demo
  6. Citation
  7. Acknowledgments
  8. License
  9. Disclosure
  10. Contact


Who needs ICON?

  • Given an RGB image, you could get:
    • image (png): segmentation, normal images (body + cloth), overlapped result (RGB + normal)
    • mesh (obj): SMPL-(X) body, reconstructed clothed human
    • video (mp4): self-rotated clothed human
Intermediate Results
ICON's intermediate results
Final Results
ICON's normal prediction + reconstructed mesh (w/o & w/ smoothing)
  • If you want to create a realistic and animatable 3D clothed avatar directly from a video / sequential images:
    • fully textured with per-vertex color
    • can be animated by SMPL pose parameters (see the posing sketch after this section)
    • natural pose-dependent clothing deformation
ICON+SCANimate+AIST++
3D Clothed Avatar, created from 400+ images using ICON+SCANimate, animated by AIST++
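
As a minimal sketch of what "animated by SMPL pose parameters" means in practice, the snippet below poses a SMPL body via the smplx library; the model path is a placeholder, and none of this is ICON's own code.

import torch
import smplx

# load a SMPL body model (folder path is hypothetical -- use your own)
model = smplx.create("./models", model_type="smpl")

# 69-dim axis-angle body pose (23 joints x 3); zeros give the rest pose
body_pose = torch.zeros(1, 69)
body_pose[0, 50] = 0.8  # e.g., rotate one arm joint

output = model(body_pose=body_pose, return_verts=True)
print(output.vertices.shape)  # torch.Size([1, 6890, 3]): posed vertices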

TODO

  • testing code and pretrained models (*: self-implemented versions)
    • ICON (w/ & w/o global encoder, w/ PyMAF/HybrIK/PIXIE/PARE as HPS)
    • PIFu* (RGB image + predicted normal map as input)
    • PaMIR* (RGB image + predicted normal map as input, w/ PyMAF/PARE as HPS)
  • Google Colab notebook
  • dataset processing pipeline
  • training and evaluation code
  • Video-to-Avatar module

Installation

Please follow the Installation Instructions to set up all the required packages, extra data, and models.

Dataset Preprocess

Please follow the Data Preprocessing Instructions to generate the train/val/test dataset from raw scans (THuman2.0).

Demo

cd ICON/apps

# PIFu* (*: re-implementation)
python infer.py -cfg ../configs/pifu.yaml -gpu 0 -in_dir ../examples -out_dir ../results

# PaMIR* (*: re-implementation)
python infer.py -cfg ../configs/pamir.yaml -gpu 0 -in_dir ../examples -out_dir ../results

# ICON w/ global filter (better visual details --> lower Normal Error)
python infer.py -cfg ../configs/icon-filter.yaml -gpu 0 -in_dir ../examples -out_dir ../results -hps_type {pixie/pymaf/pare/hybrik}

# ICON w/o global filter (higher evaluation scores --> lower P2S/Chamfer Error)
python infer.py -cfg ../configs/icon-nofilter.yaml -gpu 0 -in_dir ../examples -out_dir ../results -hps_type {pixie/pymaf/pare/hybrik}
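
After a run finishes, something like the following minimal sketch can inspect the reconstructed meshes; the exact file layout under ../results depends on the config and input names, so the glob pattern is an assumption.

import glob
import trimesh

# walk the demo output folder and print basic stats for every mesh found
for obj_path in glob.glob("../results/**/*.obj", recursive=True):
    mesh = trimesh.load(obj_path, force="mesh")
    print(obj_path, len(mesh.vertices), "verts,", len(mesh.faces), "faces")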

More Qualitative Results

Comparison with other state-of-the-art methods
Predicted normals on in-the-wild images with extreme poses


Citation

@inproceedings{xiu2022icon,
  title={{ICON}: {I}mplicit {C}lothed humans {O}btained from {N}ormals},
  author={Xiu, Yuliang and Yang, Jinlong and Tzionas, Dimitrios and Black, Michael J.},
  booktitle={IEEE/CVF Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year={2022}
}

Acknowledgments

We thank Yao Feng, Soubhik Sanyal, Qianli Ma, Xu Chen, Hongwei Yi, Chun-Hao Paul Huang, and Weiyang Liu for their feedback and discussions, Tsvetelina Alexiadis for her help with the AMT perceptual study, Taylor McConnell for her voice-over, Benjamin Pellkofer for the webpage, and Yuanlu Xu for his help in comparing with ARCH and ARCH++.

Special thanks to Vassilis Choutas for sharing the code of bvh-distance-queries.

Some images used in the qualitative examples come from pinterest.com.

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.860768 (CLIPE Project).

License

This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.

Disclosure

MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB was a part-time employee of Amazon during this project, his research was performed solely at, and funded solely by, the Max Planck Society.

Contact

For more questions, please contact [email protected]

For commercial licensing, please contact [email protected]

Comments
  • OpenGL.raw.EGL._errors.EGLError: EGLError( )

    When I run the command "bash render_batch.sh debug", it gives the following error:

    OpenGL.raw.EGL._errors.EGLError: EGLError( err = EGL_NOT_INITIALIZED, baseOperation = eglInitialize, cArguments = ( <OpenGL._opaque.EGLDisplay_pointer object at 0x7f7b3d0ee2c0>, <importlib._bootstrap.LP_c_int object at 0x7f7b3d0ee440>, <importlib._bootstrap.LP_c_int object at 0x7f7b3d106bc0>, ), result = 0 )

    How can I fix this?
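
    A hedged sketch of a common workaround, not a confirmed fix for this repo: headless machines often fail to initialize an EGL display, and selecting the PyOpenGL platform before anything imports OpenGL sometimes helps.

    # must run before any OpenGL-using module is imported;
    # which value works depends on the installed drivers
    import os
    os.environ["PYOPENGL_PLATFORM"] = "egl"  # or "osmesa" for pure software rendering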

    documentation CUDA or OpenGL Dataset 
    opened by Yuhuoo 16
  • ConnectionError: HTTPSConnectionPool

    requests.exceptions.ConnectionError: HTTPSConnectionPool(host='drive.google.com', port=443): Max retries exceeded with url: /uc?id=1tCU5MM1LhRgGou5OpmpjBQbSrYIUoYab (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fbc4ad7deb0>: Failed to establish a new connection: [Errno 110] Connection timed out'))

    documentation rembg 
    opened by shuoshuoxu 12
  • Trouble getting ICON results

    After installing all packages, I got results successfully for PIFu and PaMIR, but I hit a runtime error when trying to get the ICON demo result. Could you advise what setting is wrong?

    $ python infer.py -cfg ../configs/icon-filter.yaml -gpu 0 -in_dir ../examples -out_dir ../results
    
    Traceback (most recent call last):
      File "infer.py", line 304, in <module>
        verts_pr, faces_pr, _ = model.test_single(in_tensor)
      File "./ICON/apps/ICON.py", line 738, in test_single
        sdf = self.reconEngine(opt=self.cfg,
      File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "../lib/common/seg3d_lossless.py", line 148, in forward
        return self._forward_faster(**kwargs)
      File "../lib/common/seg3d_lossless.py", line 170, in _forward_faster
        occupancys = self.batch_eval(coords, **kwargs)
      File "../lib/common/seg3d_lossless.py", line 139, in batch_eval
        occupancys = self.query_func(**kwargs, points=coords2D)
      File "../lib/common/train_util.py", line 338, in query_func
        preds = netG.query(features=features,
      File "../lib/net/HGPIFuNet.py", line 285, in query
        smpl_sdf, smpl_norm, smpl_cmap, smpl_ind = cal_sdf_batch(
      File "../lib/dataset/mesh_util.py", line 231, in cal_sdf_batch
        residues, normals, pts_cmap, pts_ind = func(
      File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "./.virtualenvs/icon/lib/python3.8/site-packages/bvh_distance_queries/mesh_distance.py", line 79, in forward
        output = self.search_tree(triangles, points)
      File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "./.virtualenvs/icon/lib/python3.8/site-packages/bvh_distance_queries/bvh_search_tree.py", line 109, in forward
        output = BVHFunction.apply(
      File "./.virtualenvs/icon/lib/python3.8/site-packages/bvh_distance_queries/bvh_search_tree.py", line 42, in forward
        outputs = bvh_distance_queries_cuda.distance_queries(
    RuntimeError: after reduction step 1: cudaErrorInvalidDevice: invalid device ordinal
    
    CUDA or OpenGL 
    opened by Samiepapa 12
  • THuman Dataset preprocess

    Hi, I found the program runs very slowly when I run bash render_batch.sh debug all. I figured out it stops at hits = mesh.ray.intersects_any(origins + delta * normals, vectors), where the number of rays is in the millions. Is that the reason it is so slow?
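
    A hedged aside: trimesh silently falls back to a slow pure-Python ray intersector when pyembree is not installed, which can explain million-ray queries taking this long. Checking which backend is active is cheap (the scan path is a placeholder):

    import trimesh

    mesh = trimesh.load("scan.obj")  # placeholder path
    # ray_pyembree.RayMeshIntersector = fast embree backend,
    # ray_triangle.RayMeshIntersector = slow pure-Python fallback
    print(type(mesh.ray))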

    documentation Dataset 
    opened by mmmcn 11
  • Colab main cell doing nothing

    Hi, first, thanks for your code. In the Colab version, when running the cell that starts with

    run the test on examples

    the execution is very fast, with no errors, but it does nothing. The next cell then shows errors: FileNotFoundError: [Errno 2] No such file or directory: '/content/ICON/results/icon-filter/vid/22097467bffc92d4a5c4246f7d4edb75_display.mp4'

    many thanks

    opened by smithee77 9
  • Error: undefined symbol: _ZNSt15__exception_ptr13exception_ptr10_M_releaseEv

    I'm getting this error when installing locally on my workstation via the Colab bash script.

    .../ICON/pytorch3d/pytorch3d/_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr10_M_releaseEv

    This happened after installing pytorch3d locally, as recommended; Conda has too many conflicts and never resolves.

    Installing torch through pip works (1.8.2+cu111) up until the next step, infer.py, because bvh_distance_queries only supports CUDA 11.0. Compiling against 11.0 would most likely be required, but that will probably lead to more errors, as I don't know what this repository's dependencies require as far as torch goes.

    CUDA or OpenGL 
    opened by ExponentialML 9
  • The side face seems not good in training phase.

    I followed the instructions to process the THuman2.0 dataset and train ICON. After 10 epochs, the results on TensorBoard do not look good, especially the reconstruction of the side of the face. I can't find where the problem is.

    The attached screenshot shows how the loss changed during training.

    opened by River-Zhang 7
  • Question for Reproducing Result

    Hi, I reproduced ICON and trained with THuman2.0.

    As a result of my training, only this model produces a lot of noise and isolated mesh fragments.

    Would you give me some advice?

    I used [-1, 1] SMPL vertices, query points, and 15.0 SDF clipping.

    I believe that ICON has great potential and generalization performance.

    But I don't know what is the problem.

    The results below are about unseen data, and I used pre-trained GCMR just like PaMIR for SMPL prediction.

    [result snapshots]

    Thank you.

    opened by EadCat 7
  • Problem about SMPL refining loss.

    Many thanks to the author for this work. I found a question while reading the paper and code. The Refining SMPL section of the paper says the SMPL estimate is iteratively optimized during inference with a loss that has two parts: the L1 difference between the clothed normal map and the normal map rendered from the predicted body, and the L1 difference between the mask of the SMPL normal map and the mask of the original image. However, the normal-map term does not appear in the code below. What is the reason for this? Is the existing code implementation more efficient than the one described in the paper?

                # silhouette loss
                smpl_arr = torch.cat([T_mask_F, T_mask_B], dim=-1)[0]       # smpl mask maps
                gt_arr = torch.cat(                                         # clothed normal maps
                    [in_tensor['normal_F'][0], in_tensor['normal_B'][0]],
                    dim=2).permute(1, 2, 0)
                gt_arr = ((gt_arr + 1.0) * 0.5).to(device)
                bg_color = torch.Tensor([0.5, 0.5,
                                         0.5]).unsqueeze(0).unsqueeze(0).to(device)
                gt_arr = ((gt_arr - bg_color).sum(dim=-1) != 0.0).float()
                diff_S = torch.abs(smpl_arr - gt_arr)
                losses['silhouette']['value'] = diff_S.mean()
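
    For comparison, a hedged sketch of the normal-map L1 term as the paper describes it; T_normal_F / T_normal_B are assumed names for the SMPL normal maps, mirroring the snippet above rather than the repo's actual variables:

                # normal-map L1 term (sketch, not the repo's code)
                smpl_normals = torch.cat([T_normal_F, T_normal_B], dim=-1)
                cloth_normals = torch.cat(
                    [in_tensor['normal_F'], in_tensor['normal_B']], dim=-1)
                losses['normal']['value'] = torch.abs(
                    smpl_normals - cloth_normals).mean()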
    
    HPS Discussion 
    opened by SongYupei 7
  • Some questions about training

    I would like to know some details about training. Is the ground-truth SMPL or the predicted SMPL used when training ICON? Also, what about the normal images? From my understanding of the paper and practice, ICON should train the normal network first and then the implicit reconstruction network. When reproducing ICON, I don't know whether to choose ground-truth or predicted data for the SMPL model and the normal images, respectively.

    Dataset Training 
    opened by sunjc0306 7
  • ModuleNotFoundError: No module named 'bvh_distance_queries_cuda'

    Hi, thank you so much for the wonderful work and corresponding codes. I am facing the following issue: https://github.com/YuliangXiu/ICON/blob/0045bd10f076bf367d25b7dac41d0d5887b8694f/lib/bvh-distance-queries/bvh_distance_queries/bvh_search_tree.py#L27

    Is there any .py file called bvh_distance_queries_cuda? Please let me know a possible solution. Thank you for your effort and help :) :) :)

    CUDA or OpenGL 
    opened by Pallab38 7
  • Question about training

    After training the implicit MLP, I got quite weird results: the reconstructed meshes are poor (screenshot attached). The evaluation results show that NC is very low, but Chamfer and P2S are very high. Do you know where the problem is? I would appreciate it a lot if you could give me some suggestions!

    opened by Zhangjzh 2
  • Question of cloth refinement

    Hi, I'm confused about local_affine_model in the cloth refinement. I see that LocalAffine() converts the vertices of the refined mesh into affine-transformed vertices, so this step seems to train an affine model. I understand that we need the loss between the normal rendered from the mesh and the normal from the image to optimize the vertices, but I don't understand why we need to introduce the affine model. Does the local affine procedure mean estimating an affine matrix in the camera coordinate system? I'm confused about this.
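
    For reference, a hedged sketch of what a per-vertex local affine layer typically looks like (an assumption, not the repo's exact LocalAffine): every vertex gets its own 3x3 matrix A and offset b, optimized against the image normals while a regularizer keeps each A near the identity so the deformation stays locally near-rigid.

    import torch
    import torch.nn as nn

    class LocalAffineSketch(nn.Module):
        def __init__(self, num_verts):
            super().__init__()
            eye = torch.eye(3).unsqueeze(0).repeat(num_verts, 1, 1)
            self.A = nn.Parameter(eye)                        # per-vertex linear part
            self.b = nn.Parameter(torch.zeros(num_verts, 3))  # per-vertex offset

        def forward(self, verts):                             # verts: (num_verts, 3)
            deformed = torch.einsum('nij,nj->ni', self.A, verts) + self.b
            stiffness = ((self.A - torch.eye(3)) ** 2).sum()  # keeps A near identity
            return deformed, stiffness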

    opened by Yuhuoo 1
  • Can't setup ICON on Ubuntu / Colab

    Unfortunately, I can't manage to install ICON locally. The Colab notebook and the model on Hugging Face also seem to be broken. Would it be possible to update the documentation / environment or share a Dockerfile?

    opened by c6s0 4
  • how to get real depth value from depth map

    Hi, recently I've been trying to use a depth map as prior information. How can I generate a point cloud in camera (view) space or model space from a depth map generated by render_single.py? For example, how can I recover the real depth value in camera space from depth_F/xxx.png? Looking at the coordinate transformations in the code, it seems I can get the perspective matrix and model-view matrix in lib/render/camer.py, which transform the model from local space into clip space, but I am still confused about how to convert the depth map to another space.

    It would be much appreciated if anyone could give me some advice!
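
    A hedged sketch of the standard pinhole unprojection, in case it helps; it assumes perspective intrinsics (fx, fy, cx, cy), while the renderer here may use an orthographic or other convention, so treat it as a starting point only:

    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        # unproject an (H, W) depth map to camera-space points
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return pts[depth.reshape(-1) > 0]  # drop background pixels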

    opened by mmmcn 2
  • scripts/render_single.sh: line 33: 137550 Killed

    ubuntu 22.04 + NVIDIA 2080Ti.

    I encountered this error when executing the script "bash scripts/render_batch.sh debug all".

    thuman2 START----------
    Debug renderer
    Rendering thuman2 0001
    /home/hcp/anaconda3/envs/metaverse/lib/python3.8/site-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.3
      warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
    
    **scripts/render_single.sh: line 33: 137550 Killed                  python $PYTHON_SCRIPT -s $SUBJECT -o $SAVE_DIR -r $NUM_VIEWS -w $SIZE**
    thuman2 END----------
    
    opened by hcp6897 1
  • A weird bug: The data is not on the same device.

    https://github.com/YuliangXiu/ICON/blob/ece5a09aa2d56aec28017430e65a0352622a0f30/lib/dataset/mesh_util.py#L283

    print(triangles.device)  # cuda:1
    print(points.device)     # cuda:1

    residues, pts_ind, _ = point_to_mesh_distance(points, triangles)

    print(triangles.device)  # cuda:1
    print(pts_ind.device)    # cuda:0
    print(residues.device)   # cuda:0

    Command: python -m apps.infer -cfg ./configs/icon-filter.yaml -gpu 1 -in_dir {*} -out_dir {*}. Setting CUDA_VISIBLE_DEVICES=1 doesn't work either.
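
    A hedged, untested workaround sketch: if the CUDA extension always allocates its outputs on cuda:0, moving them back to the query tensor's device right after the call should keep the rest of the pipeline consistent.

    residues, pts_ind, _ = point_to_mesh_distance(points, triangles)
    # move outputs back to the device the inputs live on (workaround sketch)
    residues = residues.to(points.device)
    pts_ind = pts_ind.to(points.device)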

    CUDA or OpenGL 
    opened by caiyongqi 11
Releases
  • v.1.1.0 (Aug 5, 2022)

    Important updates:

    • Support normal network training
    • New interactive demo deployed on HuggingFace space
    • Faster Google Colab environment setup
    • Improved clothing refinement module
    • Fix several bugs and refactor some messy functions
  • v.1.0.0 (Jun 15, 2022)

    The first stable version (ICON v.1.0.0) comes out!

    • Dataset: support THuman2.0
    • Inference: support PyMAF, PIXIE, PARE, HybrIK, BEV
    • Training & Evaluation: support PIFu, PaMIR, ICON
    • Add-on: garment extraction from fashion images
  • v.1.0.0-rc2 (Mar 7, 2022)

    Some updates:

    • HPS support: PyMAF (SMPL), PARE (SMPL), PIXIE (SMPL-X)
    • Google Colab support
    • Replace bvh-distance-queries with PyTorch3D and Kaolin to improve CUDA compatibility
    • Fix some issues
  • v.1.0.0-rc1 (Jan 30, 2022)

    First commit of ICON:

    • image-based inference code
    • pretrained model of ICON, PIFu*, PaMIR* (*: self-implementation)
    • homepage: https://icon.is.tue.mpg.de
Owner
Yuliang Xiu
Ph.D. Student in Graphics & Vision, 3D Virtual Avatar Researcher, Play with pixels and voxels.