ROMP: Monocular, One-stage, Regression of Multiple 3D People, ICCV21

Overview

Monocular, One-stage, Regression of Multiple 3D People

Google Colab demo arXiv PWC

ROMP, accepted by ICCV 2021, is a concise one-stage network for multi-person 3D mesh recovery from a single image.

  • Simple. Concise one-stage framework for simultaneous person detection and 3D body mesh recovery.

  • Fast. ROMP can achieve real-time inference on a 1070Ti GPU.

  • Strong. ROMP achieves superior performance on multiple challenging multi-person/occlusion benchmarks.

  • Easy to use. We provide a user-friendly testing API and webcam demos.

Contact: [email protected]. Feel free to contact me with related questions or discussions! See the arXiv paper for details.

Table of contents

Features

  • Running the examples on Google Colab.
  • Real-time online multi-person webcam demo for driving textured SMPL model. We also provide a wardrobe for changing clothes.
  • Batch processing images/videos via command line / jupyter notebook / calling ROMP as a python lib.
  • Exporting the captured single-person motion to FBX file for Blender/Unity usage.
  • Training and evaluation for re-implementing our results presented in paper.
  • Convenient API for 2D / 3D visualization, parsed datasets.

News

2021/12/2: Add optional renderers (pyrender or pytorch3D). Fix some bugs reported in issues.
2021/10/10: V1.1 released, including the multi-person webcam demo, webcam temporal optimization, live Blender character animation, and interactive visualization. Let's try it!
2021/9/13: Low FPS / args parsing bugs are fixed. Support calling as a python lib.
2021/9/10: Training code release. API optimization.
Old logs

Getting started

Try on Google Colab

Google Colab lets you run the project in the cloud, free of charge. Give the prepared Google Colab demo a try.

Installation

Please refer to install.md for installation.

Inference

Currently, we support processing images, videos, and real-time webcam input.
Please refer to config_guide.md for configurations.
ROMP can be called as a Python library from code, a Jupyter notebook, or the command line / scripts; please refer to the Google Colab demo for examples.

Processing images

To reproduce the demo results, please run

cd ROMP
# change the `inputs` in configs/image.yml to /path/to/your/image folder, then run 
sh scripts/image.sh
# or run the command like
python -m romp.predict.image --inputs=demo/images --output_dir=demo/image_results

Please refer to config_guide.md for saving the estimated mesh/Center maps/parameters dict.
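For reference, here is a minimal, hypothetical sketch of loading results saved with --save_dict_results. ROMP writes them as .npz files (the same format read by the QuickMocap Blender add-on mentioned below); the file name and key layout used here are assumptions, so please check config_guide.md for the exact output of your version.

```python
import numpy as np

# Hypothetical example: load a results dictionary saved via --save_dict_results.
# The path and the stored layout are assumptions; see config_guide.md.
data = np.load('demo/image_results/results.npz', allow_pickle=True)
# unwrap pickled dicts stored as 0-d object arrays; plain arrays pass through unchanged
results = {key: data[key][()] for key in data.files}

for name, per_image in results.items():
    # each entry is expected to hold the per-person estimates for one image
    # (e.g. SMPL parameters, camera, 3D joints)
    print(name, type(per_image))
```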

For interactive visualization, please run

python -m romp.predict.image --inputs=demo/images --output_dir=demo/image_results --show_mesh_stand_on_image  --interactive_vis

Caution: To use show_mesh_stand_on_image and interactive_vis, you must run ROMP on a computer with a visual desktop, because these options rely on on-screen rendering. Most remote servers without a visual desktop are not supported; please use save_visualization_on_img instead.
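For example, on a headless server you could drop the interactive flags and simply save the renderings to disk (same demo paths as above):

python -m romp.predict.image --inputs=demo/images --output_dir=demo/image_results --save_visualization_on_img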

Here, we show an example of calling ROMP as a python lib to process images.


```python
# set the absolute path to ROMP
path_to_romp = '/path/to/ROMP'
import os, sys
sys.path.append(path_to_romp)

# set the detailed configurations
from romp.lib.config import ConfigContext, parse_args, args
# Note: setting a bool config requires two list elements, e.g. ['--save_centermap', False]
ConfigContext.parsed_args = parse_args(["--configs_yml=configs/image.yml", '--inputs=/path/to/images_folder', '--output_dir=/path/to/save/image_results', '--save_centermap', False])

# import the ROMP image processor
from romp.predict.image import Image_processor
processor = Image_processor(args_set=args())
results_dict = processor.run(args().inputs)  # you can change args().inputs to another /path/to/images_folder
```
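The returned results_dict collects the estimates for the processed images; the exact contents of the saved mesh / Center map / parameter dictionaries are described in config_guide.md.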

Processing videos

cd ROMP
python -m romp.predict.video --inputs=demo/videos/sample_video.mp4 --output_dir=demo/sample_video_results --save_visualization_on_img --save_dict_results

# or you can set all configurations in configs/video.yml, then run 
sh scripts/video.sh

We notice that some users only want to extract the motion of the foremost person.

To achieve this, please run
python -m romp.predict.video --inputs=demo/videos/demo_video_frames --output_dir=demo/demo_video_fp_results --show_largest_person_only --save_dict_results --show_mesh_stand_on_image 

All functions can be combined or used individually. Feel free to try them.

Here, we show an example of calling ROMP as a python lib to process videos.


```python
# set the absolute path to ROMP
path_to_romp = '/path/to/ROMP'
import os, sys
sys.path.append(path_to_romp)

# set the detailed configurations
from romp.lib.config import ConfigContext, parse_args, args
# Note: setting a bool config requires two list elements, e.g. ['--save_visualization_on_img', False]
ConfigContext.parsed_args = parse_args(["--configs_yml=configs/video.yml", '--inputs=/path/to/video', '--output_dir=/path/to/save/video_results', '--save_visualization_on_img', False])

# import the ROMP video processor
from romp.predict.video import Video_processor
processor = Video_processor(args_set=args())
results_dict = processor.run(args().inputs)  # you can change args().inputs to another /path/to/video
```

Webcam

To do this, you just need to run:

cd ROMP
sh scripts/webcam.sh

To drive a character in Blender, please refer to export.md.

Export

Export to Blender FBX

Please refer to export.md to export the results to FBX files for Blender usage. Currently, this function only supports single-person videos. Therefore, please test it with demo/videos/sample_video2_results/sample_video2.mp4, whose results will be saved to demo/videos/sample_video2_results.

Blender Addons

Chuanhang Yan: developing an add-on for driving characters in Blender.
VLT Media has created QuickMocap-BlenderAddon to read the .npz files created by ROMP and to clean & smooth the resulting keyframes.

Train

Please prepare the training datasets following dataset.md, and then refer to train.md for training.

Evaluation

Please refer to evaluation.md for evaluation on benchmarks.

Bugs report

Please refer to bug.md for solutions. Feel free to open issues for related bugs; I will address them as soon as possible.

Citation

@InProceedings{ROMP,
  author = {Sun, Yu and Bao, Qian and Liu, Wu and Fu, Yili and Black, Michael J. and Mei, Tao},
  title = {Monocular, One-stage, Regression of Multiple 3D People},
  booktitle = {ICCV},
  month = {October},
  year = {2021}
}

Contributor

This repository is currently maintained by Yu Sun.

ROMP has also benefited from contributions by many developers.

Acknowledgement

We thank Peng Cheng for his constructive comments on Center map training.

Here are some great resources we benefit from:

Please consider citing their papers.

Comments
  • Discussion of data-processing details

    Discussion of data-processing details

    Hi, I saw in your paper that you used the MoVi dataset. The original MoVi dataset only provides 3D joints and the SMPL-H mesh parameters obtained by AMASS via fitting. SMPL-H has 16 shape values and 52 pose values, so I would like to know whether you used the pose and beta provided by AMASS when working with MoVi. I currently feed the first 10 values of beta and the first 22 values of pose into our current SMPL model, and then use the camera intrinsics and extrinsics provided by MoVi, following the MoVi tutorial, but I can never get a successful visualization in this program. Any advice would be appreciated.

    If you did not use them, is it because the beta and pose provided by AMASS do not fit this SMPL model? According to the AMASS paper, it seems they should work. MoVi-Toolbox: https://github.com/saeed1262/MoVi-Toolbox/blob/master/MoCap/utils.py

    Following the MoVi tutorial, I pass its mesh coefficients into our current SMPL as beta[0:10] and pose[0:22], with the two hand poses set to zero: smpl_model_path = './centerHMR/models/smpl/SMPL_NEUTRAL.pkl' self.smplx = smpl_model.create(smpl_model_path, batch_size=self.batch_size, model_type=self.model_type, gender='neutral', use_face_contour=False, ext='npz', joint_mapper=joint_mapper, flat_hand_mean=True, use_pca=False).cuda()


    @Arthur151

    question 
    opened by zhLawliet 34
  • Reproduce Result

    Reproduce Result

    Hi, thanks for your work. Could you share your training log? I can't reproduce the paper's results. Here are my log file and yaml file. I only changed the batch size to 48 because of limited memory; everything else is default. hrnet_cm64_V1_hrnet.log hrnet_cm64_V1_hrnet_yml.log

    opened by panshaohua 28
  • A simple question about camera and coordinate system.

    A simple question about camera and coordinate system.

    Hi, I have a simple question about ROMP. I have been struggling to put people into their correct relative positions, but is that really possible using the root-aligned SMPL meshes without predicting their transl? (And if we have the camera parameters K, would it be possible?)

    1. What is the coordinate system of the vertices used for rendering? I think we are predicting points in the camera coordinate system, but root-aligned, correct?

    2. Following Q1, before rendering the verts onto the image, a translation is added to the verts (cam_trans in projection.py). What is it, and what is estimate_translation actually doing? Is it estimating the root's position? https://github.com/Arthur151/ROMP/blob/e30b7d17f13089fa9fa114df494192e31b0f43ed/romp/lib/visualization/visualization.py#L61

    3. I tried to replace the verts + trans from Q2 with the GT mesh, so verts = GT_verts, without any other changes to your code, but the results are not correct. I expected it to fully match the person in the image, but there are always shifts, and I also can't use the same FOV; otherwise the mesh would be very small on the image.

    Sorry if I have misunderstood anything. I think rendering is the final part of your code I don't understand. Looking forward to your answer!

    Zhengdi

    opened by ZhengdiYu 24
  • Can't run demo with provided instructions

    Can't run demo with provided instructions

    Hi there,

    Really awesome work. Unfortunately, I can't run it with the provided instructions. Trying sh run.sh results in multiple errors.

    First:

    ----------------
    Traceback (most recent call last):
      File "core/test.py", line 2, in <module>
        from base import *
      File "/home/jb/Documents/python/CenterHMR/src/core/base.py", line 20, in <module>
        from dataset.mixed_dataset import SingleDataset
      File "/home/jb/Documents/python/CenterHMR/src/dataset/mixed_dataset.py", line 5, in <module>
        from dataset.internet import Internet
      File "/home/jb/Documents/python/CenterHMR/src/dataset/internet.py", line 9, in <module>
        import smplx
    ModuleNotFoundError: No module named 'smplx'
    

    I installed smplx from here: https://github.com/vchoutas/smplx

    Next:

    ----------------
    In Ubuntu, using osmesa mode for rendering
    Traceback (most recent call last):
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/OpenGL/platform/osmesa.py", line 25, in GL
        mode=ctypes.RTLD_GLOBAL 
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/OpenGL/platform/ctypesloader.py", line 45, in loadLibrary
        return dllType( name, mode )
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/ctypes/__init__.py", line 364, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: ('OSMesa: cannot open shared object file: No such file or directory', 'OSMesa', None)
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "core/test.py", line 2, in <module>
        from base import *
      File "/home/jb/Documents/python/CenterHMR/src/core/base.py", line 21, in <module>
        from visualization.visualization import Visualizer
      File "/home/jb/Documents/python/CenterHMR/src/visualization/visualization.py", line 15, in <module>
        from .renderer import get_renderer
      File "/home/jb/Documents/python/CenterHMR/src/visualization/renderer.py", line 19, in <module>
        import pyrender
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/pyrender/__init__.py", line 3, in <module>
        from .light import Light, PointLight, DirectionalLight, SpotLight
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/pyrender/light.py", line 10, in <module>
        from OpenGL.GL import *
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/OpenGL/GL/__init__.py", line 3, in <module>
        from OpenGL import error as _error
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/OpenGL/error.py", line 12, in <module>
        from OpenGL import platform, _configflags
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/OpenGL/platform/__init__.py", line 35, in <module>
        _load()
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/OpenGL/platform/__init__.py", line 32, in _load
        plugin.install(globals())
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 92, in install
        namespace[ name ] = getattr(self,name,None)
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 14, in __get__
        value = self.fget( obj )
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/OpenGL/platform/osmesa.py", line 66, in GetCurrentContext
        function = self.OSMesa.OSMesaGetCurrentContext
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 14, in __get__
        value = self.fget( obj )
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/OpenGL/platform/osmesa.py", line 60, in OSMesa
        def OSMesa( self ): return self.GL
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 14, in __get__
        value = self.fget( obj )
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/OpenGL/platform/osmesa.py", line 28, in GL
        raise ImportError("Unable to load OpenGL library", *err.args)
    ImportError: ('Unable to load OpenGL library', 'OSMesa: cannot open shared object file: No such file or directory', 'OSMesa', None)
    

    I had to install Mesa using the instructions from PyRender: https://pyrender.readthedocs.io/en/latest/install/

    Next:

    Traceback (most recent call last):
      File "core/test.py", line 50, in <module>
        main()
      File "core/test.py", line 46, in main
        demo.run(demo_image_folder)
      File "core/test.py", line 20, in run
        self.visualizer = Visualizer(model_type=self.model_type,resolution =vis_size, input_size=self.input_size, result_img_dir = test_save_dir,with_renderer=True)
      File "/home/jb/Documents/python/CenterHMR/src/visualization/visualization.py", line 23, in __init__
        self.renderer = get_renderer(model_type=model_type,resolution=self.resolution)
      File "/home/jb/Documents/python/CenterHMR/src/visualization/renderer.py", line 138, in get_renderer
        renderer = Renderer(faces,resolution=resolution[:2])
      File "/home/jb/Documents/python/CenterHMR/src/visualization/renderer.py", line 69, in __init__
        point_size=1.0)
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/pyrender/offscreen.py", line 31, in __init__
        self._create()
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/pyrender/offscreen.py", line 149, in _create
        self._platform.init_context()
      File "/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/pyrender/platforms/osmesa.py", line 19, in init_context
        from OpenGL.osmesa import (
    ImportError: cannot import name 'OSMesaCreateContextAttribs' from 'OpenGL.osmesa' (/home/jb/anaconda3/envs/centerhmr/lib/python3.7/site-packages/OpenGL/osmesa/__init__.py)
    

    This one stumped me, couldn't get it to work. Maybe my Mesa installation is still not correct?

    bug 
    opened by jbohnslav 20
  • smpl_mesh_root_align

    smpl_mesh_root_align

    Hi, I notice that your ROMP_HRNet_32.pkl was trained with smpl_mesh_root_align=False. But in v1.yml, smpl_mesh_root_align is not set, so it takes its default value, True.

    So My questions are:

    1. (Solved✔) At first I found my model had the same issue as ResNet (mesh shift); then I found the reason: image.yml was originally designed for ROMP_HRNet_32.pkl, which was trained with smpl_mesh_root_align=False. If we want to test on images using our own model trained from the pre-trained HRNet model with v1.yml, smpl_mesh_root_align in image.yml should also be set to True, just like ResNet #106. So this was solved.

    2. When should smpl_mesh_root_align be True or False? Why did you set it to True for v1.yml and ResNet, although it is False for ROMP_HRNet_32.pkl? I think for the 3D joint loss it doesn't matter, as long as we do another alignment before calculating MPJPE/PA-MPJPE. And for the 2D part, the weak-perspective camera parameters will automatically be learnt to project those 3D joints to align with GT_2d, as long as the setting is consistent throughout. So the last question is:

    3. During fine-tuning from your model ROMP_HRNet_32.pkl using v1_hrnet_3dpw_ft.yml, smpl_mesh_root_align also takes its default value, True. However, ROMP_HRNet_32.pkl was trained with smpl_mesh_root_align=False.

    As we know from question 1, if we use different settings of smpl_mesh_root_align, the visualization will be shifted; I think this could be a problem for training and fine-tuning.

    I also tried to train with smpl_mesh_root_align from scratch, but it ended up with the error below:

    Traceback (most recent call last):
      File "/home2/rctv12/miniconda3/envs/ROMP/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home2/rctv12/miniconda3/envs/ROMP/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home2/rctv12/projects/ROMP/multi-person/romp/train.py", line 148, in <module>
        main()
      File "/home2/rctv12/projects/ROMP/multi-person/romp/train.py", line 145, in main
        trainer.train()
      File "/home2/rctv12/projects/ROMP/multi-person/romp/train.py", line 33, in train
        self.train_epoch(epoch)
      File "/home2/rctv12/projects/ROMP/multi-person/romp/train.py", line 94, in train_epoch
        self.train_log_visualization(outputs, loss, run_time, data_time, losses, losses_dict, epoch, iter_index)
      File "/home2/rctv12/projects/ROMP/multi-person/romp/train.py", line 74, in train_log_visualization
        vis_cfg={'settings': ['save_img'], 'vids': vis_ids, 'save_dir':self.train_img_dir, 'save_name':save_name, 'verrors': [vis_errors], 'error_names':['E']})
      File "/home2/rctv12/projects/ROMP/multi-person/romp/lib/models/../utils/../visualization/visualization.py", line 102, in visulize_result
        rendered_imgs = self.visualize_renderer_verts_list(per_img_verts_list, images=org_imgs.copy(), trans=mesh_trans)
      File "/home2/rctv12/projects/ROMP/multi-person/romp/lib/models/../utils/../visualization/visualization.py", line 62, in visualize_renderer_verts_list
        rendered_img = self.renderer(verts, faces, colors=color, focal_length=args().focal_length, cam_params=cam_params)
      File "/home2/rctv12/projects/ROMP/multi-person/romp/lib/models/../utils/../visualization/renderer_pt3d.py", line 102, in __call__
        images = self.renderer(meshes)
      File "/home2/rctv12/miniconda3/envs/ROMP/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home2/rctv12/miniconda3/envs/ROMP/lib/python3.7/site-packages/pytorch3d/renderer/mesh/renderer.py", line 59, in forward
        fragments = self.rasterizer(meshes_world, **kwargs)
      File "/home2/rctv12/miniconda3/envs/ROMP/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home2/rctv12/miniconda3/envs/ROMP/lib/python3.7/site-packages/pytorch3d/renderer/mesh/rasterizer.py", line 168, in forward
        meshes_proj = self.transform(meshes_world, **kwargs)
      File "/home2/rctv12/miniconda3/envs/ROMP/lib/python3.7/site-packages/pytorch3d/renderer/mesh/rasterizer.py", line 147, in transform
        verts_world, eps=eps
      File "/home2/rctv12/miniconda3/envs/ROMP/lib/python3.7/site-packages/pytorch3d/transforms/transform3d.py", line 336, in transform_points
        points_out = _broadcast_bmm(points_batch, composed_matrix)
      File "/home2/rctv12/miniconda3/envs/ROMP/lib/python3.7/site-packages/pytorch3d/transforms/transform3d.py", line 753, in _broadcast_bmm
        return a.bmm(b)
    RuntimeError: expected scalar type Half but found Float
    

    I'm still debugging anyway.

    opened by ZhengdiYu 19
  • A question about root_align=True and projection

    A question about root_align=True and projection

    Hello, I am new to this research area and have a question. After setting root_align=True, what we obtain is a root-relative mesh, right? And the 3D pose obtained through the SMPL joint regressor is then also a root-relative 3D pose? Weak-perspective projection should project the absolute 3D human pose onto the image to align with the 2D pose, which should require adding the root position, right? So how is the absolute root position in camera space obtained?

    opened by Rookienovice 13
  • Problems with the lsp and mpiinf datasets

    Problems with the lsp and mpiinf datasets

    Hello, two of the seven datasets you provide give me problems during training, both of them FileNotFoundError.

    The LSP error is as follows: Traceback (most recent call last): File "/home/omnisky/anaconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main "main", mod_spec) File "/home/omnisky/anaconda3/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/data01/wyjh/ROMP/romp/train.py", line 149, in main() File "/data01/wyjh/ROMP/romp/train.py", line 145, in main trainer = Trainer() File "/data01/wyjh/ROMP/romp/train.py", line 14, in init self.loader = self._create_data_loader(train_flag=True) File "/data01/wyjh/ROMP/romp/base.py", line 133, in _create_data_loader datasets = MixedDataset(train_flag=train_flag) File "/data01/wyjh/ROMP/romp/lib/models/../utils/../dataset/mixed_dataset.py", line 36, in init self.datasets = [dataset_dictds for ds in datasets_used] File "/data01/wyjh/ROMP/romp/lib/models/../utils/../dataset/mixed_dataset.py", line 36, in self.datasets = [dataset_dictds for ds in datasets_used] File "/data01/wyjh/ROMP/romp/lib/models/../utils/../dataset/lsp.py", line 11, in init self.load_data() File "/data01/wyjh/ROMP/romp/lib/models/../utils/../dataset/lsp.py", line 38, in load_data self.load_eft_annots(os.path.join(config.project_dir, 'data/eft_fit/LSPet_ver01.json')) File "/data01/wyjh/ROMP/romp/lib/models/../utils/../dataset/lsp.py", line 44, in load_eft_annots annots = json.load(open(annot_file_path,'r'))['data'] FileNotFoundError: [Errno 2] No such file or directory: '/data01/wyjh/ROMP/data/eft_fit/LSPet_ver01.json' (I don't know why it looks for this file under data; I don't have a data directory at that path.)

    The mpiinf error is as follows: Traceback (most recent call last): File "/home/omnisky/anaconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main "main", mod_spec) File "/home/omnisky/anaconda3/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/data01/wyjh/ROMP/romp/train.py", line 149, in main() File "/data01/wyjh/ROMP/romp/train.py", line 145, in main trainer = Trainer() File "/data01/wyjh/ROMP/romp/train.py", line 14, in init self.loader = self._create_data_loader(train_flag=True) File "/data01/wyjh/ROMP/romp/base.py", line 133, in _create_data_loader datasets = MixedDataset(train_flag=train_flag) File "/data01/wyjh/ROMP/romp/lib/models/../utils/../dataset/mixed_dataset.py", line 36, in init self.datasets = [dataset_dictds for ds in datasets_used] File "/data01/wyjh/ROMP/romp/lib/models/../utils/../dataset/mixed_dataset.py", line 36, in self.datasets = [dataset_dictds for ds in datasets_used] File "/data01/wyjh/ROMP/romp/lib/models/../utils/../dataset/mpi_inf_3dhp.py", line 17, in init self.pack_data(annots_file_path) File "/data01/wyjh/ROMP/romp/lib/models/../utils/../dataset/mpi_inf_3dhp.py", line 111, in pack_data annot2 = sio.loadmat(annot_file_path)['annot2'] File "/home/omnisky/wyj/lib/python3.7/site-packages/scipy/io/matlab/mio.py", line 224, in loadmat with _open_file_context(file_name, appendmat) as f: File "/home/omnisky/anaconda3/lib/python3.7/contextlib.py", line 112, in enter return next(self.gen) File "/home/omnisky/wyj/lib/python3.7/site-packages/scipy/io/matlab/mio.py", line 17, in _open_file_context f, opened = _open_file(file_like, appendmat, mode) File "/home/omnisky/wyj/lib/python3.7/site-packages/scipy/io/matlab/mio.py", line 45, in _open_file return open(file_like, mode), True FileNotFoundError: [Errno 2] No such file or directory: '/data01/wyjh/ROMP/romp/lib/dataset/mpi_inf_3dhp/S1/Seq1/annot.mat' (Likewise, there is no S1 folder in mpi_inf_3dhp.)

    I checked the datasets provided on the official websites and could not find the missing files. I hope you can point me in the right direction to solve this. Thank you!

    opened by Wyethjjj 13
  • OpenGL.error.GLError

    OpenGL.error.GLError

    Thanks for your work. I am running your demo with CUDA_VISIBLE_DEVICES=0 python core/test.py --gpu=0 --configs_yml=configs/single_image.yml, but an error occurs. I am running on Ubuntu 16.04 and the Python version is 3.6.9. How should I check what's happening?

    (romp) [email protected]:~/Documents/ROMP/src$ CUDA_VISIBLE_DEVICES=0 python core/test.py --configs_yml=configs/single_image.yml pygame 2.0.1 (SDL 2.0.14, Python 3.6.13) Hello from the pygame community. https://www.pygame.org/contribute.html INFO:root:{'tab': 'hrnet_cm64_single_image_test', 'configs_yml': 'configs/single_image.yml', 'demo_image_folder': '/path/to/image_folder', 'local_rank': 0, 'model_version': 1, 'multi_person': True, 'collision_aware_centermap': False, 'collision_factor': 0.2, 'kp3d_format': 'smpl24', 'eval': False, 'max_person': 64, 'input_size': 512, 'Rot_type': '6D', 'rot_dim': 6, 'centermap_conf_thresh': 0.25, 'centermap_size': 64, 'deconv_num': 0, 'model_precision': 'fp32', 'backbone': 'hrnet', 'gmodel_path': '../trained_models/ROMP_hrnet32.pkl', 'print_freq': 50, 'fine_tune': True, 'gpu': '0', 'batch_size': 64, 'val_batch_size': 1, 'nw': 4, 'calc_PVE_error': False, 'dataset_rootdir': '/home/jack/Documents/dataset/', 'high_resolution': True, 'save_best_folder': '/home/jack/Documents/checkpoints/', 'log_path': '/home/jack/Documents/log/', 'total_param_count': 85, 'smpl_mean_param_path': '/home/jack/Documents/ROMP/models/satistic_data/neutral_smpl_mean_params.h5', 'smpl_model': '/home/jack/Documents/ROMP/models/statistic_data/neutral_smpl_with_cocoplus_reg.txt', 'smplx_model': True, 'cam_dim': 3, 'beta_dim': 10, 'smpl_joint_num': 22, 'smpl_model_path': '/home/jack/Documents/ROMP/models', 'smpl_J_reg_h37m_path': '/home/jack/Documents/ROMP/models/smpl/J_regressor_h36m.npy', 'smpl_J_reg_extra_path': '/home/jack/Documents/ROMP/models/smpl/J_regressor_extra.npy', 'kernel_sizes': [5], 'GPUS': 0, 'use_coordmaps': True, 'webcam': False, 'video_or_frame': False, 'save_visualization_on_img': True, 'output_dir': '/path/to/outputdir', 'save_mesh': True, 'save_centermap': True, 'save_dict_results': True, 'multiprocess': False} INFO:root:------------------------------------------------------------------ INFO:root:start building model. Using ROMP v1 INFO:root:using fine_tune model: ../trained_models/ROMP_hrnet32.pkl INFO:root:finished build model. 
Traceback (most recent call last): File "core/test.py", line 225, in main() File "core/test.py", line 205, in main demo = Demo() File "core/test.py", line 7, in init self.prepare_modules() File "core/test.py", line 14, in prepare_modules self.visualizer = Visualizer(resolution=self.vis_size, input_size=self.input_size,with_renderer=True) File "/home/jack/Documents/ROMP/src/core/../lib/models/../utils/../maps_utils/../dataset/../dataset/../dataset/../visualization/visualization.py", line 23, in init self.renderer = get_renderer(resolution=resolution) File "/home/jack/Documents/ROMP/src/core/../lib/models/../utils/../maps_utils/../dataset/../dataset/../dataset/../visualization/../visualization/renderer.py", line 142, in get_renderer renderer = Renderer(faces,resolution=resolution[:2]) File "/home/jack/Documents/ROMP/src/core/../lib/models/../utils/../maps_utils/../dataset/../dataset/../dataset/../visualization/../visualization/renderer.py", line 72, in init point_size=1.0) File "/home/jack/anaconda3/envs/romp/lib/python3.6/site-packages/pyrender/offscreen.py", line 31, in init self._create() File "/home/jack/anaconda3/envs/romp/lib/python3.6/site-packages/pyrender/offscreen.py", line 149, in _create self._platform.init_context() File "/home/jack/anaconda3/envs/romp/lib/python3.6/site-packages/pyrender/platforms/egl.py", line 188, in init_context EGL_NO_CONTEXT, context_attributes File "/home/jack/anaconda3/envs/romp/lib/python3.6/site-packages/OpenGL/platform/baseplatform.py", line 402, in call return self( *args, **named ) File "/home/jack/anaconda3/envs/romp/lib/python3.6/site-packages/OpenGL/error.py", line 232, in glCheckError baseOperation = baseOperation, OpenGL.error.GLError: GLError( err = 12297, baseOperation = eglCreateContext, cArguments = ( <OpenGL._opaque.EGLDisplay_pointer object at 0x7ff367d4e268>, <OpenGL._opaque.EGLConfig_pointer object at 0x7ff367d4e1e0>, <OpenGL._opaque.EGLContext_pointer object at 0x7ff367e84d08>, <OpenGL.arrays.lists.c_int_Array_7 object at 0x7ff367e64d08>, ), result = <OpenGL._opaque.EGLContext_pointer object at 0x7ff367d311e0> )

    opened by NoLookDefense 13
  • About bpy environment

    About bpy environment

    Hey, I'm back with another problem. I want to run the new feature that exports the mesh to a .fbx file, and it seems the "bpy" package needs to be installed. But when I use pip install bpy, a CMake error always occurs. Which version of CMake were you using when you installed bpy?

    opened by NoLookDefense 12
  • How to render results with weak perspective camera

    How to render results with weak perspective camera

    Thanks a lot for this great and easy-to-use repo!

    I'm trying to render the results using the weak perspective camera model. My question relates to these issues:

    • https://github.com/Arthur151/ROMP/issues/134
    • https://github.com/Arthur151/ROMP/issues/241
    • https://github.com/Arthur151/ROMP/issues/300

    However, none of these issues gave me the answer I was looking for. I am using the weak-perspective camera parameters stored in cam and, as suggested in this issue, I multiply them by 2. I also pad the image to be square as mentioned here. I then convert the weak-perspective camera model to a projection matrix the same way I used to do it for VIBE, which worked well there. However, for the ROMP output I'm still getting a slight misalignment, as you can see in the following screenshot. The light model is what I am rendering, and the blue model in the background is the visualization output from ROMP. I think it's because I should somehow account for cam_trans, but I don't know how exactly. Can you help me with this?


    opened by kaufManu 11
  • How to convert model to pth?

    How to convert model to pth?

    Can I convert the .pkl file to .pth, for further conversion to .ptl?

    I tried to convert it using torch.jit.script, but I get this error:

    Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults
    
    opened by nikkorejz 11
  • How can I output only the 3D positions, and how should I process the results?

    How can I output only the 3D positions, and how should I process the results?

    1. How can I bind the output to a real-world coordinate system? I read the reply at https://github.com/Arthur151/ROMP/issues/372#issuecomment-1345247684 but still don't quite understand how to implement it.
    2. When there are only two people in front of the camera, a scaling effect appears (the people have different scales, as if the camera had zoomed in). How can I eliminate this? No matter how many people there are, I want to keep the original scale.
    opened by zhanghongyong123456 6
  • No such file or directory:ROMP/model_data/parameters/J_regressor_extra.npy

    No such file or directory:ROMP/model_data/parameters/J_regressor_extra.npy

    Hello! An error occurred while running the following command: python -m romp.predict.image --inputs=demo/images --output_dir=demo/image_results --show_mesh_stand_on_image --interactive_vis. It can't find ROMP/model_data/parameters/J_regressor_extra.npy.

    opened by Hrforeverqqqqqq 3
  • Issues training with CMU_Panoptic

    Issues training with CMU_Panoptic

    Hello,

    1. I am trying to train the model starting from the pretrained ResNet on the cmu_panoptic dataset. However, I get the following error:
    Traceback (most recent call last):
      File "HumanObj_videos_ResNet/train.py", line 277, in <module>
        main()
      File "HumanObj_videos_ResNet/train.py", line 273, in main
        trainer.train()
      File "HumanObj_videos_ResNet/train.py", line 77, in train
        self.train_epoch(epoch)
      File "HumanObj_videos_ResNet/train.py", line 192, in train_epoch
        for iter_index, meta_data in enumerate(self.loader):
      File "/z/home/mkhoshle/env/romp2/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
        data = self._next_data()
      File "/z/home/mkhoshle/env/romp2/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
        return self._process_data(data)
      File "/z/home/mkhoshle/env/romp2/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
        data.reraise()
      File "/z/home/mkhoshle/env/romp2/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise
        raise exception
    ValueError: Caught ValueError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/z/home/mkhoshle/env/romp2/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
        data = fetcher.fetch(index)
      File "/z/home/mkhoshle/env/romp2/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/z/home/mkhoshle/env/romp2/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/z/home/mkhoshle/Human_object_transform/HumanObj_videos_ResNet/lib/dataset/mixed_dataset.py", line 79, in __getitem__
        annots = self.datasets[dataset_id][index_sample]
      File "/z/home/mkhoshle/Human_object_transform/HumanObj_videos_ResNet/lib/dataset/image_base.py", line 375, in __getitem__
        return self.get_item_single_frame(index)
      File "/z/home/mkhoshle/Human_object_transform/HumanObj_videos_ResNet/lib/dataset/image_base.py", line 123, in get_item_single_frame
        kp3d, valid_masks[:,1] = self.process_kp3ds(info['kp3ds'], used_person_inds, \
      File "/z/home/mkhoshle/Human_object_transform/HumanObj_videos_ResNet/lib/dataset/image_base.py", line 284, in process_kp3ds
        kp3d_processed[inds] = kp3d
    ValueError: could not broadcast input array from shape (17,3) into shape (54,3)
    

    Do you know what I need to do to avoid this error?

    2. Also, does cmu_panoptic have 2D pose annotations for all the people appearing in every image?

    I would appreciate it if you could help me with this. Thanks!

    opened by mkhoshle 5