Dynamic Environments with Deformable Objects (DEDO)

Overview

DEDO - Dynamic Environments with Deformable Objects

DEDO is a lightweight and customizable suite of environments with deformable objects. It is aimed at researchers in the machine learning, reinforcement learning, robotics, and computer vision communities. The suite provides a set of everyday tasks that involve deformables, such as hanging cloth, dressing a person, and buttoning buttons. We provide examples for integrating two popular reinforcement learning libraries: StableBaselines3 and RLlib. We also provide reference implementations for training various Variational Autoencoder (VAE) variants with our environments. DEDO is easy to set up and has few dependencies; it is highly parallelizable and supports a wide range of customizations, such as loading custom objects and textures and adjusting material properties.


Note: updates for this repo are in progress (until the presentation at NeurIPS 2021 in mid-December).

@inproceedings{dedo2021,
  title={Dynamic Environments with Deformable Objects},
  author={Rika Antonova and Peiyang Shi and Hang Yin and Zehang Weng and Danica Kragic},
  booktitle={Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track},
  year={2021},
}

Table of Contents:
Installation
Getting Started
Tasks
Use with RL
Use with VAE
Customization

Please refer to the Wiki for the full documentation

Installation

Optional initial step: create a new conda environment with conda create --name dedo python=3.7 and activate it with conda activate dedo. Conda is not strictly needed; alternatives like virtualenv can be used, and a direct install without a virtual environment is fine as well.

git clone https://github.com/contactrika/dedo
cd dedo
pip install numpy  # important: necessary for compiling NumPy-enabled PyBullet
pip install -e .

Python 3.7 is recommended: on some OS + CPU combinations we encountered that pip could not compile PyBullet with NumPy enabled under Python 3.8. To enable recording/logging videos, install ffmpeg:

sudo apt-get install ffmpeg
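
After installing, you can verify that your PyBullet build has NumPy support enabled (pybullet.isNumpyEnabled() returns 1 for a NumPy-enabled build):

python -c "import pybullet; print(pybullet.isNumpyEnabled())"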

See more in the Installation Guide in the wiki

Getting started

To get started, run one of the following commands to visualize the tasks with a hard-coded policy.

python -m dedo.demo --env=HangGarment-v1 --viz --debug
  • dedo.demo is the demo module
  • --env=HangGarment-v1 specifies the environment
  • --viz enables the GUI
  • --debug outputs additional information in the console
  • --cam_resolution 400 specifies the size of the output window
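
You can also create and step the environments programmatically. Below is a minimal interaction sketch, assuming the dedo package registers its tasks with OpenAI Gym on import (task names match the --env values above):

import gym
import dedo  # importing dedo registers the DEDO environments with gym

env = gym.make('HangGarment-v1')
obs = env.reset()
for _ in range(100):
    obs, reward, done, info = env.step(env.action_space.sample())  # random actions
    if done:
        break
env.close()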

See more in Usage-guide

Tasks

See more in Task Overview

We provide a set of 10 tasks involving deformable objects; most tasks contain 5 hand-made deformable objects. There are also two procedurally generated tasks, ButtonProc and HangProcCloth, in which the deformable objects are procedurally generated. Furthermore, to improve generalization, the v0 of each task randomizes textures and meshes.

All tasks have -v1 and -v2 versions with a particular choice of meshes and textures that is not randomized. Most tasks have versions up to -v5 with additional mesh and texture variations.

Tasks with procedurally generated cloth (ButtonProc and HangProcCloth) generate random cloth objects for all versions (but randomize textures only in v0).

HangBag

images/gifs/HangBag-v1.gif

python -m dedo.demo_preset --env=HangBag-v1 --viz

HangBag-v0: selects one of 108 bag meshes; randomized textures

HangBag-v[1-3]: three bag versions with textures shown below:

images/imgs/hang_bags_annotated.jpg

HangGarment

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=HangGarment-v1 --viz

HangGarment-v0: hang garment with randomized textures (a few examples below):

HangGarment-v[1-5]: 5 apron meshes and texture combos shown below:

images/imgs/hang_garments_5.jpg

HangGarment-v[6-10]: 5 shirt meshes and texture combos shown below:

images/imgs/hang_shirts_5.jpg

HangProcCloth

images/gifs/HangProcCloth-v1.gif

python -m dedo.demo_preset --env=HangProcCloth-v1 --viz

HangProcCloth-v0: random textures; procedurally generated cloth with 1 or 2 holes.

HangProcCloth-v[1-2]: same, but with either 1 or 2 holes

images/imgs/hang_proc_cloth.jpg

Buttoning

images/gifs/Button-v1.gif

python -m dedo.demo_preset --env=Button-v1 --viz

ButtonProc-v0: randomized textures and procedurally generated cloth with 2 holes, randomized hole/button positions.

ButtonProc-v[1-2]: procedurally generated cloth with 1 or 2 holes.

images/imgs/button_proc.jpg

Button-v0: randomized textures, but fixed cloth and button positions.

Button-v1: fixed cloth and button positions with one texture (see image below):

images/imgs/button.jpg

Hoop

images/gifs/Hoop-v1.gif

python -m dedo.demo_preset --env=Hoop-v1 --viz

Hoop-v0: randomized textures
Hoop-v1: pre-selected textures
images/imgs/hoop_and_lasso.jpg

Lasso

images/gifs/Lasso-v1.gif

python -m dedo.demo_preset --env=Lasso-v1 --viz

Lasso-v0: randomized textures
Lasso-v1: pre-selected textures

DressBag

images/gifs/DressBag-v1.gif

python -m dedo.demo_preset --env=DressBag-v1 --viz

DressBag-v0, DressBag-v[1-5]: demo for -v1 shown below

images/imgs/dress_bag.jpg

Visualizations of the 5 backpack mesh and texture variants for DressBag-v[1-5]:

images/imgs/backpack_meshes.jpg

DressGarment

images/gifs/DressGarment-v1.gif

python -m dedo.demo_preset --env=DressGarment-v1 --viz

DressGarment-v0, DressGarment-v[1-5]: demo for -v1 shown below

images/imgs/dress_garment.jpg

Mask

python -m dedo.demo_preset --env=Mask-v1 --viz

Mask-v0, Mask-v[1-5]: a few texture variants shown below:
images/imgs/dress_garment.jpg

RL Examples

dedo/run_rl_sb3.py gives an example of how to train an RL algorithm from Stable Baselines 3:

python -m dedo.run_rl_sb3 --env=HangGarment-v0 \
    --logdir=/tmp/dedo --num_play_runs=3 --viz --debug
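
For reference, the core of such a script boils down to roughly the following (a minimal sketch, assuming the DEDO environments are registered with gym; run_rl_sb3.py itself additionally handles logging, playback runs, and other options):

import gym
import dedo  # registers the DEDO environments with gym
from stable_baselines3 import PPO

env = gym.make('HangGarment-v0')
model = PPO('MlpPolicy', env, verbose=1)  # PPO as an example SB3 algorithm
model.learn(total_timesteps=10_000)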

dedo/run_rllib.py gives an example of how to train an RL algorithm using RLlib:

python -m dedo.run_rllib --env=HangGarment-v0 \
    --logdir=/tmp/dedo --num_play_runs=3 --viz --debug

For documentation, please refer to the Arguments Reference page in the wiki

To launch TensorBoard:

tensorboard --logdir=/tmp/dedo --bind_all --port 6006 \
  --samples_per_plugin images=1000

SVAE Examples

dedo/run_svae.py gives an example of how to train various flavors of VAE:

python -m dedo.run_svae --env=HangGarment-v0 \
    --logdir=/tmp/dedo --num_play_runs=3 --viz --debug


Customization

To load a custom object, you first have to fill in an entry in DEFORM_INFO in task_info.py. The key should be the .obj file path relative to data/:

DEFORM_INFO = {
    ...
    # An example of info for a custom item.
    'bags/custom.obj': {
        'deform_init_pos': [0, 0.47, 0.47],
        'deform_init_ori': [np.pi/2, 0, 0],
        'deform_scale': 0.1,
        'deform_elastic_stiffness': 1.0,
        'deform_bending_stiffness': 1.0,
        'deform_true_loop_vertices': [
            [0, 1, 2, 3]  # placeholder, since we don't know the true loops
        ]
    },
    ...
}

Then you can use the --override_deform_obj flag:

python -m dedo.demo --env=HangBag-v0 --cam_resolution 200 --viz --debug \
    --override_deform_obj bags/custom.obj

For items not in DEFORM_INFO you will need to specify sensible defaults, for example:

python -m dedo.demo --env=HangGarment-v0 --viz --debug \
  --override_deform_obj=generated_cloth/generated_cloth.obj \
  --deform_init_pos 0.02 0.41 0.63 --deform_init_ori 0 0 1.5708

An example of scaling up a custom mesh object:

python -m dedo.demo --env=HangGarment-v0 --viz --debug \
   --override_deform_obj=generated_cloth/generated_cloth.obj \
   --deform_init_pos 0.02 0.41 0.55 --deform_init_ori 0 0 1.5708 \
   --deform_scale 2.0 --anchor_init_pos -0.10 0.40 0.70 \
   --other_anchor_init_pos 0.10 0.40 0.70

See more in the Customization Wiki

Additional Assets

The BGarment dataset is adapted from the Berkeley Garment Library

The Sewing dataset is adapted from Generating Datasets of 3D Garments with Sewing Patterns


Comments
  • Adding Point Cloud Observations to DEDO

    This PR adds point cloud (pcd) rendering to DEDO. Summary of changes:

    • Point cloud data extracted from sim environment based on a set of object ids that we want to retain
    • Depth cameras are instantiated using a cameraConfig class, which abstracts out the various camera configurations needed.
    • The cameraConfig class loads camera configs from JSON (for easy loading & sharing of camera configs), or directly by instantiation (if you know how you want to dynamically set your camera).
    • Some sample JSON camera configs are provided (4 total)
    • Unprojection from depth image to point cloud is vectorized, so rendering point cloud observations adds negligible runtime to the overall pipeline (should benchmark this?); a rough sketch of the idea appears after this list.
    • The original deform_env had to be adjusted so that the deformable object would have ID 0. For some reason, pybullet only renders the deformable if this is true.
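
    A rough sketch of that vectorized unprojection (illustrative only: the pinhole intrinsics fx, fy, cx, cy and the helper name are assumptions of this sketch, not names from the PR):

    import numpy as np

    def depth_to_pointcloud(depth, fx, fy, cx, cy):
        # Unproject all pixels at once; depth is an H x W array of metric depths.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)  # N x 3 point cloud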

    Known issues:

    • The floor has disappeared from the visual.
    opened by edwin-pan 3
  • Enables base motion on fetch robot with 1 anchor

    Changes allow the fetch robot to move towards the hanger with an apron.

    Google Doc that explains the changes: https://docs.google.com/document/d/18_9K29K4N6atvtqUxIqKhq6Bt0YSPhQgfWldUWdyvLM/edit?usp=sharing

    There are some TODOs related to removing some hardcoded values and improving the results.

    opened by Nishantjannu 0
Releases(v0.1)
  • v0.1(Jan 11, 2022)

    This is the initial release of the code and functionality presented at the 35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks in December 2021.

    Source code(tar.gz)
    Source code(zip)
Owner
Rika
Sim-to-real with Reinforcement Learning, Variational Inference, Bayesian Optimization