[CoRL 2021] A robotics benchmark for cross-embodiment imitation.

x-magical

Overview

x-magical is a benchmark extension of MAGICAL geared specifically towards cross-embodiment imitation. The tasks still provide the Demo/Test structure that lets one evaluate how well imitation or reward learning techniques generalize the demonstrator's intent to substantially different deployment settings, but there is an added axis of variation: how well these techniques adapt to systematic embodiment gaps between the demonstrator and the learner. This is a challenging problem, as different embodiments are likely to use distinct strategies, suited to their own capabilities, to make progress on a task.

Embodiments in an x-magical task must still learn the same set of general skills, like 2D perception and manipulation, but they are specifically designed to solve the task in different ways due to differences in their end-effectors, shapes, dynamics, etc. For example, in the sweeping task, some agents can sweep all the debris in one motion while others need to sweep them one at a time. These differences in execution speed and state-action trajectories pose challenges for current learning-from-demonstration (LfD) techniques, and the ability to generalize across embodiments is precisely what this benchmark evaluates.

x-magical is under active development - stay tuned for more tasks and embodiments!

Tasks, Embodiments and Variants

Each task in x-magical can be instantiated with a particular embodiment, which changes the nature of the robotic agent. Additionally, each task can be instantiated in a particular variant, which changes one or more semantic aspects of the environment. Both axes of variation are meant to evaluate combinatorial generalization. We list the task-embodiment pairings below:

Task Description
SweepToTop: The agent must sweep all three debris to the goal zone shaded in pink. Embodiments: Gripper, Shortstick, MediumStick, Longstick. Variants: all except Jitter.
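
To check programmatically which embodiments (or variants) are registered for a given task, you can filter the environment IDs exposed by the registry helpers shown under Usage below. The following is a minimal sketch; it assumes only that xmagical.ALL_REGISTERED_ENVS is an iterable of ID strings and that IDs follow the <task>-<embodiment>-<observation_space>-<view_mode>-<variant>-v0 naming scheme documented in that section:

import xmagical

# Register all x-magical environments with Gym, then inspect the registry.
xmagical.register_envs()

# IDs follow <task>-<embodiment>-<observation_space>-<view_mode>-<variant>-v0,
# so the second field is the embodiment name.
embodiments = sorted({
    env_id.split("-")[1]
    for env_id in xmagical.ALL_REGISTERED_ENVS
    if env_id.startswith("SweepToTop-")
})
print(embodiments)  # The embodiments listed above for SweepToTop.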

Here is a description of what each variant modifies:

Variant Description
Demo: The default variant with no randomization, i.e. the same initial state across reset().
Jitter: The positions and orientations of all objects, and the sizes of goal regions, are jittered by up to 5% of the maximum range.
Layout: Positions and rotations of all objects are completely randomized. What constitutes an "object" is task-dependent, i.e. some tasks might not randomize the pose of the robotic agent, just the pushable shapes.
Color: The colors of blocks and goal regions are randomized, subject to task-specific constraints.
Shape: The shapes of pushable blocks are randomized, again subject to task-specific constraints.
Dynamics: The mass and friction of objects are randomized.
All: All applicable randomizations are applied.
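
As a quick sanity check of the Demo vs. Test semantics, the sketch below resets an environment twice and compares the initial observations. It only uses the two environment IDs that appear in the Usage example further down; the equality checks are simply meant to illustrate that Demo is deterministic across reset() while the Layout variant re-randomizes object poses.

import gym
import numpy as np
import xmagical

xmagical.register_envs()

# Demo variant: no randomization, so two resets yield the same initial observation.
env = gym.make('SweepToTop-Gripper-Pixels-Allo-Demo-v0')
first = env.reset().copy()
second = env.reset()
print(np.array_equal(first, second))  # Expected: True
env.close()

# Test (Layout) variant: object poses are re-randomized on every reset,
# so successive initial observations will almost surely differ.
env = gym.make('SweepToTop-Shortstick-State-Ego-TestLayout-v0')
print(np.array_equal(env.reset(), env.reset()))  # Expected: False (with high probability)
env.close()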

Usage

x-magical environments are available in the Gym registry and can be constructed via string specifiers of the form <task>-<embodiment>-<observation_space>-<view_mode>-<variant>-v0, where:

  • task: The name of the desired task. See above for the full list of available tasks.
  • embodiment: The embodiment to use for the robotic agent. See above for the list of supported embodiments per task.
  • observation_space: Whether to use pixel or state-based observations. All environments support pixel observations, but not all of them provide state-based observation spaces.
  • view_mode: Whether to use an allocentric or egocentric agent view.
  • variant: The variant of the task to use. See above for the full list of variants.

For example, here's a short code snippet that illustrates this usage:

import gym
import xmagical

# This must be called before making any Gym envs.
xmagical.register_envs()

# List all available environments.
print(xmagical.ALL_REGISTERED_ENVS)

# Create a demo variant for the SweepToTop task with a gripper agent.
env = gym.make('SweepToTop-Gripper-Pixels-Allo-Demo-v0')
obs = env.reset()
print(obs.shape)  # (384, 384, 3)
env.render(mode='human')
env.close()

# Now create a test variant of this task with a shortstick agent,
# an egocentric view and a state-based observation space.
env = gym.make('SweepToTop-Shortstick-State-Ego-TestLayout-v0')
init_obs = env.reset()
print(init_obs.shape)  # (16,)
env.close()
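
After construction, an x-magical environment is driven like any other Gym environment. The snippet below is a minimal sketch of a random-policy rollout; it assumes the classic four-tuple step() interface consistent with the example above, and nothing about the reward or episode length beyond what the task itself defines:

import gym
import xmagical

xmagical.register_envs()

env = gym.make('SweepToTop-Gripper-Pixels-Allo-Demo-v0')
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    # Sample a random action from this embodiment's action space.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    total_reward += reward
print('Random-policy return:', total_reward)
env.close()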

Installation

x-magical requires Python 3.8 or higher. We recommend using an Anaconda environment for installation. You can create one with the following:

conda create -n xmagical python=3.8
conda activate xmagical

Installing PyPI release

pip install x-magical

Installing from source

Clone the repository and install in editable mode:

git clone https://github.com/kevinzakka/x-magical.git
cd x-magical
pip install -r requirements.txt
pip install -e .

Contributing

If you'd like to contribute to this project, you should install the extra development dependencies as follows:

pip install -e .[dev]

Acknowledgments

A big thank you to Sam Toyer, the developer of MAGICAL, for the valuable help and discussions he provided during the development of this benchmark. Please consider citing MAGICAL if you find this repository useful:

@inproceedings{toyer2020magical,
  author    = {Sam Toyer and Rohin Shah and Andrew Critch and Stuart Russell},
  title     = {The {MAGICAL} Benchmark for Robust Imitation},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2020}
}

Additionally, we'd like to thank Brent Yi for fruitful technical discussions and various debugging sessions.


Comments
  • Pymunk error when trying to run the basic example

    I created a new conda env with Python 3.8 and tried to run the basic example:

    import gym
    import xmagical
    
    # This must be called before making any Gym envs.
    xmagical.register_envs()
    
    # List all available environments.
    print(xmagical.ALL_REGISTERED_ENVS)
    
    # Create a demo variant for the SweepToTop task with a gripper agent.
    env = gym.make('SweepToTop-Gripper-Pixels-Allo-Demo-v0')
    obs = env.reset()
    

    I get the following error trace:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/ramans/miniconda3/envs/xmagical/lib/python3.8/site-packages/gym/wrappers/order_enforcing.py", line 16, in reset
        return self.env.reset(**kwargs)
      File "/home/ramans/miniconda3/envs/xmagical/lib/python3.8/site-packages/xmagical/benchmarks/sweep_to_top.py", line 192, in reset
        obs = super().reset()
      File "/home/ramans/miniconda3/envs/xmagical/lib/python3.8/site-packages/xmagical/base_env.py", line 201, in reset
        self.add_entities([self._arena])
      File "/home/ramans/miniconda3/envs/xmagical/lib/python3.8/site-packages/xmagical/base_env.py", line 140, in add_entities
        entity.setup(self.renderer, self._space, self._phys_vars)
      File "/home/ramans/miniconda3/envs/xmagical/lib/python3.8/site-packages/xmagical/entities/arena.py", line 38, in setup
        self.add_to_space(*arena_segments)
      File "/home/ramans/miniconda3/envs/xmagical/lib/python3.8/site-packages/xmagical/entities/base.py", line 100, in add_to_space
        _add(obj)
      File "/home/ramans/miniconda3/envs/xmagical/lib/python3.8/site-packages/xmagical/entities/base.py", line 80, in _add
        self.space.add(obj)
      File "/home/ramans/miniconda3/envs/xmagical/lib/python3.8/site-packages/pymunk/space.py", line 401, in add
        self._add_shape(o)
      File "/home/ramans/miniconda3/envs/xmagical/lib/python3.8/site-packages/pymunk/space.py", line 441, in _add_shape
        assert (
    AssertionError: The shape's body must be added to the space before (or at the same time) as the shape.
    

    Would this be due to some version pinning issue?

    opened by sai-prasanna 6
  • Bump pillow from 8.2.0 to 8.3.2

    Bumps pillow from 8.2.0 to 8.3.2, picking up the Pillow 8.3.2 security fixes (CVE-2021-23437 and a 6-byte out-of-bounds read in FliDecode).
    dependencies 
    opened by dependabot[bot] 1