RGB-stacking 🛑 🟩 🔷 for robotic manipulation


BLOG | PAPER | VIDEO

Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes,
Alex X. Lee*, Coline Devin*, Yuxiang Zhou*, Thomas Lampe*, Konstantinos Bousmalis*, Jost Tobias Springenberg*, Arunkumar Byravan, Abbas Abdolmaleki, Nimrod Gileadi, David Khosid, Claudio Fantacci, Jose Enrique Chen, Akhil Raju, Rae Jeong, Michael Neunert, Antoine Laurens, Stefano Saliceti, Federico Casarini, Martin Riedmiller, Raia Hadsell, Francesco Nori.
In Conference on Robot Learning (CoRL), 2021.

The RGB environment

This repository contains an implementation of the simulation environment described in the paper "Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes". Note that this is a re-implementation of the environment, done to remove dependencies on internal libraries. As a result, not all of the features described in the paper are available at this point. Notably, domain randomization is not included in this release. We also aim to provide reference performance metrics of trained policies on this environment in the near future.

In this environment, the agent controls a robot arm with a parallel gripper above a basket that contains three objects: one red, one green, and one blue (hence the name RGB). The agent's task is to stack the red object on top of the blue object within 20 seconds, while the green object serves as an obstacle and distraction. The agent controls the robot using a 4D Cartesian controller; the controlled DOFs are x, y, z, and rotation around the z axis. The simulation is a MuJoCo environment built using the Modular Manipulation (MoMa) framework.
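For orientation, below is a minimal interaction sketch. It assumes the environment follows the standard dm_env interface used by MoMa-based environments (reset/step returning timesteps, and a bounded action spec); the random-action loop is purely illustrative and not part of this release.

import numpy as np

from rgb_stacking import environment

# Minimal sketch: drive the arm with uniformly random actions for one episode.
env = environment.rgb_stacking()
action_spec = env.action_spec()
timestep = env.reset()
while not timestep.last():
    # Sample an action within the Cartesian controller's bounds.
    action = np.random.uniform(
        action_spec.minimum, action_spec.maximum,
        size=action_spec.shape).astype(action_spec.dtype)
    timestep = env.step(action)
env.close()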

Corresponding method

The RGB-stacking paper "Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes" also contains a description and thorough evaluation of our initial solution to both 'Skill Mastery' (training on the 5 designated test triplets and evaluating on them) and 'Skill Generalization' (training on triplets of training objects and evaluating on the 5 test triplets). Our approach was to first train a state-based policy in simulation via a standard RL algorithm (we used MPO), followed by interactive distillation of the state-based policy into a vision-based policy (using a domain-randomized version of the environment), which we then deployed to the robot via zero-shot sim-to-real transfer. We finally improved the policy further via offline RL (we used CRR) on data collected with the sim-to-real policy. For details on our method and the results, please consult the paper.

Installing and visualizing the environment

Please ensure that you have a working MuJoCo200 installation and a valid MuJoCo licence.

  1. Clone this repository:

    git clone https://github.com/deepmind/rgb_stacking.git
    cd rgb_stacking
  2. Prepare a Python 3 environment - venv is recommended.

    python3 -m venv rgb_stacking_venv
    source rgb_stacking_venv/bin/activate
  3. Install dependencies:

    pip install -r requirements.txt
  4. Run the environment viewer:

    python -m rgb_stacking.main

Steps 2-4 can also be performed by running the run.sh script:

./run.sh

Specifying the object triplet

The default environment loads Triplet 4 (see Sect. 3.2.1 in the paper). If you wish to use a different triplet, you can use the following code:

from rgb_stacking import environment

env = environment.rgb_stacking(object_triplet=NAME_OF_SET)

The possible values of NAME_OF_SET are:

  • rgb_test_triplet{i} where i is one of 1, 2, 3, 4, 5: Loads test triplet i.
  • rgb_test_random: Randomly loads one of the 5 test triplets.
  • rgb_train_random: Triplet composed of blocks from the training set.
  • rgb_heldout_random: Triplet composed of blocks from the held-out set.
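
For example, here is a minimal sketch that loads one specific test triplet (any name from the list above works the same way):

from rgb_stacking import environment

# Load test triplet 2 explicitly, following the rgb_test_triplet{i} pattern.
env = environment.rgb_stacking(object_triplet='rgb_test_triplet2')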

For more information on the blocks and the possible options, please refer to the rgb_objects repository.

Specifying the observation space

By default, the observations exposed by the environment are only the ones we used for training our state-based agents. To use another set of observations, use the following code snippet:

from rgb_stacking import environment

env = environment.rgb_stacking(
    observations=environment.ObservationSet.CHOSEN_SET)

The possible values of CHOSEN_SET are:

  • STATE_ONLY: Only the state observations, used for training expert policies from state in simulation (stage 1).
  • VISION_ONLY: Only image observations.
  • ALL: All observations.
  • INTERACTIVE_IMITATION_LEARNING: Pair of image observations and a subset of proprioception observations, used for interactive imitation learning (stage 2).
  • OFFLINE_POLICY_IMPROVEMENT: Pair of image observations and a subset of proprioception observations, used for the one-step offline policy improvement (stage 3).
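
As a usage sketch, the snippet below constructs one environment per training stage, using the stage-to-observation-set mapping from the list above; the training code itself is not part of this release, so only the environment construction is shown.

from rgb_stacking import environment

# Stage 1: state-based RL in simulation.
state_env = environment.rgb_stacking(
    observations=environment.ObservationSet.STATE_ONLY)

# Stage 2: interactive distillation into a vision-based policy.
distill_env = environment.rgb_stacking(
    observations=environment.ObservationSet.INTERACTIVE_IMITATION_LEARNING)

# Stage 3: one-step offline policy improvement.
offline_env = environment.rgb_stacking(
    observations=environment.ObservationSet.OFFLINE_POLICY_IMPROVEMENT)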

Real RGB-Stacking Environment: CAD models and assembly instructions

The CAD model of the setup is available on Onshape.

We also provide the following documents for the assembly of the real cell:

  • Assembly instructions for the basket.
  • Assembly instructions for the robot.
  • Assembly instructions for the cell.
  • The bill of materials of all the necessary parts.
  • A diagram with the wiring of the cell.

The RGB-objects themselves can be 3D-printed using the STLs available in the rgb_objects repository.

Citing

If you use rgb_stacking in your work, please cite the accompanying paper:

@inproceedings{lee2021rgbstacking,
    title={Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes},
    author={Alex X. Lee and
            Coline Devin and
            Yuxiang Zhou and
            Thomas Lampe and
            Konstantinos Bousmalis and
            Jost Tobias Springenberg and
            Arunkumar Byravan and
            Abbas Abdolmaleki and
            Nimrod Gileadi and
            David Khosid and
            Claudio Fantacci and
            Jose Enrique Chen and
            Akhil Raju and
            Rae Jeong and
            Michael Neunert and
            Antoine Laurens and
            Stefano Saliceti and
            Federico Casarini and
            Martin Riedmiller and
            Raia Hadsell and
            Francesco Nori},
    booktitle={Conference on Robot Learning (CoRL)},
    year={2021},
    url={https://openreview.net/forum?id=U0Q8CrtBJxJ}
}