DeformableRavens

Code for the paper Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks. See the project website (https://berkeleyautomation.github.io/bags/), which also contains the data we used to train policies.

Installation

This is how to get the code running on a local machine. First, get conda on the machine if it isn't there already:

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh

Then, create a new Python 3.7 conda environment (e.g., named "py3-defs") and activate it:

conda create -n py3-defs python=3.7
conda activate py3-defs

Then install:

./install_python_ubuntu.sh

Note I: It is tested on Ubuntu 18.04. We have not tried other Ubuntu versions or other operating systems.

Note II: Installing TensorFlow using conda is usually easier than pip because the conda version will ship with the correct CUDA and cuDNN libraries, whereas the pip version is a nightmare regarding version compatibility.

Note III: the code has only been tested with PyBullet 3.0.4. In fact, there are some places which explicitly hard-code this requirement. Using later versions may work but is not recommended.
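For reference, a hedged sketch of what the relevant install commands might look like; the exact package names and channels used by install_python_ubuntu.sh may differ, so treat these as assumptions and defer to that script:

# Assumed commands, not copied from install_python_ubuntu.sh -- defer to that script.
conda install tensorflow-gpu    # the conda build bundles matching CUDA/cuDNN libraries
pip install pybullet==3.0.4     # the only PyBullet version the code is tested with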

Environments and Tasks

This repository contains the tasks from the ICRA 2021 submission and from the predecessor Transporter Networks paper (presented at CoRL 2020). For the latter, roughly 10 tasks came pre-shipped; the Transporters paper does not test with pushing or insertion-translation, but tests with all the others. See Tasks.md for task-specific documentation.

Each task subclasses a Task class and needs to define its own reset(). The Task class defines an oracle policy that's used to get demonstrations (so it is not implemented within each task subclass), and is divided into cases depending on the action, or self.primitive, used.

Similarly, different tasks have different reward functions, but all are integrated into the Task super-class and divided based on the self.metric type: pose or zone.
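To make this structure concrete, here is a minimal sketch of what a task subclass might look like. The class name, import path, and scene-setup details below are illustrative assumptions, not the repository's actual API; see the files under ravens/tasks/ for the real implementations.

# Illustrative sketch only: names, import path, and setup details are assumptions.
from ravens import tasks  # hypothetical import; see ravens/tasks/ for the real modules

class ExampleRearrangeTask(tasks.Task):   # each task subclasses Task
    def __init__(self):
        super().__init__()
        self.primitive = 'pick_place'     # selects the oracle's action branch in Task
        self.metric = 'pose'              # selects the reward branch: 'pose' or 'zone'

    def reset(self, env):
        # Only the scene setup is task-specific; the oracle policy and reward
        # logic live in the Task super-class and dispatch on the fields above.
        pass  # e.g., spawn objects and record the target poses used by the 'pose' metric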

Code Usage

Experiments start with python main.py; add --disp to see the PyBullet GUI (not used for large-scale experiments). The general logic of main.py proceeds as follows (an example invocation is shown after the list):

  • Gather expert demonstrations for the task and put them in data/{TASK}, unless a sufficient number of demonstrations already exists. There are sub-directories for action, color, depth, info, etc., which store the data pickle files with consistent indexing per time step. Caution: this will start "counting" the data from the existing data/ directory. If you want entirely fresh data, delete the relevant files in data/.

  • Given the data, train the designated agent. The logged data is stored in logs/{AGENT}/{TASK}/{DATE}/{train}/ in the form of a tfevent file for TensorBoard. Note: it will do multiple training runs for statistical significance.
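As an example invocation (flag names other than --disp are assumptions here; check python main.py --help and Commands.md for the real interface):

# Example only: --task / --agent / --num_demos are assumed flag names.
python main.py --disp --task cable-ring --agent transporter --num_demos 10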

For deformables, we actually use a separate load.py script, due to some issues with creating multiple environments.

See Commands.md for commands to reproduce experimental results.

Downloading the Data

We normally generate 1000 demos for each task. However, this can take a long time, especially for the bag tasks. Pre-generated datasets for all the tasks we tested are available on the project website. For example, suppose we want demonstration data for the "bag-color-goal" task. Download the demonstration data from the website; since this is also a goal-conditioned task, download the goal demonstrations as well. Make new data/ and goals/ directories and put the tar.gz files in the respective directories:

deformable-ravens/
    data/
        bag-color-goal_1000_demos_480Hz_filtered_Nov13.tar.gz
    goals/
        bag-color-goal_20_goals_480Hz_Nov19.tar.gz

Note: if you generate data using the main.py script, it will automatically create the data/ directory, and the generate_goals.py script will similarly create goals/. You only need to create data/ and goals/ manually if you just want to download pre-existing datasets and put them in the right spot.
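For example, assuming the archives were downloaded to ~/Downloads (adjust the path as needed):

mkdir -p data goals
mv ~/Downloads/bag-color-goal_1000_demos_480Hz_filtered_Nov13.tar.gz data/
mv ~/Downloads/bag-color-goal_20_goals_480Hz_Nov19.tar.gz goals/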

Then untar both of them in their respective directories:

tar -zxvf bag-color-goal_1000_demos_480Hz_filtered_Nov13.tar.gz
tar -zxvf bag-color-goal_20_goals_480Hz_Nov19.tar.gz

Now the data should be ready! If you want to inspect and debug the data, for example the goals data, then do:

python ravens/dataset.py --path goals/bag-color-goal/

Note that by default it saves any content in goals/ to goals_out/ and data in data/ to data_out/. Also, by default, it will save images extracted from the data. This can be very computationally intensive for the full 1000 demos (the goals/ data only has 20 demos). You can change this easily in the main method of ravens/dataset.py.

Running the script will print out some interesting data statistics for you.
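If you prefer to poke at the raw files directly, the sub-directories described earlier (action, color, depth, info, etc.) hold per-episode pickle files. A minimal sketch, assuming the goals data has been untarred as shown and that the file layout matches that description:

# Hedged sketch: the directory layout and pickle contents are assumptions based on
# the description above; adapt the path to whatever is actually on disk.
import os, pickle

color_dir = 'goals/bag-color-goal/color'
fname = sorted(os.listdir(color_dir))[0]     # first stored episode
with open(os.path.join(color_dir, fname), 'rb') as f:
    color = pickle.load(f)                   # per-time-step RGB observations
print(fname, type(color), len(color))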

Miscellaneous

If you have questions, please use the public issue tracker, so that all of us can benefit from your questions.

If you find this code or research paper helpful, please consider citing it:

@article{seita_bags_2021,
    author  = {Daniel Seita and Pete Florence and Jonathan Tompson and Erwin Coumans and Vikas Sindhwani and Ken Goldberg and Andy Zeng},
    title   = {{Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks}},
    journal = {arXiv preprint arXiv:2012.03385},
    year    = {2020}
}