[NeurIPS 2021] PyTorch Code for Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives

Overview

Robot Action Primitives (RAPS)

This repository is the official implementation of Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives (RAPS).

[Project Website]

Murtaza Dalal, Deepak Pathak*, Ruslan Salakhutdinov*
(* equal advising)

CMU


If you find this work useful in your research, please cite:

@inproceedings{dalal2021raps,
    Author = {Dalal, Murtaza and Pathak, Deepak and
              Salakhutdinov, Ruslan},
    Title = {Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives},
    Booktitle = {NeurIPS},
    Year = {2021}
}

Requirements

To install dependencies, please run the following commands:

sudo apt-get update
sudo apt-get install curl \
    git \
    libgl1-mesa-dev \
    libgl1-mesa-glx \
    libglew-dev \
    libosmesa6-dev \
    software-properties-common \
    net-tools \
    unzip \
    vim \
    virtualenv \
    wget \
    xpra \
    xserver-xorg-dev
sudo apt-get install libglfw3-dev libgles2-mesa-dev patchelf
sudo mkdir /usr/lib/nvidia-000
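
The environment variables in the next step assume MuJoCo 2.0 is installed at ~/.mujoco/mujoco200 with a license key at ~/.mujoco/mjkey.txt. If you do not have it yet, here is a minimal sketch of the usual setup; the download URL and key path are assumptions based on the standard MuJoCo 2.0 distribution, not part of this repository:

mkdir -p ~/.mujoco
wget https://www.roboti.us/download/mujoco200_linux.zip -O /tmp/mujoco200_linux.zip  # assumed standard MuJoCo 2.0 download
unzip /tmp/mujoco200_linux.zip -d ~/.mujoco
mv ~/.mujoco/mujoco200_linux ~/.mujoco/mujoco200  # match the path used in the exports below
cp /path/to/your/mjkey.txt ~/.mujoco/mjkey.txt  # MuJoCo license key (path is a placeholder)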

Please add the following lines to your ~/.bashrc:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mujoco200/bin
export MUJOCO_GL='egl'
export MKL_THREADING_LAYER=GNU
export D4RL_SUPPRESS_IMPORT_ERROR='1'
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/nvidia-000
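
After adding these lines, reload the configuration so they take effect in the current shell:

source ~/.bashrc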

To install the Python requirements:

conda create -n raps python=3.7
conda activate raps
./setup_python_env.sh <absolute path to raps>
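
For example, if the repository were cloned to /home/user/raps (a hypothetical path), the call would be:

./setup_python_env.sh /home/user/raps  # replace with your own absolute clone path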

Training and Evaluation

Kitchen

Prior to running any experiments, make sure to run cd /path/to/raps/rlkit

single task env names:

  • microwave
  • kettle
  • slide_cabinet
  • hinge_cabinet
  • light_switch
  • top_left_burner

multi task env names:

  • microwave_kettle_light_top_left_burner (Sequential Multi Task 1)
  • hinge_slide_bottom_left_burner_light (Sequential Multi Task 2)

To train RAPS with Dreamer on any single task kitchen environment, run:

python experiments/kitchen/dreamer/dreamer_v2_single_task_primitives.py --mode here_no_doodad --exp_prefix <> --env <env name>
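
For example, to train on the microwave task (the exp_prefix value is an arbitrary example; it only names the logging directory under data/ used for visualization later):

python experiments/kitchen/dreamer/dreamer_v2_single_task_primitives.py --mode here_no_doodad --exp_prefix raps_microwave --env microwave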

To train RAPS with Dreamer on the multi task kitchen environments, run:

python experiments/kitchen/dreamer/dreamer_v2_multi_task_primitives.py --mode here_no_doodad --exp_prefix <> --env <env name>
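
For example, for the first sequential multi task environment (again with an arbitrary example prefix):

python experiments/kitchen/dreamer/dreamer_v2_multi_task_primitives.py --mode here_no_doodad --exp_prefix raps_smt1 --env microwave_kettle_light_top_left_burner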

To train Raw Actions with Dreamer on any kitchen environment, run:

python experiments/kitchen/dreamer/dreamer_v2_raw_actions.py --mode here_no_doodad --exp_prefix <> --env <env name>

To train RAPS with RAD on any single task kitchen environment, run:

python experiments/kitchen/rad/rad_single_task_primitives.py --mode here_no_doodad --exp_prefix <> --env <env name>

To train RAPS with RAD on any multi task kitchen environment, run:

python experiments/kitchen/rad/rad_multi_task_primitives.py --mode here_no_doodad --exp_prefix <> --env <env name>

To train Raw Actions with RAD on any kitchen environment, run:

python experiments/kitchen/rad/rad_raw_actions.py --mode here_no_doodad --exp_prefix <> --env <env name>

To train RAPS with PPO on any single task kitchen environment, run:

python experiments/kitchen/ppo/ppo_single_task_primitives.py --mode here_no_doodad --exp_prefix <> --env <env name>

To train RAPS with PPO on any multi task kitchen environment, run:

python experiments/kitchen/ppo/ppo_multi_task_primitives.py --mode here_no_doodad --exp_prefix <> --env <env name>

To train Raw Actions with PPO on any kitchen environment, run:

python experiments/kitchen/ppo/ppo_raw_actions.py --mode here_no_doodad --exp_prefix <> --env <env name>

Metaworld

single task env names:

  • drawer-close-v2
  • soccer-v2
  • peg-unplug-side-v2
  • sweep-into-v2
  • assembly-v2
  • disassemble-v2

To train RAPS with Dreamer on any Metaworld environment, run:

python experiments/metaworld/dreamer/dreamer_v2_single_task_primitives.py --mode here_no_doodad --exp_prefix <> --env <env name>
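
For example, to train on drawer-close-v2 (the prefix is again an arbitrary example):

python experiments/metaworld/dreamer/dreamer_v2_single_task_primitives.py --mode here_no_doodad --exp_prefix raps_drawer_close --env drawer-close-v2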

To train Raw Actions with Dreamer on any Metaworld environment, run:

python experiments/metaworld/dreamer/dreamer_v2_single_task_raw_actions.py --mode here_no_doodad --exp_prefix <> --env <env name>

Robosuite

To train RAPS with Dreamer on the Robosuite Lift task, run:

python experiments/robosuite/dreamer/dreamer_v2_single_task_primitives_lift.py --mode here_no_doodad --exp_prefix <>

To train Raw Actions with Dreamer on the Robosuite Lift task, run:

python experiments/robosuite/dreamer/dreamer_v2_single_task_raw_actions_lift.py --mode here_no_doodad --exp_prefix <>

To train RAPS with Dreamer on the Robosuite Door task, run:

python experiments/robosuite/dreamer/dreamer_v2_single_task_primitives_door.py --mode here_no_doodad --exp_prefix <>

To train Raw Actions with Dreamer on the Robosuite Door task, run:

python experiments/robosuite/dreamer/dreamer_v2_single_task_raw_actions_door.py --mode here_no_doodad --exp_prefix <>

Learning Curve Visualization

cd /path/to/raps/rlkit
python ../viskit/viskit/frontend.py data/<exp_prefix>

Then open localhost:5000 in your browser to view the learning curves.
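
For example, to plot a run logged with the hypothetical prefix raps_microwave used above:

python ../viskit/viskit/frontend.py data/raps_microwave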