Continual World

Continual World is a benchmark for continual reinforcement learning. It contains realistic robotic tasks that come from MetaWorld.

The core of our benchmark is the CW20 sequence, in which 20 tasks are run, each with a budget of 1M steps.

We provide the complete source code for the benchmark, together with implementations of the tested algorithms and code for producing result tables and plots.

See also the paper and the website.

[Figure: the CW20 task sequence]

Installation

You can either install directly in a Python environment (like virtualenv or conda), or build a container -- Docker or Singularity.

Standard installation (directly in environment)

First, you'll need the MuJoCo simulator. Please follow the instructions from the mujoco_py package. As MuJoCo has been made freely available, you can obtain a free license here.
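For reference, a typical mujoco_py setup looks roughly like the following. This is only a sketch assuming MuJoCo 2.0 on Linux and the default ~/.mujoco paths that mujoco_py expects; the linked mujoco_py instructions are authoritative.

mkdir -p ~/.mujoco
unzip mujoco200_linux.zip -d ~/.mujoco        # binaries downloaded from the MuJoCo site
mv ~/.mujoco/mujoco200_linux ~/.mujoco/mujoco200
cp mjkey.txt ~/.mujoco/                       # the (now free) license key
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/.mujoco/mujoco200/bin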

Next, go to the main directory of this repo and run

pip install .

Alternatively, if you want to install in editable mode, run

pip install -e .
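To sanity-check the installation, you can try importing the package (assuming it is importable as continualworld, matching the package name used in the Docker section below):

python3 -c "import continualworld"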

Docker image

  • To build the image with the continualworld package installed inside, run docker build . -f assets/Dockerfile -t continualworld

  • To build the image WITHOUT the continualworld package but with all the dependencies installed, run docker build . -f assets/Dockerfile -t continualworld --build-arg INSTALL_CW_PACKAGE=false

When the image is ready, you can run

docker run -it continualworld bash

to get a shell inside the container.
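If you want logs to survive after the container exits, you can mount a host directory when starting it. The container-side path below is an assumption -- adjust it to wherever the repository lives inside your image:

docker run -it -v $(pwd)/logs:/continualworld/logs continualworld bash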

Singularity image

  • To build the image with the continualworld package installed inside, run singularity build continualworld.sif assets/singularity.def

  • To build the image WITHOUT the continualworld package but with all the dependencies installed, run singularity build continualworld.sif assets/singularity_only_deps.def

When the image is ready, you can run

singularity shell continualworld.sif

to get a shell inside the container.
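You can also run a single command non-interactively with singularity exec, for example (the command shown is just an illustration):

singularity exec continualworld.sif python3 run_single.py --help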

Running

You can run single-task, continual learning, or multi-task learning experiments with the run_single.py, run_cl.py, and run_mt.py scripts, respectively.

To see the available script arguments, run with the --help option, e.g.

python3 run_single.py --help

Examples

Below are example commands that run experiments at a very limited scale.

Single task

python3 run_single.py --seed 0 --steps 2e3 --log_every 250 --task hammer-v1 --logger_output tsv tensorboard

Continual learning

python3 run_cl.py --seed 0 --steps_per_task 2e3 --log_every 250 --tasks CW20 --cl_method ewc --cl_reg_coef 1e4 --logger_output tsv tensorboard

Multi-task learning

python3 run_mt.py --seed 0 --steps_per_task 2e3 --log_every 250 --tasks CW10 --use_popart True --logger_output tsv tensorboard
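Because the commands above include tensorboard in --logger_output, you can watch training live with TensorBoard. The log directory below is an assumption -- point --logdir at wherever your run actually writes its event files:

tensorboard --logdir logs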

Reproducing the results from the paper

Commands to run the experiments that reproduce the main results from the paper can be found in examples/paper_cl_experiments.sh, examples/paper_mt_experiments.sh and examples/paper_single_experiments.sh. Because of the number of different runs these files contain, it is infeasible to simply run them sequentially. We hope, though, that these files will be helpful, as they precisely specify what needs to be run.
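If you do have the resources, one simple way to dispatch many of these runs concurrently is GNU parallel (purely an illustration, not part of the repo; it assumes one self-contained command per line in the script and should be adapted to your cluster or scheduler):

parallel --jobs 8 < examples/paper_cl_experiments.sh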

After the logs from the runs are gathered, you can produce tables and plots -- see the section below.

Producing result tables and plots

After you've run experiments and saved the logs, you can run the following script to produce result tables and plots:

python produce_results.py --cl_logs examples/logs/cl --mtl_logs examples/logs/mtl --baseline_logs examples/logs/baseline

In this command, the respective arguments should be replaced with paths to directories containing logs from continual learning experiments, multi-task experiments, and baseline (single-task) experiments. Each of these should be a directory containing multiple experiments, for different methods and/or seeds. You can see the directory structure in the example logs included in the command above.
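As a rough illustration, the expected layout is along these lines (the run directory names here are made up for the example; the example logs referenced above show the real structure):

examples/logs/cl/
    ewc_seed0/
    ewc_seed1/
    packnet_seed0/
examples/logs/mtl/
    mt_sac_seed0/
examples/logs/baseline/
    single_task_seed0/

where each leaf directory holds the logs written by one run (e.g. the tsv output produced with --logger_output tsv).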

Results will be produced and saved, by default, to the results directory.

Alternatively, check out the nb_produce_results.ipynb notebook to see the plots and tables directly in a notebook.

Download our saved logs and produce results

You can download the logs of the experiments that reproduce the paper's results from here. Then unzip the file and run

python produce_results.py --cl_logs saved_logs/cl --mtl_logs saved_logs/mt --baseline_logs saved_logs/single

to produce tables and plots.

As a result, a CSV file with the results will be produced, as well as plots like this one (and more!):

[Plot: average performance]

Full output can be found here.

Acknowledgements

Continual World heavily relies on MetaWorld.

The implementation of SAC used in our code comes from Spinning Up in Deep RL.

Our research was supported by the PLGrid infrastructure.

Our experiments were managed using Neptune.
