SILG

This repository contains source code for the Situated Interactive Language Grounding (SILG) benchmark. If you find this work helpful, please consider citing it:

@inproceedings{zhong2021silg,
  title={{SILG}: The Multi-environment Symbolic Interactive Language Grounding Benchmark},
  author={Victor Zhong and Austin W. Hanjie and Karthik Narasimhan and Luke Zettlemoyer},
  booktitle={NeurIPS},
  year={2021}
}

Please also consider citing the individual tasks included in SILG. They are RTFM, Messenger, the NetHack Learning Environment, ALFWorld, and Touchdown.

Asciinema recordings demonstrating each of the five environments (RTFM, Messenger, SILGNethack, ALFWorld, and SILGSymTouchdown) are shown here.

How to install

You have to install the individual environments in order for SILG to work; please refer to each environment's own GitHub repository for installation details.

Our Dockerfile also provides an example of how to install the environments on Ubuntu. You can also try using our install_envs.sh, which has only been tested on Ubuntu and macOS.

bash install_envs.sh

Once you have installed the individual environments, install SILG as follows

pip install -r requirements.txt
pip install -e .

Some environments have (potentially a large quantity of) data files. Please download these via

bash download_env_data.sh  # if you do not want to use VisTouchdown, feel free to comment out its very large feature file

As part of this download, a ./cache directory is symlinked to ./mycache. SILG environments will pull data files from this directory. If you are on NFS, you might want to move mycache to local disk and then relink the cache directory to avoid hitting NFS.

Docker

We provide a Docker container for this project. You can build the Docker image via docker build -t vzhong/silg . -f docker/Dockerfile. Alternatively, you can pull my build via docker pull vzhong/silg. The image contains the environments as well as SILG, but does not contain the large data download. You will still have to download the environment data and then mount the cache folder into the container. You may need to pass --platform linux/amd64 to Docker if you are running on an M1 Mac.

Because some of the environments require that you install them before downloading their data files, you will want to perform the download using the Docker container as well. You can do so via

docker run --rm --user "$(id -u):$(id -g)" -v $PWD/download_env_data.sh:/opt/silg/download_env_data.sh -v $PWD/mycache:/opt/silg/cache vzhong/silg bash download_env_data.sh

Once you have downloaded the environment data, you can use the container by doing something like

docker run --rm --user "$(id -u):$(id -g)" -it -v $PWD/mycache:/opt/silg/cache vzhong/silg /bin/bash

Visualizing environments

We provide a script to play SILG environments in the terminal. You can access it via

silg_play --env silg:rtfm_train_s1-v0  # use -h to see options

# docker variant
docker run --rm -it -v $PWD/mycache:/opt/silg/cache vzhong/silg silg_play --env silg:rtfm_train_s1-v0

These recordings are shown at the start of this document and are created using asciinema.
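
You can also interact with a SILG environment programmatically. The sketch below assumes the silg package registers its environments with Gym under the silg namespace (as the env id above suggests) and uses the classic Gym reset/step API; the observation keys and action encoding depend on each wrapper's get_observation_space and get_max_actions, so treat the loop as illustrative rather than authoritative.

import gym

# Minimal interaction sketch; the env id matches the one used with silg_play.
env = gym.make('silg:rtfm_train_s1-v0')
obs = env.reset()
for _ in range(10):  # take a few placeholder steps
    action = 0  # placeholder; replace with your agent's action choice
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()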

How to run experiments

The entrypoint to experiments is run_exp.py. We provide a Slurm launcher in launch.py. These scripts can also run jobs locally (i.e. without Slurm). For example, to run RTFM:

python launch.py --local --envs rtfm

You can also log to WandB with the --wandb option. For more options, use the -h flag.

How to add a new environment

First, create a wrapper class in silg/envs/<your_env>.py. This wrapper wraps the real environment and provides the APIs used by the baseline models and the training script. silg/envs/rtfm.py contains an example of how to do this for RTFM. Once you have made the wrapper, don't forget to include its file in silg/envs/__init__.py. A minimal sketch of such a wrapper is given after the method list below.

The wrapper class must subclass silg.envs.base.SILGEnv and implement:

# return the list of text fields in the observation space
def get_text_fields(self):
    ...

# return max number of actions
def get_max_actions(self):
    ...

# return observation space
def get_observation_space(self):
    ...

# resets the environment
def my_reset(self):
    ...

# take a step in the environment
def my_step(self, action):
    ...
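
For illustration, here is a minimal, hypothetical wrapper sketch. The grid size, vocabulary bounds, observation layout, and the bodies of my_reset and my_step are assumptions made for the example; silg/envs/rtfm.py remains the authoritative reference.

import gym
import numpy as np

from silg.envs.base import SILGEnv


class MyEnv(SILGEnv):
    # Hypothetical wrapper around an imaginary 10x10 grid environment with a
    # single instruction text field and 4 discrete actions.

    def get_text_fields(self):
        return ['instruction']

    def get_max_actions(self):
        return 4

    def get_observation_space(self):
        # Illustrative only; real wrappers describe their grid and text tensors here.
        return gym.spaces.Dict({
            'grid': gym.spaces.Box(low=0, high=255, shape=(10, 10), dtype=np.uint8),
            'instruction': gym.spaces.Box(low=0, high=10000, shape=(80,), dtype=np.int64),
        })

    def my_reset(self):
        # Reset the wrapped environment and return its initial observation,
        # mirroring the structure declared in get_observation_space.
        ...

    def my_step(self, action):
        # Advance the wrapped environment; see silg/envs/rtfm.py for the
        # expected return format.
        ...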

Additionally, you may want to implement rendering functions such as render_grid, parse_user_action, and get_user_actions so that the environment can be played with silg_play.

Note: there is currently an implementation detail in which the Torchbeast code considers a "win" to be equivalent to the environment returning a reward > 0.8. We hope to change this in the future (likely by adding another tensor field denoting the win state), but please keep it in mind when implementing your environment. You likely want to keep the reward between -1 and +1, with high rewards (> 0.8) reserved for winning, if you would like to use the training code as-is.
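
If your underlying environment produces rewards outside this range, something like the hypothetical helper below (not part of SILG; the scaling constant is an assumption for illustration) can keep step rewards within [-1, 0.8] and reserve values above 0.8 for wins:

# Hypothetical helper, not part of SILG: squash raw step rewards into
# [-1.0, 0.8] and return 1.0 only on a win, so that the training code's
# "reward > 0.8 means win" convention holds.
def shape_reward(raw_reward, won, max_abs_raw=10.0):
    if won:
        return 1.0
    return max(-1.0, min(0.8, raw_reward / max_abs_raw))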

Changelog

Version 1.0

Initial release.
