Code accompanying the NeurIPS 2021 paper "Generating High-Quality Explanations for Navigation in Partially-Revealed Environments"

Overview

Generating High-Quality Explanations for Navigation in Partially-Revealed Environments

This work presents an approach to explainable navigation under uncertainty.

This is the code release associated with the NeurIPS 2021 paper Generating High-Quality Explanations for Navigation in Partially-Revealed Environments. In this repository, we provide all the code, data, and simulation environments necessary to reproduce our results, including (1) training, (2) large-scale evaluation, (3) explaining robot behavior, and (4) intervening-via-explaining. Below is an example of an explanation automatically generated by our approach in one of our simulated environments, in which the green path on the ground indicates a likely route to the goal:

An example explanation automatically generated by our approach in our simulated 'Guided Maze' environment.

@inproceedings{stein2021xailsp,
  title = {Generating High-Quality Explanations for Navigation in Partially-Revealed Environments},
  author = {Gregory J. Stein},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year = 2021,
  keywords = {explainability; planning under uncertainty; subgoal-based planning; interpretable-by-design},
}

Getting Started

We use Docker (with the NVIDIA runtime) and GNU Make to run our code, so both are required. First, install Docker by following the official Docker install guide. Second, our Docker environments require the NVIDIA container runtime (nvidia-container-toolkit); follow the install instructions on the nvidia-docker GitHub page to get it.
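As a quick sanity check (not part of our pipeline), you can confirm that Docker, Make, and the NVIDIA runtime are all working; the CUDA image tag below is only an example, and any CUDA base image will do:

# Confirm Docker and GNU Make are installed
docker --version
make --version
# Confirm a container can see the GPU via the NVIDIA runtime
# (the image tag is illustrative; substitute any CUDA base image)
docker run --rm --gpus all nvidia/cuda:11.3.1-base-ubuntu20.04 nvidia-smi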

Generating Explanations

We provide a make target that generates two explanations corresponding to those included in the paper. Running the following make targets in a command prompt will generate them:

# Build the repo
make build
# Generate explanation plots
make xai-explanations

For each explanation, the planner is run for a set number of steps, and the agent uses its learned model to generate an explanation justifying its behavior compared to the action the oracle planner specifies as known to lead to the unseen goal. A plot is generated for each explanation and added to ./data/explanations.
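Once the target finishes, the generated figures can be inspected directly (exact filenames depend on the run):

# List the generated explanation plots
ls ./data/explanations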

Re-Running Results Experiments

We also provide targets for re-running the results for each of our simulated experimental setups:

# Build the repo
make build

# Ensure data timestamps are in the correct order
# Only necessary on the first pass
make fix-target-timestamps

# Maze Environments
make xai-maze EXPERIMENT_NAME=base_allSG
make xai-maze EXPERIMENT_NAME=base_4SG SP_LIMIT_NUM=4
make xai-maze EXPERIMENT_NAME=base_0SG SP_LIMIT_NUM=0

# University Building (floorplan) Environments
make xai-floorplan EXPERIMENT_NAME=base_allSG
make xai-floorplan EXPERIMENT_NAME=base_4SG SP_LIMIT_NUM=4
make xai-floorplan EXPERIMENT_NAME=base_0SG SP_LIMIT_NUM=0

# Results Plotting
make xai-process-results

(This can also be done by running ./run.sh)

This code will build the docker container, do nothing (since the results already exist), and then print out the results. GNU Make is clever: it recognizes that the plots already exist in their respective locations for each of the experiments and, as such, does not run any code. To stay within the 100 MB size requirement, the results images for each experiment have been downsampled to thumbnail size. If you would like to reproduce any of our results, delete the plots of interest in the results folder and rerun the above code; make will detect which plots have been deleted and reproduce them. All results plots can be found in their respective folders in ./data/results.
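For example, to regenerate one of the maze results plots, delete it and rerun the corresponding targets (the path below is illustrative; check ./data/results for the actual folder and file names):

# Delete the plot of interest (illustrative path)
rm ./data/results/maze_base_allSG/results_plot.png
# Re-run the experiment; make rebuilds only the missing plot
make xai-maze EXPERIMENT_NAME=base_allSG
make xai-process-results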

The make commands above can be run in parallel by adding -jN (where N is the number of parallel trials) to each command. On our NVIDIA 2060 SUPER, we are limited by GPU RAM, so we limit to N=4. Running with higher N is possible, but our simulator sometimes tries to allocate memory that does not exist and crashes, requiring that the trial be rerun. It is in principle possible to also generate data and train the learned planners from scratch, though (for now) this part of the pipeline has not been as extensively tested; data generation consumes roughly 1.5 TB of disk space, so be sure to have that space available if you wish to run that part of the pipeline. Even with 4 parallel trials, we estimate that running all the above code from scratch (including data generation, training, and evaluation) would take roughly two weeks, half of which is evaluation.
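For instance, on a GPU with memory comparable to our 2060 SUPER, a maze experiment can be run four trials at a time:

# Run the maze experiment with 4 trials in parallel
make -j4 xai-maze EXPERIMENT_NAME=base_allSG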

Code Organization

The src folder contains a number of Python packages necessary for this paper. Most of the algorithmic code that reflects our primary research contributions is spread across three files:

  • xai.planners.subgoal_planner The SubgoalPlanner class encapsulates much of the logic for deciding where the robot should go, including computing which action it should take and which action is the "next best" alternative. This class is the primary means by which the agent collects information and dispatches it elsewhere to make decisions.
  • xai.learning.models.exp_nav_vis_lsp The ExpVisNavLSP class defines the neural network along with the loss terms used to train it. Also critical are the functions included in this file and in xai.utils.data for "updating" the policies to reflect newly estimated subgoal properties, even after the network has been retrained. This class also includes the functionality for computing the delta subgoal properties that primarily define our counterfactual explanations. Virtually all of this functionality heavily leverages PyTorch, which makes it easy to compute the gradients of the expected cost for each of the policies.
  • xai.planners.explanation This file defines the Explanation class that stores the subgoal properties and their deltas (computed via ExpVisNavLSP) and composes these into a natural language explanation and a helpful visualization showing all the information necessary to understand the agent's decision-making process.
Owner
RAIL Group @ George Mason University
Code for the Robotic Anticipatory Intelligence & Learning (RAIL) Group at George Mason University
Implementation of "Efficient Regional Memory Network for Video Object Segmentation" (Xie et al., CVPR 2021).

RMNet This repository contains the source code for the paper Efficient Regional Memory Network for Video Object Segmentation. Cite this work @inprocee

Haozhe Xie 76 Dec 14, 2022
ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs

(Comet-) ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs Paper Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sa

AI2 152 Dec 27, 2022
PyTorch implementation of Convolutional Neural Fabrics http://arxiv.org/abs/1606.02492

PyTorch implementation of Convolutional Neural Fabrics arxiv:1606.02492 There are some minor differences: The raw image is first convolved, to obtain

Anuvabh Dutt 25 Dec 22, 2021
【Arxiv】Exploring Separable Attention for Multi-Contrast MR Image Super-Resolution

SANet Exploring Separable Attention for Multi-Contrast MR Image Super-Resolution Dependencies numpy==1.18.5 scikit_image==0.16.2 torchvision==0.8.1 to

36 Jan 05, 2023
Official implementation for "Image Quality Assessment using Contrastive Learning"

Image Quality Assessment using Contrastive Learning Pavan C. Madhusudana, Neil Birkbeck, Yilin Wang, Balu Adsumilli and Alan C. Bovik This is the offi

Pavan Chennagiri 67 Dec 30, 2022
A Tensorfflow implementation of Attend, Infer, Repeat

Attend, Infer, Repeat: Fast Scene Understanding with Generative Models This is an unofficial Tensorflow implementation of Attend, Infear, Repeat (AIR)

Adam Kosiorek 82 May 27, 2022
Aalto-cs-msc-theses - Listing of M.Sc. Theses of the Department of Computer Science at Aalto University

Aalto-CS-MSc-Theses Listing of M.Sc. Theses of the Department of Computer Scienc

Jorma Laaksonen 3 Jan 27, 2022
Cartoon-StyleGan2 🙃 : Fine-tuning StyleGAN2 for Cartoon Face Generation

Fine-tuning StyleGAN2 for Cartoon Face Generation

Jihye Back 520 Jan 04, 2023
BEAMetrics: Benchmark to Evaluate Automatic Metrics in Natural Language Generation

BEAMetrics: Benchmark to Evaluate Automatic Metrics in Natural Language Generation Installing The Dependencies $ conda create --name beametrics python

7 Jul 04, 2022
Facial recognition project

Facial recognition project documentation Project introduction This project is developed by linuxu. It is a face model recognition project developed ba

Jefferson 2 Dec 04, 2022
ReConsider is a re-ranking model that re-ranks the top-K (passage, answer-span) predictions of an Open-Domain QA Model like DPR (Karpukhin et al., 2020).

ReConsider ReConsider is a re-ranking model that re-ranks the top-K (passage, answer-span) predictions of an Open-Domain QA Model like DPR (Karpukhin

Facebook Research 47 Jul 26, 2022
My course projects for the 2021 Spring Machine Learning course at the National Taiwan University (NTU)

ML2021Spring There are my projects for the 2021 Spring Machine Learning course at the National Taiwan University (NTU) Course Web : https://speech.ee.

Ding-Li Chen 15 Aug 29, 2022
Hyperparameter Optimization for TensorFlow, Keras and PyTorch

Hyperparameter Optimization for Keras Talos • Key Features • Examples • Install • Support • Docs • Issues • License • Download Talos radically changes

Autonomio 1.6k Dec 15, 2022
Official PyTorch implementation of "Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks" (AAAI 2022)

Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks This is the code for reproducing the results of th

2 Dec 27, 2021
Predict halo masses from simulations via graph neural networks

HaloGraphNet Predict halo masses from simulations via Graph Neural Networks. Given a dark matter halo and its galaxies, creates a graph with informati

Pablo Villanueva Domingo 20 Nov 15, 2022
Using Random Effects to Account for High-Cardinality Categorical Features and Repeated Measures in Deep Neural Networks

LMMNN Using Random Effects to Account for High-Cardinality Categorical Features and Repeated Measures in Deep Neural Networks This is the working dire

Giora Simchoni 10 Nov 02, 2022
Wide Residual Networks (WideResNets) in PyTorch

Wide Residual Networks (WideResNets) in PyTorch WideResNets for CIFAR10/100 implemented in PyTorch. This implementation requires less GPU memory than

Jason Kuen 296 Dec 27, 2022
Count GitHub Stars ⭐

Count GitHub Stars per Day ⭐ Track GitHub stars per day over a date range to measure the open-source popularity of different repositories. Requirement

Ultralytics 20 Nov 20, 2022
CATE: Computation-aware Neural Architecture Encoding with Transformers

CATE: Computation-aware Neural Architecture Encoding with Transformers Code for paper: CATE: Computation-aware Neural Architecture Encoding with Trans

16 Dec 27, 2022
This repository contains the scripts for downloading and validating scripts for the documents

HC4: HLTCOE CLIR Common-Crawl Collection This repository contains the scripts for downloading and validating scripts for the documents. Document ids,

JHU Human Language Technology Center of Excellence 6 Jun 07, 2022