Code for "Solving Graph-based Public Good Games with Tree Search and Imitation Learning"

Overview

This is the code for the paper Solving Graph-based Public Good Games with Tree Search and Imitation Learning by Victor-Alexandru Darvariu, Stephen Hailes and Mirco Musolesi, presented at NeurIPS 2021. If you use this code, please consider citing:

@inproceedings{darvariu_solving_2021,
  title = {Solving Graph-based Public Good Games with Tree Search and Imitation Learning},
  author = {Darvariu, Victor-Alexandru and Hailes, Stephen and Musolesi, Mirco},
  booktitle = {35th Conference on Neural Information Processing Systems (NeurIPS 2021)},
  year = {2021},
}

License

MIT.

Prerequisites

Currently tested on Linux and macOS (specifically, CentOS 7.4.1708 and macOS Big Sur 11.2.3); it can also be adapted to Windows through WSL. The host machine requires the NVIDIA CUDA toolkit version 9.0 or above (tested with NVIDIA driver version 384.81).

The project makes heavy use of Docker; see the official Docker documentation for how to install it. Tested with Docker 19.03. The use of Docker largely does away with dependency and setup headaches, making it significantly easier to reproduce the reported results.
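As a quick sanity check before building the containers, you can verify that the NVIDIA driver and Docker are visible on the host:

nvidia-smi               # should report the driver version (384.81 or newer)
docker --version         # tested with Docker 19.03
docker-compose --version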

Configuration

The Docker setup uses Unix groups to control permissions. You can reuse an existing group that you are a member of, or create a new group with groupadd -g GID GNAME and add your user to it with usermod -a -G GNAME MYUSERNAME.
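For example, assuming an illustrative group name relnet and GID 1100 (adjust these to your setup; the commands typically require root privileges):

sudo groupadd -g 1100 relnet
sudo usermod -a -G relnet $USER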

Create a file relnet.env at the root of the project (see relnet_example.env) and adjust the paths within: these paths determine where the data generated by the containers will be stored. Also specify the group ID and name created or selected above.
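A minimal sketch of what relnet.env might contain is shown below. The authoritative variable names are those in relnet_example.env; the experiment directory variables are the ones referenced later in this README, while the group variable names used here are illustrative placeholders.

# illustrative values only -- see relnet_example.env for the exact variable names
RN_EXPERIMENT_DIR=/home/john/experiments/relnet
RN_EXPERIMENT_DATA_DIR=/home/john/experiments/relnet/data
# group ID and name created or selected above (placeholder names)
RN_GID=1100
RN_GNAME=relnet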

Add the following lines to your .bashrc, replacing /home/john/git/relnet with the path where the repository is cloned.

export RN_SOURCE_DIR='/home/john/git/relnet'
set -a
. $RN_SOURCE_DIR/relnet.env
set +a

export PATH=$PATH:$RN_SOURCE_DIR/scripts

Make the scripts executable (e.g. chmod u+x scripts/*) the first time after cloning the repository, and run apply_permissions.sh to create the necessary directories and set their permissions.
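For example, from the repository root (assuming the PATH export above has taken effect):

chmod u+x scripts/*
apply_permissions.sh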

Managing the containers

Some scripts are provided for convenience. To build the containers (note that this takes a significant amount of time, e.g. around 2 hours, as some packages are built from source):

update_container.sh

To start them:

manage_container_gpu.sh up
manage_container.sh up

To stop them:

manage_container_gpu.sh stop
manage_container.sh stop

To purge the queue and restart the containers (useful for killing tasks that were launched):

purge_and_restart.sh

Adjusting the number of workers and threads

To take maximum advantage of your machine's capacity, you may want to tweak the number of threads for the GPU and CPU workers. This configuration is provided in projectconfig.py. Additionally, you may want to enforce certain memory limits for your workers to avoid OOM errors. This can be tweaked in docker-compose.yml and manage_container_gpu.sh.
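While tuning these limits, it can be helpful to check what the containers are actually using. docker stats is a standard Docker command (not specific to this project) that reports live CPU and memory usage per container:

docker stats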

It is also relatively straightforward to add more workers from different machines you control. For this, you will need to mount the volumes on network-attached storage (i.e., make sure the paths provided in relnet.env are network-accessible) and adjust the locations of the backend and queue in projectconfig.py to a network location instead of localhost. On the other machines, only start the worker container (see e.g. manage_container.sh), as sketched below.
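A sketch of the secondary-machine side, assuming the repository has been cloned there and relnet.env points at the network-accessible paths (manage_container.sh may need adapting so that only the worker is brought up):

manage_container.sh up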

Setting up graph data

Synthetic data will be automatically generated when the experiments are run, and stored under $RN_EXPERIMENT_DIR/stored_graphs.

Accessing the services

There are several services running on the manager node.

  • Jupyter notebook server: http://localhost:8888
  • Flower for queue statistics: http://localhost:5555
  • Tensorboard (currently disabled due to its large memory footprint): http://localhost:6006
  • RabbitMQ management: http://localhost:15672

The first time Jupyter is accessed, it will prompt for a token to enable password configuration; it can be obtained by running docker exec -it relnet-manager /bin/bash -c "jupyter notebook list".

Accessing experiment data and results database

Experiment data and results are stored partly as files (under your configured $RN_EXPERIMENT_DATA_DIR) and partly in a MongoDB database. To access the MongoDB database with a GUI, you can use a MongoDB client such as Robo3T and point it to localhost:27017 (or connect from the command line, as shown below).
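Alternatively, if you have the MongoDB shell installed on the host, you can connect to the same database from the command line:

mongo --host localhost --port 27017    # use mongosh instead on newer MongoDB installations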

Some functionality for inserting and retrieving data is provided in relnet/evaluation/storage.py; you can use it in, e.g., analysis notebooks.

Running experiments

Experiments are launched from the manager container and processed in parallel by the workers. The file relnet/evaluation/experiment_conditions.py contains the configuration for the experiments reported in the paper, but you may modify e.g. the agents, objective functions and hyperparameters to suit your needs.

Then, you can launch all the experiments as follows:

Part 1: Hyperparameter optimization & evaluation for all approaches except GIL

run_part1.sh

Part 2: Data collection for GIL using the UCT algorithm

run_part2.sh

Part 3: Training & hyperparameter optimization for GIL

run_part3.sh

Monitoring experiments

  • You can navigate to http://localhost:5555 for the Flower interface, which shows the progress of the tasks being processed in the queue. You may also check the logs for both the manager and the workers under $RN_EXPERIMENT_DATA_DIR/logs (see the example below).
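For example, to follow the logs from the host (the exact file names depend on your configuration and the tasks being run):

ls $RN_EXPERIMENT_DATA_DIR/logs
tail -f $RN_EXPERIMENT_DATA_DIR/logs/<logfile>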

Reproducing the results

Jupyter notebooks are used to perform the data analysis and produce the tables and figures. Navigate to http://localhost:8888, then open the notebooks folder.

All tables and result figures can be obtained by opening the GGNN_Evaluation.ipynb notebook, selecting the py3-relnet kernel and running all cells. The resulting .pdf figures and .tex tables can be found under $RN_EXPERIMENT_DIR/aggregate. Additional notebooks are provided for analyzing the results of hyperparameter optimization:

  • GGNN_Hyperparam_Optimisation.ipynb for UCT
  • GGNN_Hyperparam_Optimisation_IL.ipynb for GIL

Problems with the Jupyter kernel

If the py3-relnet kernel is not found, try reinstalling it by running docker exec -it -u 0 relnet-manager /bin/bash -c "source activate relnet-cenv; python -m ipykernel install --user --name relnet --display-name py3-relnet".
