Implicit Deep Adaptive Design (iDAD)

This code supports the NeurIPS paper 'Implicit Deep Adaptive Design: Policy-Based Experimental Design without Likelihoods'.

@article{ivanova2021implicit,
  title={Implicit Deep Adaptive Design: Policy-Based Experimental Design without Likelihoods},
  author={Ivanova, Desi R. and Foster, Adam and Kleinegesse, Steven and Gutmann, Michael and Rainforth, Tom},
  journal={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2021}
}

Computing infrastructure requirements

We have tested this codebase on Linux (Ubuntu x86_64) and macOS (Big Sur v11.2.3) with Python 3.8. To train iDAD networks, we recommend the use of a GPU. We used one GeForce RTX 3090 GPU on a machine with 126 GiB of CPU memory and 40 CPU cores.

Installation

  1. Ensure that Python and conda are installed.
  2. Create and activate a new conda virtual environment as follows
     conda create -n idad_code
     conda activate idad_code
  3. Install the correct version of PyTorch, following the instructions at pytorch.org. For our experiments we used torch==1.8.0 with CUDA version 11.1.
  4. Install the remaining package requirements using pip install -r requirements.txt.
  5. Install the torchsde package from its repository: pip install git+https://github.com/google-research/torchsde.git.
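
After installation, a quick sanity check along the following lines confirms that PyTorch, torchsde and (if applicable) CUDA are visible from the environment. This is a minimal sketch, not part of the repository; the version strings depend on your install.

# check_setup.py -- hypothetical helper, not part of the repository
import torch
import torchsde  # installed from the torchsde repository in step 5

print("torch version:", torch.__version__)            # we used torch==1.8.0
print("CUDA available:", torch.cuda.is_available())   # True if a GPU can be used for training
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))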

MLFlow

We use mlflow to log metrics and store network parameters. Each experiment run is stored in a directory mlruns, which is created automatically. Each experiment is assigned a numerical ID and each run gets a unique hash. The iDAD networks are saved in ./mlruns/<experiment_id>/<run_id>/artifacts; this path is printed at the end of each training run.
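
To locate stored runs and artifacts programmatically rather than from the printed paths, something along the following lines should work. This is a minimal sketch using the standard mlflow tracking API, not part of the repository; the experiment ID "1" is a placeholder for the numerical ID assigned to your experiment.

# list_runs.py -- hypothetical helper, not part of the repository
import mlflow

mlflow.set_tracking_uri("./mlruns")  # the local store created by the training scripts
client = mlflow.tracking.MlflowClient()

# Replace "1" with the numerical experiment ID printed during training.
for run in client.search_runs(experiment_ids=["1"]):
    print("run:", run.info.run_id)
    for artifact in client.list_artifacts(run.info.run_id):
        print("  artifact:", artifact.path)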

Location Finding Experiment

To train an iDAD network with the InfoNCE bound to locate 2 sources in 2D, using the approach in the paper, execute the following command (here and throughout, replace <DEVICE> with a PyTorch device string such as cuda:0 or cpu)

python3 location_finding.py \
    --num-steps 100000 \
    --physical-dim 2 \
    --num-sources 2 \
    --lr 0.0005 \
    --num-experiments 10 \
    --encoding-dim 64 \
    --hidden-dim 512 \
    --mi-estimator InfoNCE \
    --device <DEVICE>

To train an iDAD network with the NWJ bound, using the approach in the paper, execute the command

python3 location_finding.py \
    --num-steps 100000 \
    --physical-dim 2 \
    --num-sources 2 \
    --lr 0.0005 \
    --num-experiments 10 \
    --encoding-dim 64 \
    --hidden-dim 512 \
    --mi-estimator NWJ \
    --device <DEVICE>

To run the static MINEBED baseline, use the following

python3 location_finding.py \
    --num-steps 100000 \
    --physical-dim 2 \
    --num-sources 2 \
    --lr 0.0001 \
    --num-experiments 10 \
    --encoding-dim 8 \
    --hidden-dim 512 \
    --design-arch static \
    --critic-arch cat \
    --mi-estimator NWJ \
    --device <DEVICE>

To run the static SG-BOED baseline, use the following

python3 location_finding.py \
    --num-steps 100000 \
    --physical-dim 2 \
    --num-sources 2 \
    --lr 0.0005 \
    --num-experiments 10 \
    --encoding-dim 8 \
    --hidden-dim 512 \
    --design-arch static \
    --critic-arch cat \
    --mi-estimator InfoNCE \
    --device <DEVICE>

To run the adaptive (explicit likelihood) DAD baseline, use the following

python3 location_finding.py \
    --num-steps 100000 \
    --physical-dim 2 \
    --num-sources 2 \
    --lr 0.0005 \
    --num-experiments 10 \
    --encoding-dim 32 \
    --hidden-dim 512 \
    --mi-estimator sPCE \
    --design-arch sum \
    --device <DEVICE>

To evaluate the resulting networks, run the following command

python3 eval_sPCE.py --experiment-id <ID>
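
Here <ID> is the numerical mlflow experiment ID described in the MLFlow section above. If you are unsure which IDs exist, the subdirectories of ./mlruns can be listed directly; the sketch below only assumes the default local mlruns store and is not part of the repository.

# list_experiment_ids.py -- hypothetical helper, not part of the repository
from pathlib import Path

# Each numeric subdirectory of ./mlruns corresponds to one mlflow experiment ID.
for exp_dir in sorted(Path("mlruns").iterdir()):
    if exp_dir.is_dir() and exp_dir.name.isdigit():
        print("experiment id:", exp_dir.name)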

To evaluate a random design baseline (requires no pre-training):

python3 baselines_locfin_nontrainable.py \
    --policy random \
    --physical-dim 2 \
    --num-experiments-to-perform 5 10 \
    --device <DEVICE>

To run the variational baseline (note: it takes a very long time), run:

python3 baselines_locfin_variational.py \
    --num-histories 128 \
    --num-experiments 10 \
    --physical-dim 2 \
    --lr 0.001 \
    --num-steps 5000 \
    --device <DEVICE>

Copy path_to_artifact and pass it to the evaluation script:

python3 eval_sPCE_from_source.py \
    --path-to-artifact <path_to_artifact> \
    --num-experiments-to-perform 5 10 \
    --device <DEVICE>

Pharmacokinetic Experiment

To train an iDAD network with the InfoNCE bound, using the approach in the paper, execute the command

python3 pharmacokinetic.py \
    --num-steps 100000 \
    --lr 0.0001 \
    --num-experiments 5 \
    --encoding-dim 32 \
    --hidden-dim 512 \
    --mi-estimator InfoNCE \
    --device <DEVICE>

To train an iDAD network with the NWJ bound, using the approach in the paper, execute the command

python3 pharmacokinetic.py \
    --num-steps 100000 \
    --lr 0.0001 \
    --num-experiments 5 \
    --encoding-dim 32 \
    --hidden-dim 512 \
    --mi-estimator NWJ \
    --gamma 0.5 \
    --device <DEVICE>

To run the static MINEBED baseline, use the following

python3 pharmacokinetic.py \
    --num-steps 100000 \
    --lr 0.001 \
    --num-experiments 5 \
    --encoding-dim 8 \
    --hidden-dim 512 \
    --design-arch static \
    --critic-arch cat \
    --mi-estimator NWJ \
    --device <DEVICE>

To run the static SG-BOED baseline, use the following

python3 pharmacokinetic.py \
    --num-steps 100000 \
    --lr 0.0005 \
    --num-experiments 5 \
    --encoding-dim 8 \
    --hidden-dim 512 \
    --design-arch static \
    --critic-arch cat \
    --mi-estimator InfoNCE \
    --device <DEVICE>

To run the adaptive (explicit likelihood) DAD baseline, use the following

python3 pharmacokinetic.py \
    --num-steps 100000 \
    --lr 0.0001 \
    --num-experiments 5 \
    --encoding-dim 32 \
    --hidden-dim 512 \
    --mi-estimator sPCE \
    --design-arch sum \
    --device <DEVICE>

To evaluate the resulting networks, run the following command

python3 eval_sPCE.py --experiment-id <ID>

To evaluate a random design baseline (requires no pre-training):

python3 baselines_pharmaco_nontrainable.py \
    --policy random \
    --num-experiments-to-perform 5 10 \
    --device <DEVICE>

To evaluate an equal interval baseline (requires no pre-training):

python3 baselines_pharmaco_nontrainable.py \
    --policy equal_interval \
    --num-experiments-to-perform 5 10 \
    --device <DEVICE>

To run the variational baseline (note: it takes a very long time), run:

python3 baselines_pharmaco_variational.py \
    --num-histories 128 \
    --num-experiments 10 \
    --lr 0.001 \
    --num-steps 5000 \
    --device <DEVICE>

Copy path_to_artifact and pass it to the evaluation script:

python3 eval_sPCE_from_source.py \
    --path-to-artifact <path_to_artifact> \
    --num-experiments-to-perform 5 10 \
    --device <DEVICE>

SIR experiment

For the SIR experiments, please first generate an initial training set and a test set:

python3 epidemic_simulate_data.py \
    --num-samples=100000 \
    --device <DEVICE>

To train an iDAD network with the InfoNCE bound, using the approach in the paper, execute the command

python3 epidemic.py \
    --num-steps 100000 \
    --num-experiments 5 \
    --lr 0.0005 \
    --hidden-dim 512 \
    --encoding-dim 32 \
    --mi-estimator InfoNCE \
    --design-transform ts \
    --device <DEVICE>

To train an iDAD network with the NWJ bound, execute the command

python3 epidemic.py \
    --num-steps 100000 \
    --num-experiments 5 \
    --lr 0.0005 \
    --hidden-dim 512 \
    --encoding-dim 32 \
    --mi-estimator NWJ \
    --design-transform ts \
    --device <DEVICE>

To run the static SG-BOED baseline, run

python3 epidemic.py \
    --num-steps 100000 \
    --num-experiments 5 \
    --lr 0.005 \
    --hidden-dim 512 \
    --encoding-dim 32 \
    --design-arch static \
    --critic-arch cat \
    --design-transform iid \
    --mi-estimator InfoNCE \
    --device <DEVICE>

To run the static MINEBED baseline, run

python3 epidemic.py \
    --num-steps 100000 \
    --num-experiments 5 \
    --lr 0.001 \
    --hidden-dim 512 \
    --encoding-dim 32 \
    --design-arch static \
    --critic-arch cat \
    --design-transform iid \
    --mi-estimator NWJ \
    --device <DEVICE>

To train a critic with random designs (to evaluate the random design baseline):

python3 epidemic.py \
    --num-steps 100000 \
    --num-experiments 5 \
    --lr 0.005 \
    --hidden-dim 512 \
    --encoding-dim 32 \
    --design-arch random \
    --critic-arch cat \
    --design-transform iid \
    --device <DEVICE>

To train a critic with equal interval designs, which is then used to evaluate the equal interval baseline, run the following

python3 epidemic.py \
    --num-steps 100000 \
    --num-experiments 5 \
    --lr 0.001 \
    --hidden-dim 512 \
    --encoding-dim 32 \
    --design-arch equal_interval \
    --critic-arch cat \
    --design-transform iid \
    --device <DEVICE>

Finally, to evaluate the different methods, run

python3 eval_epidemic.py \
    --experiment-id <ID> \
    --device <DEVICE>