Active Testing: Sample-Efficient Model Evaluation

Hi, good to see you here! πŸ‘‹

This is code for "Active Testing: Sample-Efficient Model Evaluation".

Please cite our paper if you find this helpful:

@article{kossen2021active,
  title={{A}ctive {T}esting: {S}ample-{E}fficient {M}odel {E}valuation},
  author={Kossen, Jannik and Farquhar, Sebastian and Gal, Yarin and Rainforth, Tom},
  journal={arXiv:2103.05331},
  year={2021}
}

[animation]

Setup

The requirements.txt can be used to set up a Python environment for this codebase. You can do this, for example, with conda:

conda create -n isactive python=3.8
conda activate isactive
pip install -r requirements.txt

Reproducing the Experiments

  • To reproduce a figure from the paper, first run the appropriate experiments with
sh reproduce/experiments/figure-X.sh
  • Then create the plots with the Jupyter notebook at
notebooks/plots_paper.ipynb
  • (The notebook lets you conveniently select which plots to recreate.)

  • This should put the plots into notebooks/plots/. (A worked example follows at the end of this section.)

  • In the above, replace X by

    • 123 for Figures 1, 2, 3
    • 4 for Figure 4
    • 5 for Figure 5
    • 6 for Figure 6
    • 7 for Figure 7
  • Other notes

    • Synthetic data experiments do not require GPUs and should run on pretty much all recent hardware.
    • All other plots, realistically speaking, require GPUs.
    • We are also happy to share a 4 GB file with results from all experiments presented in the paper.
    • You may want to produce plots 7 and 8 for experiment setups other than those in the paper, i.e. ones you have already computed.
    • Some experiments, e.g. those for Figures 4 or 6, may take a very long time on a single GPU. It may be a good idea to
      • execute the scripts in the sh-files in parallel on multiple GPUs,
      • start multiple runs in parallel and then combine the experiments (see below), or
      • end the runs early / decrease the total number of runs (this can be very reasonable; look at the config files in conf/paper to modify this property).
    • If you want to understand the code, we give a good strategy for approaching it below. (Also, start with the synthetic data experiments: their code is less complex!)
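  • A worked example: to reproduce Figures 1, 2, and 3, run
sh reproduce/experiments/figure-123.sh
    and then open notebooks/plots_paper.ipynb, select the corresponding plots, and look for the output in notebooks/plots/.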

Running A Custom Experiment

  • main.py is the main entry point into this codebase.

    • It executes a total of n_runs active testing experiments for a fixed setup.
    • Each experiment:
      • trains (or loads) one main model,
      • evaluates this model with a variety of acquisition strategies, and
      • computes risk estimates for the points/weights from all acquisition strategies with all risk estimators.
  • This repository uses Hydra to manage configs.

    • Look at conf/config.yaml or one of the experiments in conf/... for default configs and hyperparameters.
    • Experiments are autologged and results saved to ./output/.
  • See notebooks/eplore_experiment.ipynb for some example code on how to evaluate custom experiments. (A minimal Visualiser sketch also follows at the end of this section.)

    • The evaluations use activetesting.visualize.Visualiser, which implements the visualisation methods.
    • Give it a path to an experiment in output/path/to/experiment and explore the methods.
    • If you want to combine data from multiple runs, give it a list of paths.
    • I prefer to load this in Jupyter notebooks, but hey, everybody's different.
  • A guide to the code

    • main.py runs repeated experiments and orchestrates the whole shebang.
      • It iterates through all n_runs and acquisition strategies.
    • experiment.py handles a single experiment.
      • It combines the model, dataset, acquisition strategy, and risk estimators.
    • datasets.py, aquisition.py, loss.py, risk_estimators.py all contain exactly what you would expect!
    • hoover.py is a logging module.
    • models/ contains all models, both scikit-learn and PyTorch.
      • In sk2torch.py we have some code that wraps PyTorch models so that, from the outside, they can be used like scikit-learn models.
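  • A minimal sketch of the Visualiser workflow described above, assuming the experiment path (or list of paths) is passed directly to the constructor; the visualisation methods themselves are best discovered interactively:

from activetesting.visualize import Visualiser

# Point the Visualiser at a single finished experiment
# (replace the placeholder path with your actual output directory).
vis = Visualiser('output/path/to/experiment')

# To combine data from multiple runs, pass a list of paths instead
# (placeholder paths shown here).
# vis = Visualiser([
#     'output/path/to/experiment_1',
#     'output/path/to/experiment_2',
# ])

# From here, explore the visualisation methods, e.g. via tab completion
# in a Jupyter notebook.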

And Finally

Thanks for stopping by!

If you find anything wrong with the code, please contact us.

We are happy to answer any questions related to the code and project.
