Companion code for "Bayesian logistic regression for online recalibration and revision of risk prediction models with performance guarantees"

Overview

Companion code for "Bayesian logistic regression for online recalibration and revision of risk prediction models with performance guarantees"

Installation

We use pip to install the required packages into a Python virtual environment; refer to requirements.txt for the package list. We use nestly + SCons to run simulations. A typical setup is sketched below.
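A minimal setup might look like the following (the virtual-environment tool and directory names are assumptions; only pip, requirements.txt, and SCons are prescribed by this repository):

    python3 -m venv venv                 # create a Python virtual environment (tool choice is an assumption)
    source venv/bin/activate
    pip install -r requirements.txt      # install the packages listed in requirements.txt
    scons .                              # run the nestly + SCons simulation pipelines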

File descriptions

generate_data_single_pop.py -- Simulate a data stream from a single population following a logistic regression model.

  • Inputs:
    • --simulation: string for selecting the type of distribution shift. Options for this argument are the keys in SIM_SETTINGS in constants.py.
  • Outputs:
    • --out-file: pickle file containing the data stream
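For example, a single-population stream could be generated with a command like the following (the simulation name and output path are illustrative; valid simulation names are the keys of SIM_SETTINGS in constants.py):

    python generate_data_single_pop.py --simulation iid --out-file _output/data_single.pkl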

generate_data_two_pop.py -- Simulate a data stream from two subpopulations, each generated using a logistic regression model. Takes similar arguments to generate_data_single_pop.py; the percentage split between the two subpopulations is controlled by the --subpopulations argument.

  • Outputs:
    • --out-file: pickle file containing the data stream
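A hypothetical invocation for the two-subpopulation stream (the simulation name, output path, and the value format of --subpopulations, here a fraction for the first subpopulation, are assumptions):

    python generate_data_two_pop.py --simulation iid --subpopulations 0.2 --out-file _output/data_two_pop.pkl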

create_modeler.py -- Creates a model developer who fits the original prediction model and may propose a continually refitted model at each time point.

  • Inputs:
    • --data-file: pickle file with the entire data stream
    • --simulation: string for selecting the model refitting strategy used by the model developer. Options are to keep the model locked (locked), refit on all accumulated data (cumulative_refit), refit on the latest observations within some window length (boxed; window length specified by --max-box), train an ensemble of the original and cumulative_refit models (combo_refit), or train an ensemble of the original and boxed models (combo_boxed).
  • Outputs:
    • --out-file: pickle file containing the modeler
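A hypothetical invocation that builds a model developer using the boxed refitting strategy (the window length and file paths are illustrative):

    python create_modeler.py --data-file _output/data_single.pkl --simulation boxed --max-box 50 --out-file _output/modeler.pkl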

main.py -- Given the data and the model developer, run online model recalibration/revision using MarBLR and BLR.

  • Inputs:
    • --data-file: pickle file with the entire data stream
    • --model-file: pickle file with the model developer
    • --type-i-regret-factor: Type I regret will be controlled at the rate of this factor times the initial loss of the original model
    • --reference-recalibs: comma-separated string selecting which other online model revisers to run for comparison. Options are locked (no updating at all), adam (ADAM), and cumulativeLR (cumulative logistic regression).
  • Outputs:
    • --obs-scores-file: csv file containing predicted probabilities and observed outcomes on the data stream
    • --history-file: csv file containing the predicted and actual probabilities on a held-out test data stream (only available if the data stream was simulated)
    • --scores-file: csv file containing performance measures on a held-out test data stream (only available if the data stream was simulated)
    • --recalibrators-file: pickle file containing the history of the online model revisers
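Putting the pieces together, a full run might look like the following sketch (the regret factor, reference recalibrators, and file paths are illustrative):

    python main.py \
        --data-file _output/data_single.pkl \
        --model-file _output/modeler.pkl \
        --type-i-regret-factor 0.1 \
        --reference-recalibs locked,adam,cumulativeLR \
        --obs-scores-file _output/obs_scores.csv \
        --history-file _output/history.csv \
        --scores-file _output/scores.csv \
        --recalibrators-file _output/recalibrators.pkl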

Reproducing simulation results

The simulation_recalib folder contains the first set of simulations for online model recalibration. The simulation_revise folder contains the second set of simulations where we perform online logistic revision. The simulation_revise folder contains the third set of simulations where we perform online ensembling of the original model with a continually refitted model. The copd_analysis folder contains code for online model recalibration and revision for the COPD dataset. To reproduce the simulations, run scons .
