Code for "Universal inference meets random projections: a scalable test for log-concavity"

Overview

How to use this repository

This repository contains code to replicate the results of "Universal inference meets random projections: a scalable test for log-concavity" by Robin Dunn, Larry Wasserman, and Aaditya Ramdas.

Folder contents

  • batch_scripts: Contains SLURM batch scripts to run the simulations. Scripts are labeled by the figure for which their simulations produce data. These scripts run the code in sim_code, using the parameters in sim_params.
  • data: Output of simulations.
  • plot_code: Reads simulation outputs from data and reproduces all figures in the paper. Plots are saved to the plots folder.
  • plots: Contains all plots in the paper.
  • sim_code: R code to run simulations. Simulation output is saved to the data folder.
  • sim_params: Parameters for simulations. Each row contains a single choice of parameters. The scripts in sim_code read in these files, and the scripts in batch_scripts loop through all choices of parameters.

How do I ...

Produce the simulations for a given figure?

In the batch_scripts folder, scripts are labeled by the figure for which they simulate data. Run all batch scripts corresponding to the figure of interest. The allocated run time in each script is based on the parameter choice with the longest run time, so many scripts will finish sooner. Each script in sim_code displays a progress bar that estimates the remaining run time, so you may wish to start running these files outside of a batch submission to gauge the run time on your computing system.
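
For example, on a SLURM cluster each of these scripts can be submitted with sbatch (shown here for the Figure 1 script described in the example below); each submission launches a job array with one task per row of the corresponding file in sim_params.

    sbatch batch_scripts/fig01_fully_NP_randproj.sh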

Alternatively, to run the code without a job submission system, click on any .sh file. The Rscript lines can be run in a terminal, replacing $SLURM_ARRAY_TASK_ID with each index in the batch array.
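
As a minimal sketch, the Figure 1 simulations could be run in a plain shell loop as below. This assumes the Rscript line passes the array index as the script's only argument; copy the exact Rscript call from the .sh file you are reproducing.

    # Run all 30 parameter settings for Figure 1 sequentially (no SLURM).
    for i in $(seq 1 30); do
        Rscript sim_code/fig01_fully_NP_randproj.R "$i"
    done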

The simulation output will be stored in the data folder, with one dataset per choice of parameters. To combine these datasets into a single dataset (as they currently appear in data), run the code in sim_code/combine_datasets.R.
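
Assuming combine_datasets.R runs non-interactively from the repository root, this is a single command:

    Rscript sim_code/combine_datasets.R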

Example: batch_scripts/fig01_fully_NP_randproj.sh

This script reproduces the universal test simulations for Figure 1 by running the R script at sim_code/fig01_fully_NP_randproj.R, which reads its parameters from sim_params/fig01_fully_NP_randproj_params.csv. There are 30 sets of parameters in total. The results will be stored in the data folder, with names such as fig01_fully_NP_randproj_1.csv, ..., fig01_fully_NP_randproj_30.csv. To combine these files into a single .csv file, run the code at sim_code/combine_datasets.R.

Examine the code for a given simulation?

The R scripts in sim_code are labeled by the figure for which they simulate data. Click on all files corresponding to a given figure.

Reproduce a figure without rerunning the simulations?

The R scripts in plot_code are labeled by their corresponding plots. They read in the necessary simulated data from the data folder and output the figures to the plots folder.
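
For example, a figure could be regenerated with a call such as the following (the script name here is illustrative; use the actual file names in plot_code):

    Rscript plot_code/fig01_plot.R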

Owner
Robin Dunn
Principal Statistical Consultant, Novartis
PhD in Statistics, Carnegie Mellon, 2021