Repo for "Physion: Evaluating Physical Prediction from Vision in Humans and Machines" submission to NeurIPS 2021 (Datasets & Benchmarks track)

Overview

Physion: Evaluating Physical Prediction from Vision in Humans and Machines

[Animation of the eight Physion scenarios]

This repo contains code and data to reproduce the results in our paper, Physion: Evaluating Physical Prediction from Vision in Humans and Machines. Please see below for details on how to download the Physion dataset, replicate our modeling & human experiments, and run the statistical analyses that reproduce our results.

  1. Downloading the Physion dataset
  2. Dataset generation
  3. Modeling experiments
  4. Human experiments
  5. Comparing models and humans

Downloading the Physion dataset

Downloading the Physion test set (a.k.a. stimuli)

PhysionTest-Core (270 MB)

PhysionTest-Core is all you need to evaluate humans and models on exactly the same test stimuli used in our paper.

It contains eight directories, one for each scenario type (collide, contain, dominoes, drape, drop, link, roll, support).

Each of these directories contains three subdirectories:

  • maps: Contains PNG segmentation maps for each test stimulus, indicating the location of the agent object in red and the patient object in yellow (see the sketch below for one way to parse these).
  • mp4s: Contains the MP4 video files presented to human participants. The agent and patient objects appear in random colors.
  • mp4s-redyellow: Contains the MP4 video files passed into models. The agent and patient objects consistently appear in red and yellow, respectively.

Download URL: https://physics-benchmarking-neurips2021-dataset.s3.amazonaws.com/Physion.zip.
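For example, here is a minimal sketch of recovering the agent and patient masks from one of the maps PNGs. It assumes the agent and patient are encoded as (near-)pure red and yellow pixels, and the filename is hypothetical:

```python
import numpy as np
from PIL import Image

# Hypothetical path to one test stimulus's segmentation map.
img = np.array(Image.open("maps/dominoes_0001_map.png").convert("RGB"))

# Assumes agent pixels are (near-)pure red and patient pixels are
# (near-)pure yellow; adjust the thresholds if the encoding differs.
agent_mask = (img[..., 0] > 200) & (img[..., 1] < 80) & (img[..., 2] < 80)
patient_mask = (img[..., 0] > 200) & (img[..., 1] > 200) & (img[..., 2] < 80)

print(f"agent covers {agent_mask.mean():.1%} of pixels, "
      f"patient covers {patient_mask.mean():.1%}")
```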

PhysionTest-Complete (380 GB)

PhysionTest-Complete is what you want if you need more detailed metadata for each test stimulus.

Each stimulus is encoded in an HDF5 file containing comprehensive information regarding depth, surface normals, optical flow, and segmentation maps associated with each frame of each trial, as well as other information about the physical states of objects at each time step.

Download URL: https://physics-benchmarking-neurips2021-dataset.s3.amazonaws.com/PhysionTestHDF5.tar.gz.
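As a starting point, here is a minimal sketch for inspecting one of these HDF5 files with h5py. The filename is hypothetical, and since the exact group/dataset names are not documented here, the sketch prints the file's hierarchy so you can discover them:

```python
import h5py

# Hypothetical filename; substitute any stimulus from PhysionTestHDF5.
with h5py.File("pilot_dominoes_0001.hdf5", "r") as f:
    # Print the full group/dataset hierarchy (datasets show their shapes).
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))

    # Example access pattern once you know the keys (names are assumptions):
    # depth = f["frames/0000/images/_depth"][:]
```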

You can also download the testing data for individual scenarios from the table in the next section.

Downloading the Physion training set

Downloading PhysionTrain-Dynamics

PhysionTrain-Dynamics contains the full dataset used to train the dynamics module of models benchmarked in our paper. It consists of approximately 2K stimuli per scenario type.

Download URL (770 MB): https://physics-benchmarking-neurips2021-dataset.s3.amazonaws.com/PhysionTrainMP4s.tar.gz

Downloading PhysionTrain-Readout

PhysionTrain-Readout contains a separate dataset used for training the object-contact prediction (OCP) module for models pretrained on the PhysionTrain-Dynamics dataset. It consists of 1K stimuli per scenario type.

The agent and patient objects in each of these readout stimuli consistently appear in red and yellow, respectively (as in the mp4s-redyellow examples from PhysionTest-Core above).

NB: Code for using these readout sets to benchmark any pretrained model (not just models trained on the Physion training sets) will be released prior to publication.

Download URLs for complete PhysionTrain-Dynamics and PhysionTrain-Readout:

| Scenario | Dynamics Training Set | Readout Training Set | Test Set |
| --- | --- | --- | --- |
| Dominoes | Dominoes_dynamics_training_HDF5s | Dominoes_readout_training_HDF5s | Dominoes_testing_HDF5s |
| Support | Support_dynamics_training_HDF5s | Support_readout_training_HDF5s | Support_testing_HDF5s |
| Collide | Collide_dynamics_training_HDF5s | Collide_readout_training_HDF5s | Collide_testing_HDF5s |
| Contain | Contain_dynamics_training_HDF5s | Contain_readout_training_HDF5s | Contain_testing_HDF5s |
| Drop | Drop_dynamics_training_HDF5s | Drop_readout_training_HDF5s | Drop_testing_HDF5s |
| Roll | Roll_dynamics_training_HDF5s | Roll_readout_training_HDF5s | Roll_testing_HDF5s |
| Link | Link_dynamics_training_HDF5s | Link_readout_training_HDF5s | Link_testing_HDF5s |
| Drape | Drape_dynamics_training_HDF5s | Drape_readout_training_HDF5s | Drape_testing_HDF5s |

Dataset generation

This repo depends on outputs from tdw_physics.

Specifically, tdw_physics is used to generate the dataset of physical scenarios (a.k.a. stimuli), including both the training datasets used to train physical-prediction models and the test datasets used to measure prediction accuracy in both physical-prediction models and human participants.

Instructions for using the ThreeDWorld simulator to regenerate datasets used in our work can be found here. Links for downloading the Physion testing, training, and readout fitting datasets can be found here.

Modeling experiments

The modeling component of this repo depends on the physopt repo, which implements an interface through which a wide variety of physics-prediction models from the literature (be they neural networks or otherwise) can be adapted to accept the inputs provided by our training and testing datasets and to produce outputs for comparison with our human measurements.

physopt also contains code for model training and evaluation. Specifically, physopt implements three train/test protocols:

  • The only protocol, in which each candidate physics model architecture is trained -- using that model's native loss function, as specified by the model's authors -- separately on each of the scenarios listed above (e.g., "dominoes", "support", etc.). This produces eight separately trained models per candidate architecture (one for each scenario). Each of these models is then tested against humans on the testing data for that scenario.
  • An all protocol, in which each candidate physics architecture is trained on mixed data from all of the scenarios simultaneously (again, using that model's native loss function). This single model is then tested and compared to humans separately on each scenario.
  • An all-but-one protocol, in which each candidate physics architecture is trained on mixed data drawn from all but one scenario -- separately for each possible choice of held-out scenario. This produces eight separately trained models per candidate architecture (one for each held-out scenario). Each of these models is then tested against humans on the testing data for its held-out scenario. (A sketch of the resulting train/test splits follows this list.)
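The three protocols amount to different ways of pairing training scenarios with a held-out test scenario. Here is a minimal, purely illustrative sketch of the resulting splits (this is not physopt's actual interface):

```python
SCENARIOS = ["dominoes", "support", "collide", "contain",
             "drop", "roll", "link", "drape"]

def protocol_splits(protocol):
    """Yield (train_scenarios, test_scenario) pairs for each protocol."""
    if protocol == "only":
        for s in SCENARIOS:
            yield [s], s
    elif protocol == "all":
        for s in SCENARIOS:
            yield SCENARIOS, s
    elif protocol == "all-but-one":
        for s in SCENARIOS:
            yield [t for t in SCENARIOS if t != s], s

for train, test in protocol_splits("all-but-one"):
    print(f"train on {len(train)} scenarios, test on {test}")
```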

Results from each of the three protocols are separately compared to humans (as described below in the section on comparing models and humans). All model-human comparisons are carried out using a representation-learning paradigm: models are first trained on their native loss functions (as encoded by the models' original authors), and the trained models are then evaluated on the specific Physion prediction task of whether the red agent object contacts the yellow patient object. This evaluation is carried out by training a "readout," implemented as linear logistic regression. Readouts are always trained in a per-scenario fashion.
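For instance, here is a minimal sketch of such a readout, assuming you have already extracted fixed feature vectors from a pretrained dynamics model for the readout stimuli. The arrays below are random placeholders standing in for those features and labels, not physopt's actual API:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholders: per-stimulus features from a frozen pretrained model,
# and binary labels for whether the red agent contacts the yellow patient.
rng = np.random.default_rng(0)
train_features = rng.normal(size=(1000, 256))   # e.g., 1K readout stimuli
train_labels = rng.integers(0, 2, size=1000)
test_features = rng.normal(size=(150, 256))

# Linear logistic-regression readout, fit per scenario on frozen features.
readout = LogisticRegression(max_iter=1000)
readout.fit(train_features, train_labels)
predictions = readout.predict_proba(test_features)[:, 1]
```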

Currently, physopt implements the following specific physics prediction models:

| Model Name | Our Code Link | Original Paper | Description |
| --- | --- | --- | --- |
| SVG | | Denton and Fergus 2018 | Image-like latent |
| OP3 | | Veerapaneni et al. 2020 | |
| CSWM | | Kipf et al. 2020 | |
| RPIN | | Qi et al. 2021 | |
| pVGG-mlp | | | |
| pVGG-lstm | | | |
| pDEIT-mlp | | Touvron et al. 2020 | |
| pDEIT-lstm | | | |
| GNS | | Sanchez-Gonzalez et al. 2020 | |
| GNS-R | | | |
| DPI | | Li et al. 2019 | |

Human experiments

This repo contains code to conduct the human behavioral experiments reported in this paper, as well as analyze the resulting data from both human and modeling experiments.

The details of the experimental design and analysis plan are documented in our study preregistration, contained within this repository. The format of this preregistration is adapted from the templates provided by the Open Science Framework, and the preregistration is under the same version control as the rest of the codebase for this project.

Here is what each main directory in this repo contains:

  • experiments: This directory contains code to run the online human behavioral experiments reported in this paper. More detailed documentation of this code can be found in the README file nested within the experiments subdirectory.
  • analysis (a.k.a. notebooks): This directory contains our analysis Jupyter/Rmd notebooks. This repo assumes you have also imported model evaluation results from physopt.
  • results: This directory contains "intermediate" results of modeling/human experiments. It contains three subdirectories: csv, plots, and summary.
    • /results/csv/ contains CSV files with tidy dataframes of "raw" data.
    • /results/plots/ contains .pdf/.png plots, a selection of which is then polished and formatted for inclusion in the paper using Adobe Illustrator.
    • Important: Before pushing any CSV files containing human behavioral data to a public code repository, triple-check that the data are properly anonymized. This means no bare AMT Worker IDs or Prolific participant IDs.
  • stimuli: This directory contains any download/preprocessing scripts for data (a.k.a. stimuli) that are the inputs to human behavioral experiments. This repo assumes you have generated stimuli using tdw_physics. This repo uses code in this directory to upload stimuli to AWS S3 and to generate metadata that controls the timeline of stimulus presentation in the human behavioral experiments (see the upload sketch after this list).
  • utils: This directory is meant to contain any files containing general helper functions.
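As an illustration of the upload step mentioned under stimuli, here is a minimal sketch using boto3. The bucket name and local paths are hypothetical, and the actual scripts in stimuli/ may differ:

```python
import boto3
from pathlib import Path

s3 = boto3.client("s3")
bucket = "my-stimulus-bucket"  # hypothetical bucket name

# Upload every MP4 in a local stimulus directory, made publicly readable
# so the online experiment can stream them.
for mp4 in Path("stimuli/mp4s").glob("*.mp4"):
    s3.upload_file(str(mp4), bucket, f"physion/{mp4.name}",
                   ExtraArgs={"ACL": "public-read",
                              "ContentType": "video/mp4"})
```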

Comparing models and humans

The results reported in this paper can be reproduced by running the Jupyter notebooks contained in the analysis directory.

  1. Downloading results. To download the "raw" human and model prediction behavior, navigate to the analysis directory and execute the following command at the command line: python download_results.py. This script will fetch several CSV files and download them to subdirectories within results/csv. If this does not work, please download the zipped folder (csv) from https://physics-benchmarking-neurips2021-dataset.s3.amazonaws.com/model_human_results.zip and move it to the results directory (a Python sketch of this fallback appears after this list).
  2. Reproducing analyses. To reproduce the key analyses reported in the paper, please run the following notebooks in this sequence:
    • summarize_human_model_behavior.ipynb: The purpose of this notebook is to:
      • Apply preprocessing to human behavioral data
      • Visualize distribution and compute summary statistics over human physical judgments
      • Visualize distribution and compute summary statistics over model physical judgments
      • Conduct human-model comparisons
      • Output summary CSVs that can be used for further statistical modeling and for creating publication-quality visualizations
    • inference_human_model_behavior.ipynb: The purpose of this notebook is to:
      • Visualize human and model prediction accuracy (proportion correct)
      • Visualize average-human and model agreement (RMSE)
      • Visualize human-human and model-human agreement (Cohen's kappa)
      • Compare performance between models
    • paper_plots.ipynb: The purpose of this notebook is to create publication-quality figures for inclusion in the paper.
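If download_results.py fails, here is a minimal sketch of the manual fallback from step 1, fetching the zipped csv folder and extracting it into results/:

```python
import io
import urllib.request
import zipfile

URL = ("https://physics-benchmarking-neurips2021-dataset"
       ".s3.amazonaws.com/model_human_results.zip")

# Download the zipped csv folder and extract it into results/.
with urllib.request.urlopen(URL) as resp:
    zipfile.ZipFile(io.BytesIO(resp.read())).extractall("results")
```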