Official implementation of particle-based models (GNS and DPI-Net) on the Physion dataset.

Overview

Physion: Evaluating Physical Prediction from Vision in Humans and Machines [paper]

Daniel M. Bear, Elias Wang, Damian Mrowca, Felix J. Binder, Hsiao-Yu Fish Tung, R.T. Pramod, Cameron Holdaway, Sirui Tao, Kevin Smith, Fan-Yun Sun, Li Fei-Fei, Nancy Kanwisher, Joshua B. Tenenbaum, Daniel L.K. Yamins, Judith E. Fan

This is the official implementation of particle-based models (GNS and DPI-Net) on the Physion dataset. The code is built on the original implementation of DPI-Net (https://github.com/YunzhuLi/DPI-Net).

Contact: [email protected] (Fish Tung)

Papers of GNS and DPI-Net:

Learning to Simulate Complex Physics with Graph Networks [paper]

Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, Peter W. Battaglia

Learning Particle Dynamics for Manipulating Rigid Bodies, Deformable Objects, and Fluids [website] [paper]

Yunzhu Li, Jiajun Wu, Russ Tedrake, Joshua B. Tenenbaum, Antonio Torralba

Demo

Rollout from our learned model (left is ground truth, right is prediction)

(Demo GIFs: Dominoes, Roll, Contain, Drape)

Installation

Clone this repo:

git clone https://github.com/htung0101/DPI-Net-p.git
cd DPI-Net-p
git submodule update --init --recursive

Install Dependencies if using Conda

For Conda users, we provide an installation script:

bash ./scripts/conda_deps.sh
pip install pyyaml

To use TensorBoard for training visualization:

pip install tensorboardX
pip install tensorboard

Install binvox

We use binvox to convert object meshes into particles. To use binvox, download it from https://www.patrickmin.com/binvox/, put it under ./bin, and include it in your path with

export PATH=$PATH:$PWD/bin

You might need to run chmod 777 binvox to make the file executable.
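For example, assuming the binvox binary was downloaded to the current directory (the paths here are placeholders):

mkdir -p bin
mv ./binvox ./bin/binvox
chmod 777 ./bin/binvox
export PATH=$PATH:$PWD/bin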

Set up your own data path

Open paths.yaml and write your own paths there. You can set up different paths for different machines under different user names.
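As a rough sketch of what paths.yaml might contain (dpi_data_dir and out_dir are referenced later in this README; the machine/user nesting and the other key names are illustrative assumptions, so follow the template in the repo):

[MACHINE_NAME]:
  [USER_NAME]:
    data_dir: /path/to/physion/raw_data
    dpi_data_dir: /path/to/physion/particle_data
    out_dir: /path/to/physion/training_output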

Preprocessing the Physion dataset

1) We need to convert the mesh scenes into particle scenes. This command generates a separate folder (dpi_data_dir specified in paths.yaml) that holds the data for the particle-based models:

bash run_preprocessing_tdw_cheap.sh [SCENARIO_NAME] [MODE]

e.g., bash run_preprocessing_tdw_cheap.sh Dominoes train. SCENARIO_NAME can be one of the following: Dominoes, Collide, Support, Link, Contain, Roll, Drop, or Drape. MODE can be either train or test.

You can visualize the original videos and the generated particle scenes with

python preprocessing_tdw_cheap.py --scenario Dominoes --mode "train" --visualization 1

There will be videos generated under the folder vispy.

2) Then, generate train.txt and valid.txt files that indicate the trials you want to use for training and validation:

python create_train_valid.py

You can also design your own split: just put the trial names into one txt file.
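A minimal sketch of building a custom split file (one trial name per line); the directory layout and file names below are assumptions, so adapt them to your dpi_data_dir:

# build_split.py (hypothetical helper, not part of the repo)
import os

scenario_dir = "/path/to/dpi_data_dir/Dominoes/train"  # placeholder path
trials = sorted(os.listdir(scenario_dir))
with open("my_split.txt", "w") as f:
    for name in trials[:100]:  # e.g., keep the first 100 trials
        f.write(name + "\n")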

3) For evaluation on the red-hits-yellow prediction, we can get the binary red-hits-yellow label txt file from the test dataset with

bash run_get_label_txt.sh [SCENARIO_NAME] test

This will generate a folder called labels under your output folder dpi_data_dir. In that folder, each scenario has a corresponding label file called [SCENARIO_NAME].txt.

Training

Ok, now we are ready to start training the models. You can use the following commands to train from scratch.

  • Train GNS
    bash scripts/train_gns.sh [SCENARIO_NAME] [GPU_ID]

SCENARIO_NAME can be one of the following: Dominoes, Collide, Support, Link, Contain, Roll, Drop, or Drape.

  • Train DPI
    bash scripts/train_dpi.sh [SCENARIO_NAME] [GPU_ID]

Our implementation differs from the original DPI paper in two ways: (1) our model takes relative positions as inputs instead of absolute positions, and (2) our model is trained with injected noise. Both features are suggested in the GNS paper, and we found them to be critical for the models to generalize well to unseen scenes.
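Conceptually, the two changes look roughly like the following (a minimal sketch, not the repository's actual code; the array names and noise scale are assumptions):

import numpy as np

def build_edge_features(positions, senders, receivers):
    # (1) Relative positions: edge features encode receiver - sender offsets
    # rather than feeding absolute particle coordinates to the network.
    rel = positions[receivers] - positions[senders]
    dist = np.linalg.norm(rel, axis=-1, keepdims=True)
    return np.concatenate([rel, dist], axis=-1)

def add_training_noise(positions, noise_std=3e-4):
    # (2) Injected noise: perturb input particle positions during training
    # so the model learns to correct small rollout errors (noise_std is illustrative).
    return positions + np.random.normal(0.0, noise_std, size=positions.shape)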

  • Train with multiple scenarios

You can also train with more than one scenario by passing multiple scenarios to the dataf argument:

 python train.py  --env TDWdominoes --model_name GNS --log_per_iter 1000 --training_fpt 3 --ckp_per_iter 5000 --floor_cheat 1  --dataf "Dominoes, Collide, Support, Link, Roll, Drop, Contain, Drape" --outf "all_gns"
  • Visualize your training progress

Models and model logs are saved under [out_dir]/dump/dump_TDWdominoes. You can visualize the training progress with TensorBoard:

tensorboard --logdir MODEL_NAME/log

Evaluation

  • Evaluate GNS
bash scripts/eval_gns.sh [TRAIN_SCENARIO_NAME] [EPOCH] [ITER] [Test SCENARIO_NAME] [GPU_ID]

You can get the prediction txt file under eval/eval_TDWdominoes/[MODEL_NAME], e.g., test-Drape.txt, which contains the results of testing the model on the Drape scenario. You can visualize the results with the additional argument --vis 1.
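For example, a possible invocation (the epoch and iteration numbers below are placeholders; use the checkpoint you actually trained):

bash scripts/eval_gns.sh Dominoes 0 100000 Drape 0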

  • Evaluate GNS-Ransac
bash scripts/eval_gns_ransac.sh [TRAIN_SCENARIO_NAME] [EPOCH] [ITER] [Test SCENARIO_NAME] [GPU_ID]
  • Evaluate DPI
bash scripts/eval_dpi.sh [TRAIN_SCENARIO_NAME] [EPOCH] [ITER] [Test SCENARIO_NAME] [GPU_ID]
  • Evaluate models trained on multiple scenarios. Here we provide some examples of evaluating models trained on all scenarios.
bash eval_all_gns.sh [EPOCH] [ITER] [Test SCENARIO_NAME] [GPU_ID]
bash eval_all_dpi.sh [EPOCH] [ITER] [Test SCENARIO_NAME] [GPU_ID]
bash eval_all_gns_ransac.sh [EPOCH] [ITER] [Test SCENARIO_NAME] [GPU_ID]
  • Visualize trained models. Here we provide an example of visualizing the rollout results from a trained model.
bash vis_gns.sh [EPOCH] [ITER] [Test SCENARIO_NAME] [GPU_ID]

You can find the visualization under eval/eval_TDWdominoes/[MODEL_NAME]/test-[Scenario]. You should see a gif of the original RGB videos, and another gif showing a side-by-side comparison of the ground-truth particle scenes and the predicted particle scenes.

Citing Physion

If you find this codebase useful in your research, please consider citing:

@inproceedings{bear2021physion,
    title={Physion: Evaluating Physical Prediction from Vision in Humans and Machines},
    author={Daniel M. Bear and
           Elias Wang and
           Damian Mrowca and
           Felix J. Binder and
           Hsiao{-}Yu Fish Tung and
           R. T. Pramod and
           Cameron Holdaway and
           Sirui Tao and
           Kevin A. Smith and
           Fan{-}Yun Sun and
           Li Fei{-}Fei and
           Nancy Kanwisher and
           Joshua B. Tenenbaum and
           Daniel L. K. Yamins and
           Judith E. Fan},
    url={https://arxiv.org/abs/2106.08261},
    archivePrefix={arXiv},
    eprint={2106.08261},
    year={2021}
}