StanfordHCI/homeview-ml

Baseline model for Augmented Home Assistant
Dataset Preparation

Step 1. Place the generated files in the directory vh.[name]/raw/. Make sure the frame count exceeds [n_train] set in config.py. Each frame should consist of one frame_id.json, [n_cameras] frame_id-camera_id-point_cloud.exr files, and [n_cameras] frame_id-camera_id-rgb.png files.
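
Before running the conversion, the raw directory can be sanity-checked. The sketch below is only an illustration: it assumes config.py exposes n_train and n_cameras as module-level values and that camera ids are the integers 0..n_cameras-1, neither of which is guaranteed by this README.

    import os
    import sys
    from config import n_train, n_cameras  # assumed module-level settings in config.py

    name = sys.argv[1]
    raw_dir = f"vh.{name}/raw"

    # Frame ids are taken from the per-frame JSON files.
    frame_ids = sorted(f[:-len(".json")] for f in os.listdir(raw_dir) if f.endswith(".json"))
    assert len(frame_ids) > n_train, f"need more than {n_train} frames, found {len(frame_ids)}"

    # Every frame needs one point cloud and one RGB image per camera.
    for frame_id in frame_ids:
        for camera_id in range(n_cameras):  # assumed 0-based integer camera ids
            for suffix in ("point_cloud.exr", "rgb.png"):
                path = os.path.join(raw_dir, f"{frame_id}-{camera_id}-{suffix}")
                assert os.path.exists(path), f"missing {path}"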

Step 2. Convert the raw frames into a dataset. The first [n_train] frames are used for training and the rest for evaluation.

python vh.py [name]

train.pth and eval.pth will be generated and saved in vh.[name]/.
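
To check the conversion, the generated files can be loaded back with PyTorch. This is a minimal sketch; the README does not document what vh.py stores in these files, so it only reports the type and length of whatever was saved.

    import sys
    import torch

    name = sys.argv[1]
    train_data = torch.load(f"vh.{name}/train.pth")
    eval_data = torch.load(f"vh.{name}/eval.pth")

    # The saved objects may be tensors, dicts, or custom dataset classes.
    for label, data in (("train", train_data), ("eval", eval_data)):
        length = len(data) if hasattr(data, "__len__") else "n/a"
        print(label, type(data).__name__, length)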

Train

python train.py [name]

Test

Specify the frame id [eval_id] for evaluation.

python test.py [name] [eval_id]
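
For example, with a hypothetical scene named kitchen and frame 120 chosen from the evaluation split, the full pipeline is:

python vh.py kitchen
python train.py kitchen
python test.py kitchen 120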

Demo-Backend

1. Install requirements

pip install flask flask-compress

2. Prepare chunks

Prepare them locally:

python localize.py [name]

Or download the chunks here, then extract them to vh.[name]/chunks.
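
If the download arrives as a single archive, the standard library can unpack it into place. The filename chunks.zip and the scene name kitchen below are hypothetical; adjust them to the actual archive and dataset name.

    import shutil

    # Infers the archive format (zip, tar, tar.gz, ...) from the file extension.
    shutil.unpack_archive("chunks.zip", "vh.kitchen/chunks")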

3. Run the backend

python app.py [name]
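
Once the backend is up, a quick request confirms it is serving. Flask's development server defaults to port 5000; the root route below is only an assumption, since the actual endpoints are defined in app.py, and the requests package is an extra client-side dependency.

    import requests

    # Hypothetical probe of the local Flask server's root URL.
    response = requests.get("http://127.0.0.1:5000/")
    print(response.status_code)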
