# UNICORN 🦄

Webpage | Paper | BibTex
PyTorch implementation of the paper "Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency". Check out our webpage for details!

If you find this code useful, don't forget to star the repo!
```bibtex
@article{monnier2022unicorn,
  title={{Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency}},
  author={Monnier, Tom and Fisher, Matthew and Efros, Alexei A and Aubry, Mathieu},
  journal={arXiv:2204.10310 [cs]},
  year={2022},
}
```
## Installation 👷
### 1. Create conda environment 🔧
```bash
conda env create -f environment.yml
conda activate unicorn
```
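Optionally, a quick sanity check (not part of the official setup) that the environment resolved a CUDA-enabled PyTorch build:

```bash
# should print the torch version and True if a GPU is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```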
Optional: some monitoring routines are implemented; you can use them by specifying your visdom port in the config file. You will need to install visdom from source beforehand:
```bash
git clone https://github.com/facebookresearch/visdom
cd visdom && pip install -e .
```
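Once installed, start a visdom server before training; 8097 is visdom's default port, but you can pick any free port as long as you put the same value in your config (a usage sketch using visdom's standard entry point):

```bash
# serve the visdom dashboard on port 8097; set the same port in the config
python -m visdom.server -port 8097
```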
### 2. Download datasets ⬇️
```bash
bash scripts/download_data.sh
```
This command will download one of the following datasets:
- **ShapeNet NMR**: paper / NMR paper / dataset (33GB, thanks to the DVR team for hosting the data)
- **CUB-200**: paper / webpage / dataset (1GB)
- **Pascal3D+ Cars**: paper / webpage (including FTP download link, 7.5GB) / UCMR annotations (thanks to the UCMR team for releasing them)
- **CompCars**: paper / webpage / dataset (12GB, thanks to the GIRAFFE team for hosting the data)
- **LSUN**: paper / webpage / horse dataset (69GB) / moto dataset (42GB)
### 3. Download pretrained models ⬇️
```bash
bash scripts/download_model.sh
```
This command will download one of the following models:
- `car.pkl` trained on CompCars: gdrive link
- `bird.pkl` trained on CUB-200: gdrive link
- `moto.pkl` trained on LSUN Motorbike: gdrive link
- `horse.pkl` trained on LSUN Horse: gdrive link
- `sn_*.pkl` trained on each ShapeNet category: airplane, bench, cabinet, car, chair, display, lamp, phone, rifle, sofa, speaker, table, vessel
NB: gdown may occasionally hang; if so, download the models manually via the gdrive links and move them to the `models` folder.
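As a fallback, a manual download could look like the sketch below; `<GDRIVE_FILE_ID>` is a placeholder for the id found in the corresponding gdrive link (kept elided here):

```bash
# hypothetical manual download of the car checkpoint with gdown
# (recent gdown versions accept a bare file id; older ones need --id)
pip install gdown
gdown <GDRIVE_FILE_ID> -O models/car.pkl
```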
## How to use 🚀
### 1. 3D reconstruction of car images 🚘
You first need to download the car model (see above), then launch:
```bash
cuda=gpu_id model=car.pkl input=demo ./scripts/reconstruct.sh
```
where:
- `gpu_id` is a target cuda device id,
- `car.pkl` corresponds to a pretrained model,
- `demo` is a folder containing the target images.
It will create a folder `demo_rec` containing the reconstructed meshes (`.obj` format + GIF visualizations).
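For example, with GPU 0 and the pretrained car model downloaded above, a concrete invocation looks like this (illustrative values only):

```bash
cuda=0 model=car.pkl input=demo ./scripts/reconstruct.sh
ls demo_rec/  # reconstructed .obj meshes and GIF visualizations
```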
### 2. Reproduce our results 📊
To launch a training from scratch, run:
```bash
cuda=gpu_id config=filename.yml tag=run_tag ./scripts/pipeline.sh
```
where:
- `gpu_id` is a target cuda device id,
- `filename.yml` is a YAML config located in the `configs` folder,
- `run_tag` is a tag for the experiment.
Results are saved at `runs/${DATASET}/${DATE}_${run_tag}`, where `DATASET` is the dataset name specified in `filename.yml` and `DATE` is the current date in `mmdd` format. Visual training results, such as reconstruction examples, are saved there as well. Available configs are:
- `sn/*.yml` for each ShapeNet category
- `car.yml` for the CompCars dataset
- `cub.yml` for the CUB-200 dataset
- `horse.yml` for the LSUN Horse dataset
- `moto.yml` for the LSUN Motorbike dataset
- `p3d_car.yml` for the Pascal3D+ Car dataset
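For instance, reproducing the bird results from scratch on GPU 0 could look like the following (the tag is arbitrary, and the exact `runs/` subfolder depends on the dataset name set in the config):

```bash
# illustrative invocation: train the CUB-200 model on GPU 0
cuda=0 config=cub.yml tag=cub_baseline ./scripts/pipeline.sh
# outputs land in runs/${DATASET}/${DATE}_cub_baseline (see above)
```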
### 3. Train on a custom dataset 🔮
If you want to learn a model for a custom object category, here are the key things you need to do:
- put your images in a `custom_name` folder inside the `datasets` folder,
- write a config `custom.yml` with `custom_name` as `dataset.name` and move it to the `configs` folder (as a rule of thumb for the progressive-conditioning milestones, use the number of epochs corresponding to 500k iterations for each stage; see the sketch after the command below),
- launch training with:
```bash
cuda=gpu_id config=custom.yml tag=custom_run_tag ./scripts/pipeline.sh
```
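Rather than writing `custom.yml` from scratch, a reasonable workflow (a suggestion, not an official recipe) is to start from one of the configs listed above and adapt it; only `dataset.name` is prescribed by the steps above:

```bash
# start from an existing config and adapt it to the new category
cp configs/cub.yml configs/custom.yml
# then edit configs/custom.yml:
#   - set dataset.name to custom_name (matching datasets/custom_name)
#   - set the progressive-conditioning milestones to the epoch counts
#     corresponding to ~500k iterations per stage for your dataset size
```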
## Further information 📚
If you like this project, check out related works from our group:
- Monnier et al. - Unsupervised Layered Image Decomposition into Object Prototypes (ICCV 2021)
- Monnier et al. - Deep Transformation Invariant Clustering (NeurIPS 2020)
- Deprelle et al. - Learning elementary structures for 3D shape generation and matching (NeurIPS 2019)
- Groueix et al. - AtlasNet: A Papier-Mache Approach to Learning 3D Surface Generation (CVPR 2018)