Source code for ZePHyR: Zero-shot Pose Hypothesis Rating @ ICRA 2021

Overview

ZePHyR: Zero-shot Pose Hypothesis Rating

ZePHyR is a zero-shot 6D object pose estimation pipeline. Its core is a learned scoring function that compares the sensor observation to a sparse object rendering of each candidate pose hypothesis. We use PointNet++ as the network backbone and train and test on the YCB-V and LM-O datasets.

[ArXiv] [Project Page] [Video] [BibTex]

ZePHyR pipeline animation

Get Started

First, check out this repo:

git clone --recurse-submodules git@github.com:r-pad/zephyr.git

Set up environment

  1. We recommend building the environment and installing all required packages using Anaconda.
conda env create -n zephyr --file zephyr_env.yml
conda activate zephyr
  2. Install the packages required for compiling the C++ module.
sudo apt-get install build-essential cmake libopencv-dev python-numpy
  3. Compile the C++ library for Python bindings in the conda virtual environment.
mkdir build
cd build
cmake .. -DPYTHON_EXECUTABLE=$(python -c "import sys; print(sys.executable)") -DPYTHON_INCLUDE_DIR=$(python -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") -DPYTHON_LIBRARY=$(python -c "import distutils.sysconfig as sysconfig; print(sysconfig.get_config_var('LIBDIR'))")
make; make install
  4. Install the current Python package.
cd .. # move to the root folder of this repo
pip install -e .
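
After installation, a quick sanity check such as the one below can confirm that the environment is usable. This is a minimal sketch, assuming the package installs under the name zephyr (as set up by pip install -e . above).

# Minimal environment sanity check (assumes the installed package name is `zephyr`).
import torch
import zephyr

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("zephyr package location:", zephyr.__file__)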

Download pre-processed dataset

Download the pre-processed training and testing data (ycbv_preprocessed.zip, lmo_preprocessed.zip and ppf_hypos.zip) from this Google Drive link and unzip them in the python/zephyr/data folder. The unzipped data takes around 66 GB of storage in total.

The following commands need to be run in the python/zephyr/ folder.

cd python/zephyr/

Example script to run the network

To use the network, an example is provided in notebooks/TestExample.ipynb. In the example script, a datapoint is loaded from the LM-O dataset provided by the BOP Challenge, and the pose hypotheses are provided by the PPF algorithm (extracted from ppf_hypos.zip). Despite the complex data-loading code, only the following data about the observation and the model point cloud is needed to run the network (a minimal sketch of these inputs follows the list):

  • img: RGB image, np.ndarray of size (H, W, 3) in np.uint8
  • depth: depth map, np.ndarray of size (H, W) in np.float, in meters
  • cam_K: camera intrinsic matrix, np.ndarray of size (3, 3) in np.float
  • model_colors: colors of model point cloud, np.ndarray of size (N, 3) in float, scaled in [0, 1]
  • model_points: xyz coordinates of model point cloud, np.ndarray of size (N, 3) in float, in meters
  • model_normals: normal vectors of model point cloud, np.ndarray of size (N, 3) in float, each L2 normalized
  • pose_hypos: pose hypotheses in camera frame, np.ndarray of size (K, 4, 4) in float
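
For reference, the snippet below builds dummy arrays with the shapes and dtypes listed above. The sizes (H, W, N, K) and values are placeholders only; real inputs should come from the dataloader shown in notebooks/TestExample.ipynb.

import numpy as np

# Placeholder sizes; real values depend on the sensor and the object model.
H, W, N, K = 480, 640, 2048, 100

img = np.zeros((H, W, 3), dtype=np.uint8)        # RGB image
depth = np.zeros((H, W), dtype=np.float64)       # depth map, in meters
cam_K = np.eye(3, dtype=np.float64)              # camera intrinsic matrix
model_colors = np.random.rand(N, 3)              # model colors, scaled in [0, 1]
model_points = np.random.rand(N, 3) * 0.1        # model xyz coordinates, in meters
model_normals = np.random.randn(N, 3)
model_normals /= np.linalg.norm(model_normals, axis=1, keepdims=True)  # L2-normalize
pose_hypos = np.tile(np.eye(4), (K, 1, 1))       # K pose hypotheses in camera frame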

Run PPF algorithm using HALCON software

The PPF algorithm we use is the surface matching function implemented in the MVTec HALCON software, whose recent versions ship with a Python interface. We provide a simple wrapper that calls create_surface_model() and find_surface_model() to get the pose hypotheses. See notebooks/TestExample.ipynb for how to use it.

The wrapper requires HALCON 21.05 to be installed. HALCON is commercial software, but free licenses are available for students.

If you don't have access to HALCON, sets of pre-estimated pose hypotheses are provided in the pre-processed dataset.

Test the network

Download the pretrained PyTorch model checkpoints from this Google Drive link and unzip them in the python/zephyr/ckpts/ folder. We provide 3 checkpoints: two trained on YCB-V objects with odd IDs (final_ycbv.ckpt) and even IDs (final_ycbv_valodd.ckpt) respectively, and one trained on the LM objects that are not in the LM-O dataset (final_lmo.ckpt).

Test on YCB-V dataset

Test on the YCB-V dataset using the model trained on objects with odd IDs

python test.py \
    --model_name pn2 \
    --dataset_root ./data/ycb/matches_data_test/ \
    --dataset_name ycbv \
    --dataset HSVD_diff_uv_norm \
    --no_valid_proj --no_valid_depth \
    --loss_cutoff log \
    --exp_name final \
    --resume_path ./ckpts/final_ycbv.ckpt

Test on the YCB-V dataset using the model trained on objects with even IDs

python test.py \
    --model_name pn2 \
    --dataset_root ./data/ycb/matches_data_test/ \
    --dataset_name ycbv \
    --dataset HSVD_diff_uv_norm \
    --no_valid_proj --no_valid_depth \
    --loss_cutoff log \
    --exp_name final \
    --resume_path ./ckpts/final_ycbv_valodd.ckpt

Test on LM-O dataset

python test.py \
    --model_name pn2 \
    --dataset_root ./data/lmo/matches_data_test/ \
    --dataset_name lmo \
    --dataset HSVD_diff_uv_norm \
    --no_valid_proj --no_valid_depth \
    --loss_cutoff log \
    --exp_name final \
    --resume_path ./ckpts/final_lmo.ckpt

The testing results will be stored in test_logs, and the results in BOP Challenge format will be in test_logs/bop_results. Please refer to bop_toolkit for converting the results to the BOP Average Recall scores used in the BOP Challenge.
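
To quickly inspect the BOP-format output before running bop_toolkit, something like the snippet below can help. This is only a sketch: the CSV file name is a placeholder, so substitute the file actually written to test_logs/bop_results.

import pandas as pd

# The file name below is a placeholder; use the CSV actually produced in test_logs/bop_results.
results = pd.read_csv("test_logs/bop_results/zephyr_lmo-test.csv")

# In BOP format, each row contains scene_id, im_id, obj_id, score,
# a flattened 3x3 rotation R, a translation t (in millimeters), and the runtime.
print(results.columns.tolist())
print(results.head())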

Train the network

Train on YCB-V dataset

These commands will train the network on the real-world images in the YCB-Video training set.

On object Set 1 (objects with odd IDs)

python train.py \
    --model_name pn2 \
    --dataset_root ./data/ycb/matches_data_train/ \
    --dataset_name ycbv \
    --dataset HSVD_diff_uv_norm \
    --no_valid_proj --no_valid_depth \
    --loss_cutoff log \
    --exp_name final

On object Set 2 (objects with even IDs)

python train.py \
    --model_name pn2 \
    --dataset_root ./data/ycb/matches_data_train/ \
    --dataset_name ycbv \
    --dataset HSVD_diff_uv_norm \
    --no_valid_proj --no_valid_depth \
    --loss_cutoff log \
    --val_obj odd \
    --exp_name final_valodd

Train on LM-O synthetic dataset

This command will train the network on the synthetic images provided by BlenderProc4BOP. We take lm_train_pbr.zip as the training set, but the network is only supervised on the objects that are in Linemod (LM) but not in Linemod-Occluded (LM-O), i.e. the training object IDs are 2, 3, 4, 7, 13, 14, 15 (a short check of this ID set follows the command below).

python train.py \
    --model_name pn2 \
    --dataset_root ./data/lmo/matches_data_train/ \
    --dataset_name lmo \
    --dataset HSVD_diff_uv_norm \
    --no_valid_proj --no_valid_depth \
    --loss_cutoff log \
    --exp_name final
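
For clarity, the training object IDs above are simply the LM objects that do not appear in LM-O. The short check below reproduces the list; it assumes the standard BOP object IDs for LM (1-15) and LM-O (1, 5, 6, 8, 9, 10, 11, 12) and is not part of the training code.

# LM objects not present in LM-O are the ones supervised during training.
lm_objects = set(range(1, 16))               # LM object IDs 1-15
lmo_objects = {1, 5, 6, 8, 9, 10, 11, 12}    # LM-O object IDs
print(sorted(lm_objects - lmo_objects))      # [2, 3, 4, 7, 13, 14, 15]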

Cite

If you find this codebase useful in your research, please consider citing:

@inproceedings{icra2021zephyr,
    title={ZePHyR: Zero-shot Pose Hypothesis Rating},
    author={Okorn, Brian and Gu, Qiao and Hebert, Martial and Held, David},
    booktitle={2021 International Conference on Robotics and Automation (ICRA)},
    year={2021}
}
