Source code for ZePHyR: Zero-shot Pose Hypothesis Rating @ ICRA 2021

Overview

ZePHyR: Zero-shot Pose Hypothesis Rating

ZePHyR is a zero-shot 6D object pose estimation pipeline. At its core is a learned scoring function that compares the sensor observation to a sparse object rendering of each candidate pose hypothesis. We use PointNet++ as the network backbone and train and test on the YCB-V and LM-O datasets.

[ArXiv] [Project Page] [Video] [BibTex]

ZePHyR pipeline animation

Get Started

First, check out this repo:

git clone --recurse-submodules [email protected]:r-pad/zephyr.git

Set up environment

  1. We recommend building the environment and installing all required packages using Anaconda.
conda env create -n zephyr --file zephyr_env.yml
conda activate zephyr
  2. Install the required packages for compiling the C++ module
sudo apt-get install build-essential cmake libopencv-dev python-numpy
  3. Compile the C++ library for Python bindings in the conda virtual environment
mkdir build
cd build
cmake .. -DPYTHON_EXECUTABLE=$(python -c "import sys; print(sys.executable)") -DPYTHON_INCLUDE_DIR=$(python -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())")  -DPYTHON_LIBRARY=$(python -c "import distutils.sysconfig as sysconfig; print(sysconfig.get_config_var('LIBDIR'))")
make; make install
  4. Install the current Python package
cd .. # move to the root folder of this repo
pip install -e .
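
A quick sanity check after the install is to import the package from inside the conda environment (the import name zephyr is assumed from this repo's package layout):

python -c "import zephyr; print(zephyr.__file__)"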

Download pre-processed dataset

Download the pre-processed training and testing data (ycbv_preprocessed.zip, lmo_preprocessed.zip and ppf_hypos.zip) from this Google Drive link and unzip them in the python/zephyr/data folder. The unzipped data takes around 66GB of storage in total.
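
If you prefer to script the extraction, the following Python snippet (run from the repo root, assuming the three archives were saved into python/zephyr/data) is equivalent to unzipping them manually:

import zipfile, pathlib

data_dir = pathlib.Path("python/zephyr/data")
for name in ["ycbv_preprocessed.zip", "lmo_preprocessed.zip", "ppf_hypos.zip"]:
    # Extract each downloaded archive in place inside python/zephyr/data
    with zipfile.ZipFile(data_dir / name) as zf:
        zf.extractall(data_dir)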

The following commands need to be run in the python/zephyr/ folder.

cd python/zephyr/

Example script to run the network

To use the network, an example is provided in notebooks/TestExample.ipynb. In the example script, a data point is loaded from the LM-O dataset provided by the BOP Challenge, and the pose hypotheses are provided by the PPF algorithm (extracted from ppf_hypos.zip). Despite the complex data-loading code, only the following data from the observation and the model point cloud is needed to run the network (a minimal sketch of how these pieces fit together follows the list):

  • img: RGB image, np.ndarray of size (H, W, 3) in np.uint8
  • depth: depth map, np.ndarray of size (H, W) in np.float, in meters
  • cam_K: camera intrinsic matrix, np.ndarray of size (3, 3) in np.float
  • model_colors: colors of model point cloud, np.ndarray of size (N, 3) in float, scaled in [0, 1]
  • model_points: xyz coordinates of model point cloud, np.ndarray of size (N, 3) in float, in meters
  • model_normals: normal vectors of model point cloud, np.ndarray of size (N, 3) in float, each L2 normalized
  • pose_hypos: pose hypotheses in camera frame, np.ndarray of size (K, 4, 4) in float
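
The snippet below is a minimal, self-contained sketch of how these inputs fit together: it computes a naive depth-agreement score for each hypothesis, just to illustrate the projection-and-comparison step. It is not the learned scorer; the actual network consumes richer per-point comparison features, as shown in notebooks/TestExample.ipynb.

import numpy as np

def naive_hypothesis_scores(depth, cam_K, model_points, pose_hypos, tol=0.01):
    # Score each (4, 4) hypothesis by the fraction of model points whose predicted
    # depth agrees with the observed depth map within `tol` meters.
    H, W = depth.shape
    scores = np.zeros(len(pose_hypos))
    for k, pose in enumerate(pose_hypos):
        pts_cam = model_points @ pose[:3, :3].T + pose[:3, 3]    # model -> camera frame
        uvw = pts_cam @ cam_K.T                                  # pinhole projection
        uv = np.round(uvw[:, :2] / np.maximum(uvw[:, 2:3], 1e-9)).astype(int)
        valid = (pts_cam[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < W) \
                & (uv[:, 1] >= 0) & (uv[:, 1] < H)
        if not valid.any():
            continue
        observed = depth[uv[valid, 1], uv[valid, 0]]             # observed depth at projected pixels
        scores[k] = np.mean(np.abs(observed - pts_cam[valid, 2]) < tol)
    return scores

# best_pose = pose_hypos[np.argmax(naive_hypothesis_scores(depth, cam_K, model_points, pose_hypos))]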

Run PPF algorithm using HALCON software

The PPF algorithm we used is the surface matching function implemented in the MVTec HALCON software. Recent versions of HALCON ship with a Python interface, and we wrote a simple wrapper which calls create_surface_model() and find_surface_model() to obtain the pose hypotheses. See notebooks/TestExample.ipynb for how to use it.

The wrapper requires HALCON 21.05 to be installed. HALCON is commercial software, but free licenses are available for students.

If you don't have access to HALCON, sets of pre-estimated pose hypotheses are provided in the pre-processed dataset.
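
For context, surface matching operates on a scene point cloud rather than on the raw depth map: the object model is turned into a surface model by create_surface_model(), and find_surface_model() then matches it against a cloud backprojected from the depth image. The sketch below (plain numpy, independent of HALCON) shows that backprojection step using cam_K from the list above; the HALCON calls themselves are left to notebooks/TestExample.ipynb.

import numpy as np

def depth_to_point_cloud(depth, cam_K):
    # Backproject a metric depth map (H, W) into an (M, 3) camera-frame point cloud,
    # dropping pixels with no depth reading.
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    fx, fy = cam_K[0, 0], cam_K[1, 1]
    cx, cy = cam_K[0, 2], cam_K[1, 2]
    valid = depth > 0
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x[valid], y[valid], depth[valid]], axis=1)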

Test the network

Download the pretrained PyTorch model checkpoints from this Google Drive link and unzip them in the python/zephyr/ckpts/ folder. We provide 3 checkpoints: two trained on YCB-V objects with odd IDs (final_ycbv.ckpt) and even IDs (final_ycbv_valodd.ckpt) respectively, and one trained on LM objects that are not in the LM-O dataset (final_lmo.ckpt).

Test on YCB-V dataset

Test on the YCB-V dataset using the model trained on objects with odd ID

python test.py \
    --model_name pn2 \
    --dataset_root ./data/ycb/matches_data_test/ \
    --dataset_name ycbv \
    --dataset HSVD_diff_uv_norm \
    --no_valid_proj --no_valid_depth \
    --loss_cutoff log \
    --exp_name final \
    --resume_path ./ckpts/final_ycbv.ckpt

Test on the YCB-V dataset using the model trained on objects with even ID

python test.py \
    --model_name pn2 \
    --dataset_root ./data/ycb/matches_data_test/ \
    --dataset_name ycbv \
    --dataset HSVD_diff_uv_norm \
    --no_valid_proj --no_valid_depth \
    --loss_cutoff log \
    --exp_name final \
    --resume_path ./ckpts/final_ycbv_valodd.ckpt

Test on LM-O dataset

python test.py \
    --model_name pn2 \
    --dataset_root ./data/lmo/matches_data_test/ \
    --dataset_name lmo \
    --dataset HSVD_diff_uv_norm \
    --no_valid_proj --no_valid_depth \
    --loss_cutoff log \
    --exp_name final \
    --resume_path ./ckpts/final_lmo.ckpt

The testing results will be stored in test_logs, and the results in the BOP Challenge format will be in test_logs/bop_results. Please refer to bop_toolkit for converting the results into the BOP Average Recall scores used in the BOP Challenge.
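
For reference, each line of a BOP-format results CSV stores one estimate as scene_id, im_id, obj_id, score, a row-major 3x3 rotation R (9 space-separated values), a translation t in millimeters (3 values) and the run time in seconds. The helper below is only an illustrative sketch of that format, not the script's actual output code:

import numpy as np

def bop_result_line(scene_id, im_id, obj_id, score, R, t, time=-1.0):
    # Format one pose estimate as a single line of a BOP Challenge results CSV.
    R_str = " ".join(f"{v:.6f}" for v in np.asarray(R).reshape(-1))   # row-major 3x3 rotation
    t_str = " ".join(f"{v:.6f}" for v in np.asarray(t).reshape(-1))   # translation in millimeters
    return f"{scene_id},{im_id},{obj_id},{score:.6f},{R_str},{t_str},{time:.3f}"

# e.g. bop_result_line(2, 3, 5, 0.9, np.eye(3), [100.0, -20.0, 800.0])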

Train the network

Train on YCB-V dataset

These commands will train the network on the real-world images in the YCB-Video training set.

On object Set 1 (objects with odd ID)

python train.py \
    --model_name pn2 \
    --dataset_root ./data/ycb/matches_data_train/ \
    --dataset_name ycbv \
    --dataset HSVD_diff_uv_norm \
    --no_valid_proj --no_valid_depth \
    --loss_cutoff log \
    --exp_name final

On object Set 2 (objects with even ID)

python train.py \
    --model_name pn2 \
    --dataset_root ./data/ycb/matches_data_train/ \
    --dataset_name ycbv \
    --dataset HSVD_diff_uv_norm \
    --no_valid_proj --no_valid_depth \
    --loss_cutoff log \
    --val_obj odd \
    --exp_name final_valodd

Train on LM-O synthetic dataset

This command will train the network on the synthetic images provided by BlenderProc4BOP. We use lm_train_pbr.zip as the training set, but the network is only supervised on objects that are in Linemod but not in Linemod-Occluded (i.e. the training object IDs are 2, 3, 4, 7, 13, 14 and 15).

python train.py \
    --model_name pn2 \
    --dataset_root ./data/lmo/matches_data_train/ \
    --dataset_name lmo \
    --dataset HSVD_diff_uv_norm \
    --no_valid_proj --no_valid_depth \
    --loss_cutoff log \
    --exp_name final

Cite

If you find this codebase useful in your research, please consider citing:

@inproceedings{icra2021zephyr,
    title={ZePHyR: Zero-shot Pose Hypothesis Rating},
    author={Okorn, Brian and Gu, Qiao and Hebert, Martial and Held, David},
    booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
    year={2021}
}
