GPU Accelerated Non-rigid ICP for surface registration

Introduction

Previous non-rigid ICP algorithms are usually implemented on the CPU and need to solve a sparse least-squares problem, which is time consuming. In this repo, we implement a PyTorch version of the NICP algorithm based on the paper by Amberg et al. Specifically, we leverage AMSGrad to optimize the linear regression, and then find the nearest points iteratively. Additionally, we smooth the computed mesh with a Laplacian smoothness term, which also keeps the wireframe neat.
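As a rough illustration of this pipeline, the sketch below shows what one optimization stage might look like with PyTorch and pytorch3d. It is a minimal sketch under simplifying assumptions (per-vertex displacements instead of per-vertex affine transforms, and hypothetical loss weights); it is not the exact code of this repo.

# Minimal sketch of one NICP-style optimization stage (illustrative only; the
# variable names, loss weights, and the use of per-vertex offsets instead of
# per-vertex affine transforms are simplifying assumptions, not this repo's code).
import torch
from pytorch3d.loss import mesh_laplacian_smoothing
from pytorch3d.ops import knn_points
from pytorch3d.structures import Meshes

def nicp_stage(template_verts, template_faces, target_points,
               stiffness=10.0, laplacian_weight=1.0, iters=100):
    # Per-vertex displacements, optimized with AMSGrad (Adam with amsgrad=True).
    offsets = torch.zeros_like(template_verts, requires_grad=True)
    optimizer = torch.optim.Adam([offsets], lr=1e-3, amsgrad=True)

    for _ in range(iters):
        deformed = template_verts + offsets
        mesh = Meshes(verts=[deformed], faces=[template_faces])

        # Data term: squared distance from every vertex to its nearest target point.
        knn = knn_points(deformed.unsqueeze(0), target_points.unsqueeze(0), K=1)
        data_loss = knn.dists.mean()

        # Stiffness term: neighboring vertices should deform similarly
        # (a simplified stand-in for the per-edge stiffness of Amberg et al.).
        edges = mesh.edges_packed()
        stiffness_loss = ((offsets[edges[:, 0]] - offsets[edges[:, 1]]) ** 2).mean()

        # Laplacian smoothness term keeps the deformed surface and wireframe neat.
        laplacian_loss = mesh_laplacian_smoothing(mesh)

        loss = data_loss + stiffness * stiffness_loss + laplacian_weight * laplacian_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return (template_verts + offsets).detach()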


Quick Start

Install

We use Python 3.8 and CUDA 10.2 for the implementation. The code is tested on Ubuntu 20.04.

  • pytorch3d cannot be installed directly via pip install pytorch3d; for installation instructions, see the pytorch3d documentation.
  • For other packages, run
pip install -r requirements.txt
  • For the template face model, we currently use a processed version of the BFM face model from 3DMMfitting-pytorch. Download BFM09_model_info.mat from 3DMMfitting-pytorch and put it into the ./BFM folder.
  • For demo, run
python demo_nicp.py

We show demos for NICP mesh2mesh and NICP mesh2pointcloud registration. We provide two parameter sets:

milestones = set([50, 80, 100, 110, 120, 130, 140])
stiffness_weights = np.array([50, 20, 5, 2, 0.8, 0.5, 0.35, 0.2])
landmark_weights = np.array([5, 2, 0.5, 0, 0, 0, 0, 0])

This parameter set is used for registration on fine-grained meshes.

milestones = set([50, 100])
stiffness_weights = np.array([50, 20, 5])
landmark_weights = np.array([50, 20, 5])

This parameter set is used for registration on noisy point clouds.
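For reference, the sketch below illustrates how such a schedule might be consumed: each time the iteration counter reaches a milestone, the optimization moves on to the next (softer) stiffness and landmark weights. The loop body is a placeholder and an assumption for illustration, not the repo's actual training loop.

# Illustrative use of the schedules above (the loop and the placeholder call
# are assumptions for illustration, not this repo's actual code).
import numpy as np

milestones = set([50, 80, 100, 110, 120, 130, 140])
stiffness_weights = np.array([50, 20, 5, 2, 0.8, 0.5, 0.35, 0.2])
landmark_weights = np.array([5, 2, 0.5, 0, 0, 0, 0, 0])

stage = 0
for iteration in range(150):
    if iteration in milestones:
        stage += 1  # 7 milestones separate the 8 weight values
    stiffness = stiffness_weights[stage]
    landmark_weight = landmark_weights[stage]
    # ... run one NICP update with the current stiffness and landmark weights ...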

Template Model

You can also use your own template face model with manually specified landmarks.
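A hypothetical way to prepare such a custom template with pytorch3d is sketched below; the file names and the final registration call are assumptions, since the exact entry point depends on the repo's API.

# Hypothetical preparation of a custom template (file names and the final
# registration call are assumptions, not this repo's documented API).
import numpy as np
import torch
from pytorch3d.io import load_objs_as_meshes

device = torch.device('cuda:0')

# Your own template mesh and the indices of its manually specified landmarks.
template_mesh = load_objs_as_meshes(['my_template.obj'], device=device)
template_lm_index = torch.from_numpy(np.load('my_template_landmarks.npy')).long().to(device)

# The mesh and landmark indices would then replace the BFM template when
# calling the registration routine, e.g. something like:
# registered = non_rigid_icp_mesh2mesh(template_mesh, target_mesh, template_lm_index, target_lm_index, device)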

Todo

We have implemented some batchwise functions, but batchwise NICP is not supported yet. We will support batch NICP in future releases.


Comments
  • Lack of file “BFM09_model_info.mat”

    Traceback (most recent call last):
      File "demo_nicp.py", line 28, in <module>
        bfm_meshes, bfm_lm_index = load_bfm_model(torch.device('cuda:0'))
      File "/data/pytorch-nicp/bfm_model.py", line 15, in load_bfm_model
        bfm_meta_data = loadmat('BFM/BFM09_model_info.mat')
      File "/root/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/scipy/io/matlab/mio.py", line 224, in loadmat
        with _open_file_context(file_name, appendmat) as f:
      File "/root/anaconda3/envs/pytorch3d/lib/python3.8/contextlib.py", line 113, in __enter__
        return next(self.gen)
      File "/root/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/scipy/io/matlab/mio.py", line 17, in _open_file_context
        f, opened = _open_file(file_like, appendmat, mode)
      File "/root/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/scipy/io/matlab/mio.py", line 45, in _open_file
        return open(file_like, mode), True
    FileNotFoundError: [Errno 2] No such file or directory: 'BFM/BFM09_model_info.mat'

    In 3DMMfitting-pytorch, there are only these files: BFM_exp_idx.mat, BFM_front_idx.mat, facemodel_info.mat, README.md, select_vertex_id.mat, similarity_Lm3D_all.mat, std_exp.txt

  • What is the expected time needed for running demo_nicp.py?

    Hello,

    On my computer it seems quite slow to run demo_nicp.py. It took more than 1 minute to get final.obj. Is that correct?

    I ran AMM_NRR for non-rigid ICP registration with two 7000-vertex meshes. It needs about 1 second on the CPU on my computer. With a GPU, it might be possible to do the same work in less than 100 ms?

    Thank you!

  • Hi, with landmarks: `landmarks = torch.from_numpy(np.array(landmarks)).to(device).long()`, maybe you can reshape landmarks from torch.Size([1, 1, 68, 2]) to torch.Size([1, 68, 2])

    Originally posted by @wuhaozhe in https://github.com/wuhaozhe/pytorch-nicp/issues/3#issuecomment-971453681

    Hi! I got output as torch.Size([1, 68, 512, 3]), torch.Size([1, 68, 2]), torch.Size([1, 512, 512, 3]). I think the shapes of the following tensors are right, but I meet the same problem:

    lm_vertex = torch.gather(lm_vertex, 2, column_index)
    RuntimeError: CUDA error: device-side assert triggered

    landmarks = torch.from_numpy(np.array(landmarks)).to(device).long()
    
    row_index = landmarks[:, :, 1].view(landmarks.shape[0], -1)
    column_index = landmarks[:, :, 0].view(landmarks.shape[0], -1)
    row_index = row_index.unsqueeze(2).unsqueeze(3).expand(landmarks.shape[0], landmarks.shape[1], shape_img.shape[2], shape_img.shape[3])
    column_index = column_index.unsqueeze(1).unsqueeze(3).expand(landmarks.shape[0], landmarks.shape[1], landmarks.shape[1], shape_img.shape[3])
    print(row_index.shape, landmarks.shape, shape_img.shape)
    
  • RuntimeError

    Traceback (most recent call last):
      File "demo_nicp.py", line 27, in <module>
        target_lm_index, lm_mask = get_mesh_landmark(norm_meshes, dummy_render)
      File "/data/pytorch-nicp/landmark.py", line 37, in get_mesh_landmark
        row_index = row_index.unsqueeze(2).unsqueeze(3).expand(landmarks.shape[0], landmarks.shape[1], shape_img.shape[2], shape_img.shape[3])
    RuntimeError: The expanded size of the tensor (1) must match the existing size (2) at non-singleton dimension 1. Target sizes: [1, 1, 512, 3]. Tensor sizes: [1, 2, 1, 1]

    I have already configured the environment, but there seem to be some problems in the code. What can I do to solve this problem?

Releases
v0.1

Owner
Haozhe Wu
Research interests in Computer Vision and Machine Learning.