A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation (ICCV 2021)

Overview

This repository contains the official implementation of A-SDF, introduced in the paper A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation (ICCV 2021). The code is developed on the PyTorch framework (v1.6.0) with Python 3.7.6. This repo includes the training code and the generated data from shape2motion.

A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation (ICCV 2021)
Jiteng Mu, Weichao Qiu, Adam Kortylewski, Alan Yuille, Nuno Vasconcelos, Xiaolong Wang
ICCV 2021

The project page with more details is at https://jitengmu.github.io/A-SDF/.

Citation

If you find our code or method helpful, please use the following BibTex entry.

@article{mu2021asdf,
  author    = {Jiteng Mu and
               Weichao Qiu and
               Adam Kortylewski and
               Alan L. Yuille and
               Nuno Vasconcelos and
               Xiaolong Wang},
  title     = {{A-SDF:} Learning Disentangled Signed Distance Functions for Articulated
               Shape Representation},
  journal   = {arXiv preprint arXiv:2104.07645},
  year      = {2021},
}

Data preparation and layout

Please 1) download the dataset and put the data in the data/ directory, and 2) download the checkpoints and put each checkpoint in the corresponding examples/ directory, e.g. it should look like examples/laptop/laptop-asdf/Model_Parameters/1000.pth.
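As a quick sanity check, a small script along the following lines can confirm that the data and checkpoint are in the expected locations. The paths below are the ones used in this README; adjust them if your copies live elsewhere.

# Minimal sanity check for the layout described in this README.
# The listed paths follow the examples given above.
from pathlib import Path

expected = [
    Path("data/SdfSamples"),
    Path("examples/laptop/laptop-asdf/Model_Parameters/1000.pth"),
]

for p in expected:
    status = "ok" if p.exists() else "MISSING"
    print(f"{status:8s} {p}")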

The dataset is structured as follows, where the dataset name can be, e.g., shape2motion, shape2motion-1-view, shape2motion-2-view, or rbo:

data/
    SdfSamples/
        <dataset_name>/
            <class_name>/
                <instance_name>.npz
    SurfaceSamples/
        <dataset_name>/
            <class_name>/
                <instance_name>.ply
    NormalizationParameters/
        <dataset_name>/
            <class_name>/
                <instance_name>.ply
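The exact contents of the .npz sample files are defined by the data-generation scripts; assuming they follow the DeepSDF-style convention of separate "pos" and "neg" arrays of (x, y, z, sdf) samples, a file can be inspected roughly as below. The instance name used here is a placeholder.

# Inspect one SdfSamples file. The "pos"/"neg" keys are an assumption
# (DeepSDF convention); check the keys actually printed for your files.
import numpy as np

sample = np.load("data/SdfSamples/shape2motion/laptop/0001.npz")  # placeholder instance name
print(sample.files)
for key in sample.files:
    print(key, sample[key].shape)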

Splits of train/test files are stored in a simple JSON format. For examples, see examples/splits/.
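The split files nest dataset name, class name, and instance names; a minimal split could be written as in the sketch below (the instance names and the output filename are placeholders, so compare against the provided files in examples/splits/).

# Write a minimal train split in the nested JSON format
# (dataset -> class -> list of instance names). Names are placeholders.
import json

split = {
    "shape2motion": {
        "laptop": [
            "laptop_0001",
            "laptop_0002",
        ]
    }
}

with open("examples/splits/my_laptop_train.json", "w") as f:
    json.dump(split, f, indent=2)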

How to Use A-SDF

We use the laptop class as an illustration. Feel free to switch to stapler/washing_machine/door/oven/eyeglasses/refrigerator to explore other categories.

(a) Train a model

To train a model, run

python train.py -e examples/laptop/laptop-asdf/
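If you want to train several categories in a row, a small driver script can loop over the experiment directories. The directory naming below (examples/<class>/<class>-asdf/) mirrors the laptop example and is an assumption; match it to however your experiments are laid out.

# Train several categories sequentially. The experiment directory naming
# is assumed to follow the laptop example; adjust to your actual layout.
import subprocess

categories = ["laptop", "stapler", "washing_machine", "door",
              "oven", "eyeglasses", "refrigerator"]

for cls in categories:
    subprocess.run(
        ["python", "train.py", "-e", f"examples/{cls}/{cls}-asdf/"],
        check=True,
    )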

(b) Reconstruction

To use a trained model to reconstruct explicit mesh representations of shapes from the test set, run the following script, with -m recon_testset_ttt for inference with test-time adaptation and -m recon_testset otherwise.

python test.py -e examples/laptop/laptop-asdf/ -c 1000 -m recon_testset_ttt

To compute the chamfer distance, run:

python eval.py -e examples/laptop/laptop-asdf/ -c 1000 -m recon_testset_ttt
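eval.py implements the evaluation used in the paper; for intuition, the symmetric Chamfer distance between two sampled point sets can be sketched as below. This is a reference formula, not the repository's exact implementation.

# Symmetric Chamfer distance between two point sets, as a reference for
# what eval.py measures (not the repository's exact implementation).
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a, points_b):
    # Mean squared distance from each set to its nearest neighbor in the other set.
    d_ab, _ = cKDTree(points_b).query(points_a)
    d_ba, _ = cKDTree(points_a).query(points_b)
    return np.mean(d_ab ** 2) + np.mean(d_ba ** 2)

a = np.random.rand(1000, 3)
b = np.random.rand(1000, 3)
print(chamfer_distance(a, b))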

(c) Generation

To use a trained model to generate explicit meshes for unseen articulations (specified in asdf/asdf_reconstruct.py) of shapes from the test set, run the following scripts. Note that the -m mode should be consistent with the one used for reconstruction: -m generation_ttt for inference with test-time adaptation and -m generation otherwise.

python test.py -e examples/laptop/laptop-asdf/ -c 1000 -m generation_ttt
python eval.py -e examples/laptop/laptop-asdf/ -c 1000 -m generation_ttt

(d) Interpolation

To use a trained model to interpolate explicit meshes across unseen articulations (specified in asdf/asdf_reconstruct.py) of shapes from the test set, run:

python test.py -e examples/laptop/laptop-asdf/ -c 1000 -m inter_testset
python eval.py -e examples/laptop/laptop-asdf/ -c 1000 -m inter_testset
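Interpolation varies the articulation input while keeping the shape code fixed; conceptually, intermediate articulation angles between two observed joint states can be generated as below. This is only a sketch of the idea for a single revolute joint; the angles actually evaluated are specified in asdf/asdf_reconstruct.py.

# Generate intermediate articulation angles between two observed joint
# states (conceptual sketch for a single revolute joint; illustrative values).
import numpy as np

angle_start, angle_end = 30.0, 90.0   # degrees, illustrative values
steps = 5
interpolated = np.linspace(angle_start, angle_end, steps)
print(interpolated)  # [30. 45. 60. 75. 90.]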

(e) Partial Point Cloud

To use a trained model to reconstruct and generate explicit meshes from partial point clouds: (1) download the partial point cloud dataset laptop-1/2-view-0.025.zip from the dataset first, (2) put the laptop checkpoint trained on shape2motion in examples/laptop/laptop-asdf-1/2-view/, and (3) run the following scripts, where --dataset shape2motion-1-view is for partial point clouds generated from a single depth image and --dataset shape2motion-2-view for those generated from two depth images of different viewpoints; -m can be one of recon_testset/recon_testset_ttt/generation/generation_ttt, as in the previous experiments.

python test.py -e examples/laptop/laptop-asdf-1-view/ -c 1000 -m recon_testset_ttt/generation_ttt --dataset shape2motion-1-view
python eval.py -e examples/laptop/laptop-asdf-1-view/ -c 1000 -m recon_testset_ttt/generation_ttt
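The partial point clouds are obtained by back-projecting one or two depth images; assuming pinhole intrinsics, the back-projection step looks roughly like the generic sketch below. The released .zip files were produced by the dataset generation scripts, not by this snippet.

# Back-project a depth image into a partial point cloud with pinhole
# intrinsics (generic sketch, not the repository's generation code).
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

depth = np.random.rand(240, 320).astype(np.float32)  # placeholder depth map
points = depth_to_points(depth, fx=300.0, fy=300.0, cx=160.0, cy=120.0)
print(points.shape)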

(f) RBO dataset

To test a model on the RBO dataset: (1) download the generated partial point clouds of the laptop class from the RBO dataset (rbo_laptop_release_test.zip) from the dataset first, (2) put the laptop checkpoint trained on shape2motion in examples/laptop/laptop-asdf-rbo/, and (3) run the following:

python test.py -e examples/laptop/laptop-asdf-rbo/ -m recon_testset_ttt/generation_ttt -c 1000 --dataset rbo
python eval_rbo.py -e examples/laptop/laptop-asdf-rbo/ -m recon_testset_ttt/generation_ttt -c 1000

Dataset generation details are included in dataset_generation/rbo.

Data Generation

Stay tuned. We follow (1) ANSCH to create URDFs for the shape2motion dataset, (2) Manifold to create watertight meshes, and (3) a modified mesh_to_sdf to generate sampled points and SDF values.
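For reference, the stock mesh_to_sdf package can sample points and SDF values around a watertight mesh along these lines. The authors use a modified version, so treat this only as a starting point; the mesh path below is a placeholder.

# Sample SDF values around a watertight mesh with the stock mesh_to_sdf
# package (the authors use a modified version; this is only a reference).
import trimesh
from mesh_to_sdf import sample_sdf_near_surface

mesh = trimesh.load("watertight_laptop.obj")  # placeholder mesh path
points, sdf = sample_sdf_near_surface(mesh, number_of_points=250000)
print(points.shape, sdf.shape)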

Acknowledgement

The code is heavily based on Jeong Joon Park's DeepSDF from Facebook Research.
