A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation (ICCV 2021)

Overview

This repository contains the official implementation for A-SDF introduced in the following paper: A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation (ICCV 2021). The code is developed with the PyTorch framework (1.6.0) and Python 3.7.6. This repo includes training code and data generated from shape2motion.

A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation (ICCV 2021)
JitengMu, Weichao Qiu, Adam Kortylewski, Alan Yuille, Nuno Vasconcelos, Xiaolong Wang
ICCV 2021

The project page with more details is at https://jitengmu.github.io/A-SDF/.

Citation

If you find our code or method helpful, please use the following BibTex entry.

@article{mu2021asdf,
  author    = {Jiteng Mu and
               Weichao Qiu and
               Adam Kortylewski and
               Alan L. Yuille and
               Nuno Vasconcelos and
               Xiaolong Wang},
  title     = {{A-SDF:} Learning Disentangled Signed Distance Functions for Articulated
               Shape Representation},
  journal   = {arXiv preprint arXiv:2104.07645},
  year      = {2021},
}

Data preparation and layout

Please 1) download the dataset and put the data in the data/ directory, and 2) download the checkpoints and put each checkpoint in the corresponding examples/ directory, e.g. it should look like examples/laptop/laptop-asdf/Model_Parameters/1000.pth.

The dataset is structured as follows, where <dataset_name> can be, e.g., shape2motion, shape2motion-1-view, shape2motion-2-view, or rbo:

data/
    SdfSamples/
        <dataset_name>/
            <class_name>/
                <instance_name>.npz
    SurfaceSamples/
        <dataset_name>/
            <class_name>/
                <instance_name>.ply
    NormalizationParameters/
        <dataset_name>/
            <class_name>/
                <instance_name>.ply

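To sanity-check a downloaded SdfSamples file, a minimal sketch like the following can be used. It assumes the DeepSDF-style .npz layout with pos/neg arrays of (x, y, z, sdf) rows and a hypothetical instance path; the exact keys and paths in this repo may differ.

import numpy as np

# Hypothetical instance path; substitute a real file from data/SdfSamples/.
npz_path = "data/SdfSamples/shape2motion/laptop/0001.npz"

samples = np.load(npz_path)
# Assumed DeepSDF-style layout: "pos"/"neg" arrays of shape (N, 4),
# each row (x, y, z, sdf). Verify against the actual files.
pos, neg = samples["pos"], samples["neg"]
print("pos samples:", pos.shape, "neg samples:", neg.shape)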
Splits of train/test files are stored in a simple JSON format. For examples, see examples/splits/.
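For reference, a split file in the DeepSDF-style convention (which this code base builds on) maps a dataset name to a class name to a list of instance names. The sketch below writes a hypothetical split in that assumed structure:

import json

# Hypothetical split following the assumed DeepSDF convention:
# { "<dataset_name>": { "<class_name>": ["<instance_name>", ...] } }
split = {"shape2motion": {"laptop": ["0001", "0002"]}}

with open("examples/splits/my_laptop_train.json", "w") as f:
    json.dump(split, f, indent=2)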

How to Use A-SDF

We use the laptop class as an illustration. Feel free to change it to stapler/washing_machine/door/oven/eyeglasses/refrigerator to explore other categories.

(a) Train a model

To train a model, run

python train.py -e examples/laptop/laptop-asdf/
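Conceptually, the network takes a latent shape code, an articulation code (the joint angles), and a 3D query point, and predicts the signed distance at that point; disentangling shape from articulation is what lets the same shape be decoded at new angles later. The following is only a minimal PyTorch illustration of that input split, with made-up layer sizes and a single-angle articulation code, not the repo's actual model definition.

import torch
import torch.nn as nn

class ToySDFDecoder(nn.Module):
    """Illustrative only: maps (shape code, articulation code, xyz) -> SDF."""
    def __init__(self, shape_dim=256, art_dim=1, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(shape_dim + art_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, shape_code, art_code, xyz):
        # shape_code: (B, shape_dim), art_code: (B, art_dim), xyz: (B, 3)
        return self.net(torch.cat([shape_code, art_code, xyz], dim=-1))

decoder = ToySDFDecoder()
sdf = decoder(torch.randn(4, 256), torch.randn(4, 1), torch.randn(4, 3))
print(sdf.shape)  # (4, 1)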

(b) Reconstruction

To use a trained model to reconstruct explicit mesh representations of shapes from the test set, run the following script, with -m recon_testset_ttt for inference with test-time adaptation and -m recon_testset otherwise.

python test.py -e examples/laptop/laptop-asdf/ -c 1000 -m recon_testset_ttt
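The *_ttt modes perform test-time adaptation: at inference, the latent codes (and, in A-SDF's full variant, the decoder) are further optimized on the observed SDF samples of the test instance. The loop below is a minimal, hedged sketch of the simpler code-only optimization, using a random stand-in decoder and stand-in data; it is not the logic in test.py.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a trained decoder (placeholder, not the repo's network).
decoder = nn.Sequential(nn.Linear(256 + 1 + 3, 512), nn.ReLU(), nn.Linear(512, 1))
for p in decoder.parameters():
    p.requires_grad_(False)  # simpler variant: adapt codes only, keep weights fixed

# Observed SDF samples of one test instance (random stand-ins here).
xyz = torch.randn(1024, 3)
sdf_gt = torch.randn(1024, 1)

# Optimize a shape code and an articulation code to explain the observations.
shape_code = torch.zeros(1, 256, requires_grad=True)
art_code = torch.zeros(1, 1, requires_grad=True)
opt = torch.optim.Adam([shape_code, art_code], lr=1e-3)

for _ in range(200):
    opt.zero_grad()
    inp = torch.cat([shape_code.expand(xyz.shape[0], -1),
                     art_code.expand(xyz.shape[0], -1), xyz], dim=-1)
    loss = F.l1_loss(decoder(inp), sdf_gt)
    loss.backward()
    opt.step()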

To compute the chamfer distance, run:

python eval.py -e examples/laptop/laptop-asdf/ -c 1000 -m recon_testset_ttt
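eval.py reports the chamfer distance between reconstructed meshes and the ground-truth surface samples. As a reminder of the metric itself (not the repo's exact implementation or normalization), a brute-force version over two point sets looks like this:

import numpy as np

def chamfer_distance(a, b):
    """Symmetric chamfer distance between point sets a (N, 3) and b (M, 3).

    Brute-force O(N*M) version using squared distances; eval.py may use a
    different normalization or a KD-tree, so treat this only as the definition.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1) ** 2
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a, b = np.random.rand(500, 3), np.random.rand(400, 3)
print(chamfer_distance(a, b))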

(c) Generation

To use a trained model to generate explicit meshes of unseen articulations (specified in asdf/asdf_reconstruct.py) for shapes from the test set, run the following scripts. Note that the -m mode should be consistent with the one used for reconstruction: -m generation_ttt for inference with test-time adaptation and -m generation otherwise.

python test.py -e examples/laptop/laptop-asdf/ -c 1000 -m generation_ttt
python eval.py -e examples/laptop/laptop-asdf/ -c 1000 -m generation_ttt

(d) Interpolation

To use a trained model to interpolate explicit meshes of unseen articulations (specified in asdf/asdf_reconstruct.py) for shapes from the test set, run:

python test.py -e examples/laptop/laptop-asdf/ -c 1000 -m inter_testset
python eval.py -e examples/laptop/laptop-asdf/ -c 1000 -m inter_testset
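Since the articulation code is just the joint angle(s), interpolating between two articulations amounts to sweeping the angle while keeping the shape code fixed and decoding an SDF (and then a mesh) at each step. A rough sketch with a stand-in decoder and made-up angles, not the repo's reconstruction pipeline:

import torch
import torch.nn as nn

# Stand-ins for a trained decoder and an inferred shape code.
decoder = nn.Sequential(nn.Linear(256 + 1 + 3, 512), nn.ReLU(), nn.Linear(512, 1))
shape_code = torch.zeros(1, 256)

xyz = torch.rand(4096, 3) * 2 - 1  # query points in [-1, 1]^3
angle_a, angle_b = 0.0, 90.0       # hypothetical articulation angles

for t in torch.linspace(0, 1, steps=5):
    angle = (1 - t) * angle_a + t * angle_b
    art_code = torch.full((xyz.shape[0], 1), float(angle))
    inp = torch.cat([shape_code.expand(xyz.shape[0], -1), art_code, xyz], dim=-1)
    sdf = decoder(inp)  # a mesh would then be extracted, e.g. with marching cubes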

(e) Partial Point Cloud

To use a trained model to reconstruct and generate explicit meshes from partial point clouds: (1) download the partial point cloud dataset laptop-1/2-view-0.025.zip from dataset, (2) put the laptop checkpoint trained on shape2motion in examples/laptop/laptop-asdf-1/2-view/, and (3) run the following scripts, where --dataset shape2motion-1-view is for partial point clouds generated from a single depth image and --dataset shape2motion-2-view is for point clouds generated from two depth images of different viewpoints. As in the previous experiments, -m can be one of recon_testset/recon_testset_ttt/generation/generation_ttt.

python test.py -e examples/laptop/laptop-asdf-1-view/ -c 1000 -m recon_testset_ttt/generation_ttt --dataset shape2motion-1-view
python eval.py -e examples/laptop/laptop-asdf-1-view/ -c 1000 -m recon_testset_ttt/generation_ttt
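The partial point clouds used here are generated from one or two depth images of the object. For intuition, the standard pinhole back-projection from a depth map to a point cloud is sketched below with made-up camera intrinsics; this is not the repo's dataset-generation code.

import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W), in camera units, to an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth

depth = np.random.rand(240, 320)  # hypothetical depth image
points = depth_to_points(depth, fx=300.0, fy=300.0, cx=160.0, cy=120.0)
print(points.shape)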

(f) RBO dataset

To test a model on the RBO dataset: (1) download the generated partial point clouds of the laptop class from the RBO dataset (rbo_laptop_release_test.zip) from dataset, (2) put the laptop checkpoint trained on shape2motion in examples/laptop/laptop-asdf-rbo/, and (3) run the following:

python test.py -e examples/laptop/laptop-asdf-rbo/ -m recon_testset_ttt/generation_ttt -c 1000 --dataset rbo
python eval_rbo.py -e examples/laptop/laptop-asdf-rbo/ -m recon_testset_ttt/generation_ttt -c 1000

Dataset generation details are included in dataset_generation/rbo.

Data Generation

Stay tuned. We follow (1) ANSCH to create URDFs for the shape2motion dataset, (2) Manifold to create watertight meshes, and (3) a modified mesh_to_sdf to generate sampled points and SDF values.
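For reference, the upstream mesh_to_sdf package provides sample_sdf_near_surface for sampling query points with SDF values from a watertight mesh. The snippet below uses the unmodified library on a hypothetical mesh path and an assumed DeepSDF-style output layout, so it only approximates what the modified pipeline produces.

import numpy as np
import trimesh
from mesh_to_sdf import sample_sdf_near_surface

# Hypothetical watertight mesh produced by Manifold.
mesh = trimesh.load("watertight/laptop_0001.obj")

# Sample query points near the surface along with their SDF values.
points, sdf = sample_sdf_near_surface(mesh, number_of_points=250000)

# Store in an assumed DeepSDF-style .npz with positive / negative SDF samples;
# the repo's modified pipeline may use a different layout.
pos = np.concatenate([points[sdf > 0], sdf[sdf > 0][:, None]], axis=1)
neg = np.concatenate([points[sdf <= 0], sdf[sdf <= 0][:, None]], axis=1)
np.savez("data/SdfSamples/shape2motion/laptop/0001.npz", pos=pos, neg=neg)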

Acknowledgement

The code is heavily based on Jeong Joon Park's DeepSDF from Facebook.
