NMC

PyTorch implementation for paper Neural Marching Cubes, Zhiqin Chen, Hao Zhang.

Paper | Supplementary Material (to be updated)

Citation

If you find our work useful in your research, please consider citing:

@article{chen2021nmc,
  title={Neural Marching Cubes},
  author={Zhiqin Chen and Hao Zhang},
  journal={arXiv preprint arXiv:2106.11272},
  year={2021}
}

Notice

We have implemented Neural Dual Contouring (NDC). NDC is based on Dual Contouring and thus much easier to implement than NMC. It produces fewer triangles and vertices (1/8 of NMC, 1/4 of NMC-lite, approximately the same as MC33), with better triangle quality. It also runs faster than NMC because it predicts significantly fewer values per cube (1 boolean + 3 floats for NDC vs. 5 booleans + 51 floats for NMC), so the network size can be significantly reduced. However, NDC cannot reconstruct some cube cases and may introduce non-manifold edges.

Requirements

  • Python 3 with numpy, h5py, scipy and Cython
  • PyTorch 1.8 (other versions may also work)

Build the Cython module:

python setup.py build_ext --inplace
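
If the build succeeds, the compiled extension appears next to setup.py. A minimal sanity check (not part of the repo; the extension file name depends on your platform and Python version):

import glob
# list compiled Cython extensions produced by build_ext --inplace
print(glob.glob("*.so") + glob.glob("*.pyd"))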

Datasets and pre-trained weights

For data preparation, please see data_preprocessing.

We provide the ready-to-use datasets here.

Backup links:

We also provide the pre-trained network weights.

Backup links:

Note that the weights are divided into six folders:

Folder                        | Method   | Input
1_NMC_sdf_unit_scale          | NMC      | SDF grid, each grid cell must have unit length
2_NMC_lite_sdf_unit_scale     | NMC-lite | SDF grid, each grid cell must have unit length
3_NMC_voxel                   | NMC      | Voxel grid, 1 = occupied, 0 = otherwise
4_NMC_lite_voxel              | NMC-lite | Voxel grid, 1 = occupied, 0 = otherwise
5_NMC_sdf_scale_0.001-2       | NMC      | SDF grid, each grid cell may have length from 0.001 to 2.0
6_NMC_lite_sdf_scale_0.001-2  | NMC-lite | SDF grid, each grid cell may have length from 0.001 to 2.0
The pre-trained weights referred to as "NMC" in this repo correspond to 5_NMC_sdf_scale_0.001-2. A minimal sketch of the two input-grid formats is shown below.
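
To illustrate the two input formats, here is a minimal, hypothetical numpy sketch (not part of the repo, and not a replacement for data_preprocessing) that builds an SDF grid and a voxel grid for a sphere:

import numpy as np

# N^3 grid; cell length = 1, i.e. the "unit scale" setting
N = 64
x, y, z = np.meshgrid(np.arange(N), np.arange(N), np.arange(N), indexing="ij")
center, radius = (N - 1) / 2.0, N / 4.0

# SDF grid: signed distance to the sphere in grid-cell units, negative inside
sdf_grid = (np.sqrt((x - center) ** 2 + (y - center) ** 2 + (z - center) ** 2) - radius).astype(np.float32)

# voxel grid: 1 = occupied (inside the surface), 0 = otherwise
voxel_grid = (sdf_grid < 0).astype(np.uint8)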

Training and Testing

Before training, please replace LUT_tess.npz (the look-up table for cube tessellations) in the main directory with the version corresponding to your training target (either NMC or NMC-lite). Both versions of LUT_tess.npz can be found at tessellation.
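
Since LUT_tess.npz is a standard numpy archive, you can quickly inspect which arrays it contains; this is purely an optional check, not a required step:

import numpy as np

# print the names, shapes and dtypes of the arrays stored in the look-up table
lut = np.load("LUT_tess.npz")
for name in lut.files:
    print(name, lut[name].shape, lut[name].dtype)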

To train/test NMC with SDF input:

python main.py --train_bool --epoch 400 --data_dir groundtruth/gt_NMC --input_type sdf
python main.py --train_float --epoch 400 --data_dir groundtruth/gt_NMC --input_type sdf
python main.py --test_bool_float --data_dir groundtruth/gt_NMC --input_type sdf

To train/test NMC-lite with SDF input:

python main.py --train_bool --epoch 400 --data_dir groundtruth/gt_simplified --input_type sdf
python main.py --train_float --epoch 400 --data_dir groundtruth/gt_simplified --input_type sdf
python main.py --test_bool_float --data_dir groundtruth/gt_simplified --input_type sdf

To train/test NMC with voxel input:

python main.py --train_bool --epoch 200 --data_dir groundtruth/gt_NMC --input_type voxel
python main.py --train_float --epoch 100 --data_dir groundtruth/gt_NMC --input_type voxel
python main.py --test_bool_float --data_dir groundtruth/gt_NMC --input_type voxel

To train/test NMC-lite with voxel input:

python main.py --train_bool --epoch 200 --data_dir groundtruth/gt_simplified --input_type voxel
python main.py --train_float --epoch 100 --data_dir groundtruth/gt_simplified --input_type voxel
python main.py --test_bool_float --data_dir groundtruth/gt_simplified --input_type voxel

To evaluate Chamfer Distance, Normal Consistency, F-score, Edge Chamfer Distance, and Edge F-score, you need to have the ground-truth normalized obj files ready in a folder named objs. See data_preprocessing for how to prepare the obj files. Then you can run:

python eval_cd_nc_f1_ecd_ef1.py
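
For reference, the symmetric Chamfer Distance between two sampled point sets can be computed with scipy's KD-tree. This is a simplified sketch of the metric itself, not the exact implementation used in eval_cd_nc_f1_ecd_ef1.py:

import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a, points_b):
    # mean squared distance from each point to its nearest neighbor in the other set, in both directions
    d_ab, _ = cKDTree(points_b).query(points_a)
    d_ba, _ = cKDTree(points_a).query(points_b)
    return np.mean(d_ab ** 2) + np.mean(d_ba ** 2)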

To count the number of triangles and vertices, run:

python eval_v_t_count.py
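
For a single mesh, a rough equivalent is to count the vertex and face lines of a Wavefront obj file (assuming the faces are already triangles); this sketch is only illustrative and the script's own counting may differ:

def count_obj_vertices_triangles(path):
    # count "v " (vertex) and "f " (face) lines in an obj file
    v_count = t_count = 0
    with open(path) as fin:
        for line in fin:
            if line.startswith("v "):
                v_count += 1
            elif line.startswith("f "):
                t_count += 1
    return v_count, t_count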

If you want to test on your own dataset, please refer to data_preprocessing for how to convert obj files into SDF grids and voxel grids. If your data are not meshes (for example, if your data are already voxel grids), you can modify the code in utils.py to read your own data format; see the function read_data_input_only in utils.py for an example.
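
As a rough illustration, a custom reader only needs to load your grid into a numpy array with the conventions described above; the exact signature expected by the training/testing code may differ, so check read_data_input_only in utils.py before adapting this hypothetical example:

import numpy as np

def read_my_voxel_grid(path):
    # hypothetical reader for a pre-voxelized grid stored as a .npy file,
    # cast to the 1 = occupied / 0 = otherwise convention
    grid = np.load(path)
    return (grid > 0).astype(np.uint8)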
