
Mixture of Volumetric Primitives -- Training and Evaluation

This repository contains code to train and render Mixture of Volumetric Primitives (MVP) models.

If you use Mixture of Volumetric Primitives in your research, please cite:
Mixture of Volumetric Primitives for Efficient Neural Rendering
Stephen Lombardi, Tomas Simon, Gabriel Schwartz, Michael Zollhoefer, Yaser Sheikh, Jason Saragih
ACM Transactions on Graphics (SIGGRAPH 2021) 40, 4. Article 59

@article{Lombardi21,
  author = {Lombardi, Stephen and Simon, Tomas and Schwartz, Gabriel and Zollhoefer, Michael and Sheikh, Yaser and Saragih, Jason},
  title = {Mixture of Volumetric Primitives for Efficient Neural Rendering},
  year = {2021},
  issue_date = {August 2021},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {40},
  number = {4},
  issn = {0730-0301},
  url = {https://doi.org/10.1145/3450626.3459863},
  doi = {10.1145/3450626.3459863},
  journal = {ACM Trans. Graph.},
  month = {jul},
  articleno = {59},
  numpages = {13},
  keywords = {neural rendering}
}

Requirements

  • Python (3.8+)
    • PyTorch
    • NumPy
    • SciPy
    • Pillow
    • OpenCV
  • ffmpeg (in $PATH to render videos)
  • CUDA 10 or higher
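
The Python packages can be installed with pip; the names below are the standard PyPI package names (choose a torch build that matches your CUDA toolkit):

pip install torch numpy scipy pillow opencv-python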

Building

The repository contains two CUDA PyTorch extensions. To build, cd to each directory and use make:

cd extensions/mvpraymarcher
make
cd -
cd extensions/utils
make
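
If the raymarcher later fails with "CUDA error: no kernel image is available for execution on the device" (see the issues below), the extensions were most likely compiled for a GPU architecture other than yours: the build logs quoted in the comments show nvcc being invoked with a fixed -arch=sm_70 flag. A workaround reported by users is to adjust the arch flag in each extension's setup.py to match your card and rebuild. You can check your GPU's compute capability with PyTorch, for example:

python -c "import torch; print(torch.cuda.get_device_capability(0))"   # (8, 6) corresponds to sm_86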

How to Use

There are two main scripts in the root directory: train.py and render.py. Both take a configuration file for the experiment that defines the dataset and the model options (e.g., the type of decoder used).

Download the latest release on GitHub to get the experiments directory.

To train the model:

python train.py experiments/dryice1/experiment1/config.py

To render a video of a trained model:

python render.py experiments/dryice1/experiment1/config.py

See ARCHITECTURE.md for more details.

Training Data

See the latest GitHub release for data.

Using your own Data

Implement your own Dataset class to return images and camera parameters. An example is given in data.multiviewvideo. A dataset class will need to return camera pose parameters, image data, and tracked mesh data.
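
As a rough sketch only (the field names below are illustrative, not the authoritative interface; mirror the dictionary keys that data.multiviewvideo actually returns and that your model configuration expects), a custom dataset might look like this:

import numpy as np
from PIL import Image
import torch.utils.data

class MultiViewDataset(torch.utils.data.Dataset):
    # Illustrative sketch of a custom dataset -- every key below is hypothetical
    # and should be checked against what data.multiviewvideo produces.
    def __init__(self, samples):
        # samples: list of dicts with image/mesh paths and per-camera calibration
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        s = self.samples[idx]
        image = np.asarray(Image.open(s["imagepath"]), dtype=np.float32)
        return {
            "camrot": s["camrot"].astype(np.float32),            # 3x3 camera rotation
            "campos": s["campos"].astype(np.float32),            # camera center in world coordinates
            "focal": s["focal"].astype(np.float32),              # (fx, fy) focal lengths in pixels
            "princpt": s["princpt"].astype(np.float32),          # (cx, cy) principal point in pixels
            "image": image.transpose(2, 0, 1),                   # image as C x H x W
            "verts": np.load(s["meshpath"]).astype(np.float32),  # tracked mesh vertices for this frame
        }

The reference implementation also aligns all cameras and tracked meshes to a canonical frame (the basetransf discussed in the comments below), which you may need to reproduce for your own capture setup.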

How to Extend

See ARCHITECTURE.md

License

See the LICENSE file for details.

Comments
  • ModuleNotFoundError: No module named 'utilslib'

    Hi, thanks for sharing this awesome work, have a nice day :-) When I try to render the demo with the experiment data, I get this error. By the way, I have already compiled the extension files. (screenshot attached)

    opened by Myzhencai 9
  • Build succeeds, but cannot run

    Traceback (most recent call last):
      File "render.py", line 118, in <module>
        output, _ = ae(
      File "/home/an/anaconda3/envs/py38-t19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/an/an_project/mvp/models/volumetric.py", line 286, in forward
        rayrgba, rmlosses = self.raymarcher(raypos, raydir, tminmax,
      File "/home/an/anaconda3/envs/py38-t19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/an/an_project/mvp/models/raymarchers/mvpraymarcher.py", line 32, in forward
        rayrgba = mvpraymarch(raypos, raydir, dt, tminmax,
      File "/home/an/an_project/mvp/extensions/mvpraymarch/mvpraymarch.py", line 273, in mvpraymarch
        out = MVPRaymarch.apply(raypos, raydir, stepsize, tminmax,
      File "/home/an/an_project/mvp/extensions/mvpraymarch/mvpraymarch.py", line 119, in forward
        sortedobjid, nodechildren, nodeaabb = build_accel(primtransfin,
      File "/home/an/an_project/mvp/extensions/mvpraymarch/mvpraymarch.py", line 44, in build_accel
        sortedobjid = (torch.arange(N*K, dtype=torch.int32, device=dev) % K).view(N, K)
    RuntimeError: CUDA error: no kernel image is available for execution on the device
    
    

    Environment: Python 3.8.13, PyTorch 1.13.0a0+git4503c45, CUDA 11.3.0, GCC 8.4.0


    Hi, is anyone else having similar issues? PyTorch itself works normally.

    opened by AN-ZE 8
  • What is the basetransf matrix used for?

    Hi, I have a small question about applying my own data with this code. What is the self.basetransf matrix in multiviewvideo.py used for? I see that this 3x4 matrix is applied to all camera poses and to all the frametransf values, but what is its purpose? :)

    https://github.com/facebookresearch/mvp/blob/d758f53662e79d7fec885f4dd1a3ee457f7c4b00/data/multiviewvideo.py#L410-L415

    https://github.com/facebookresearch/mvp/blob/d758f53662e79d7fec885f4dd1a3ee457f7c4b00/data/multiviewvideo.py#L385-L387

    Besides, applying this basetransf seems necessary: when I change it to an identity matrix, training does not converge. So how should I obtain a basetransf for my own data?

    Your answer will help me a lot! Thank you!

    opened by Qingcsai 5
  • Background image used in Lombardi's MVP cannot be found in the multiface dataset

    The multiface dataset was used in "Mixture of Volumetric Primitives for Efficient Neural Rendering". The "mvp" config file needs the path to the background image, but I can't find the background image in the multiface dataset. The line in the config file is: bgpath = os.path.join(imagepathbase, 'bg', 'image', 'cam{cam}', 'image0000.png').

    opened by shuishiwojiade 2
  • Cannot build the CUDA PyTorch extensions

    May I know which PyTorch and CUDA versions were used to build the two CUDA PyTorch extensions? I was using PyTorch 1.7.1, CUDA 10.1, and GCC 5.5.0, but when I tried to build the extensions I got the error KeyError: 'cxx' (raised at torch/utils/cpp_extension.py, line 445, in unix_wrap_ninja_compile, at post_cflags = extra_postargs['cxx']). Any suggestions on how to solve the problem?

    opened by ZhaoyangLyu 2
  • Bug Report: mvp/extensions/mvpraymarch/bvh.cu error: too many initializer values

    Hi, I hit a bug after cd'ing into extensions/mvpraymarch and running the "make" command. Could someone kindly help me solve the problem?

    I use a remote server with: gcc (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0, g++ (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0, GNU Make 4.1, Ubuntu 16.04.7 LTS, Python 3.9.12, nvcc 10.2.

    The bug comes from a pointer assignment, as shown in the screenshot.

    Below is the output after running the "make" command in the extensions/mvpraymarch directory:

    python setup.py build_ext --inplace CUDA_HOME: /data/hzhangcc/cuda-10.2 CUDNN_HOME: None running build_ext building 'mvpraymarchlib' extension Emitting ninja build file /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) [1/2] /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/bvh.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 FAILED: /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/bvh.o /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/bvh.o -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu(214): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu(272): error: too many initializer values

    2 errors detected in the compilation of "/tmp/tmpxft_00003cf1_00000000-6_bvh.cpp1.ii". [2/2] /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/mvpraymarch_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 FAILED: /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/mvpraymarch_kernel.o /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/mvpraymarch_kernel.o -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(80): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(87): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(93): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(99): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(167): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(174): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(180): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(186): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_subset_kernel.h(30): warning: variable "validthread" was declared but never referenced

    8 errors detected in the compilation of "/tmp/tmpxft_00003cf2_00000000-6_mvpraymarch_kernel.cpp1.ii". ninja: build stopped: subcommand failed. Traceback (most recent call last): File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1814, in _run_ninja_build subprocess.run( File "/data/hzhangcc/anaconda3/lib/python3.9/subprocess.py", line 528, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last): File "/data/hzhangcc/mvp/extensions/mvpraymarch/setup.py", line 13, in setup( File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/init.py", line 87, in setup return distutils.core.setup(**attrs) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 148, in setup return run_commands(dist) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 163, in run_commands dist.run_commands() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands self.run_command(cmd) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/dist.py", line 1214, in run_command super().run_command(command) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 79, in run _build_ext.run(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run _build_ext.build_ext.run(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 339, in run self.build_extensions() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 771, in build_extensions build_ext.build_extensions(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions _build_ext.build_ext.build_extensions(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 448, in build_extensions self._build_extensions_serial() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 202, in build_extension _build_ext.build_extension(self, ext) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 528, in build_extension objects = self.compiler.compile(sources, File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 592, in unix_wrap_ninja_compile _write_ninja_file_and_compile_objects( File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1493, in _write_ninja_file_and_compile_objects _run_ninja_build( File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1830, in _run_ninja_build raise RuntimeError(message) from e RuntimeError: Error compiling objects for extension makefile:2: recipe for target 'all' failed make: *** [all] Error 1

    opened by Wushanfangniuwa 1
  • Training data for MVP

    Hi, thank you for sharing your wonderful work! I was able to train on the Neural Volumes data. I was wondering if the training data for MVP will be released in the near future. Thank you!

    opened by weilunhuang-jhu 1
  • Missing example files?

    Hi, I noticed some discrepancies between the latest release and README.md / mvp/ARCHITECTURE.md. Are some example files missing, such as 'experiments/dryice1/experiment1/config.py' mentioned in README.md or 'experiments/example/config.py' mentioned in mvp/ARCHITECTURE.md? I followed the steps in README.md but cannot train or render the model.

    opened by faneggs 1
  • Training time

    Awesome work! I have been a huge fan of your work since Neural Volumes, and this project also looks very interesting!

    How long does training take?

    Thank you!

    opened by yeong5366 1
  • render.py not exporting the images for "render_rotate.mp4"

    Hi, thank you for sharing your great work! I am trying to run the code from both the "neuralvolumes" and "mvp" repositories with the experiment data you provided for each. "neuralvolumes" worked well, thank you!! I was able to run train.py and render.py from the "NeuralVolumes" code of your previous work and got the "prog_XXXXXX.jpg" image sequence and the "render_rotate.mp4" movie file. Then I tried "mvp" in the same environment; train.py succeeded and produced the "prog_XXXXXX.jpg" image sequence, along with the "log.txt", "optimparams.pt", and "aeparams.pt" files. But when I run render.py, the image sequence files that should make up "render_rotate.mp4" are not exported to the /tmp/xxxxxxxxxx/ directory.

    The error message is: [image2 @ 0x56400177b780] Could find no file with path '/tmp/5613023327/%06d.png' and index in the range 0-4. /tmp/5613023327/%06d.png: No such file or directory

    Do you have any information about this error?

    My environment is Ubuntu 20.04 LTS, Python 3.8.13, GCC 9.4.0, PyTorch 1.10.1+cu113, and an A6000 GPU. The setup.py files in the mvpraymarch and utils directories have been adjusted for my CUDA arch.

    opened by ppponpon 3
  • Understanding the opacity fade factor

    Thanks for sharing this awesome work. I have read the paper and have some questions about the opacity fade factor.

    1. Due to the fade factor, the opacity is attenuated toward the volume edges. Does this cause the primitive scales to grow larger in order to cover the scene (perhaps the opposite of the volume minimization prior)?
    2. Are there any experiments validating the choice of the parameters $\alpha$ and $\beta$?
    3. Does the fade factor pull the centers of the primitives closer to high-occupancy points?
    4. What is the relationship between StyleGAN2 and the fade factor?

    Also, do you have any suggestions for understanding the CUDA ray marching code? I have not done any CUDA parallel programming before.

    opened by LSQsjtu 0
  • How do you reconstruct the mesh from multi-view images?

    Hi, I noticed that in your dataset, the multi-view images of different frames of the same expression from the same identity use fixed camera parameters when reconstructing the mesh. When I perform mesh reconstruction from multi-view images myself, the camera parameters recovered from different frames are all different. I was wondering how you fixed the camera parameters for mesh reconstruction.

    opened by LiTian0215 1
  • Cannot replicate experiments/neuralvolumes results: completely blurry output and vanishing kldiv

    Hi, could someone kindly help me? I cannot replicate the output results in experiments/neuralvolumes. I have successfully built the extensions and downloaded the experiments.zip from the latest release, but the output images are completely blurry. I found that my kldiv term quickly vanishes to 0, while in the example log.txt provided in experiments.zip the kldiv terms remain larger than 0.3.

    Below are my outputs after 79579, 92139, and 106682 iterations (attached images: prog_079579, prog_092139, prog_106682).

    I have attached my log.txt file, which contains my configuration and training statistics: log.txt

    I'm not sure where the problem is. Incorrect camera poses? Or does the code in this repository have a bug? I really hope someone can help. Thanks! :)

    opened by Wushanfangniuwa 2
  • How is the tracked mesh built?

    Hello, I have a new problem: I am trying to use video data of my own head to reconstruct a mesh and texture to serve as the input and supervision for the network. I am using COLMAP to reconstruct a point cloud and retarget it to a public template model with 7306 vertices, but the results are very poor. Is there a recommended method for building the tracked mesh?

    opened by Luh1124 6
Releases (v0.1)
  • v0.1 (Jan 6, 2022)

    This is an initial release of the Mixture of Volumetric Primitives code. This release includes code for training, rendering, and evaluation. Bundled with this release is a pretrained MVP model. Note that training data for MVP is not included with this release, but will be released in the future. To give an example of how to use the training code, training data and a training configuration file for Neural Volumes is included.

    Source code (tar.gz)
    Source code (zip)
    experiments.zip (1069.70 MB)
Owner
Meta Research