Mixture of Volumetric Primitives -- Training and Evaluation

This repository contains code to train and render Mixture of Volumetric Primitives (MVP) models.

If you use Mixture of Volumetric Primitives in your research, please cite:
Mixture of Volumetric Primitives for Efficient Neural Rendering
Stephen Lombardi, Tomas Simon, Gabriel Schwartz, Michael Zollhoefer, Yaser Sheikh, Jason Saragih
ACM Transactions on Graphics (SIGGRAPH 2021) 40, 4. Article 59

@article{Lombardi21,
author = {Lombardi, Stephen and Simon, Tomas and Schwartz, Gabriel and Zollhoefer, Michael and Sheikh, Yaser and Saragih, Jason},
title = {Mixture of Volumetric Primitives for Efficient Neural Rendering},
year = {2021},
issue_date = {August 2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {40},
number = {4},
issn = {0730-0301},
url = {https://doi.org/10.1145/3450626.3459863},
doi = {10.1145/3450626.3459863},
journal = {ACM Trans. Graph.},
month = {jul},
articleno = {59},
numpages = {13},
keywords = {neural rendering}
}

Requirements

  • Python (3.8+)
    • PyTorch
    • NumPy
    • SciPy
    • Pillow
    • OpenCV
  • ffmpeg (in $PATH to render videos)
  • CUDA 10 or higher

Building

The repository contains two CUDA PyTorch extensions. To build, cd to each directory and use make:

cd extensions/mvpraymarch
make
cd -
cd extensions/utils
make
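
If the build succeeds, each directory should contain an importable compiled module. The snippet below is a quick sanity check rather than part of the repository; the module names mvpraymarchlib and utilslib are taken from the build logs and issue reports further down this page, and the paths assume the default in-place build run from the repository root:

# Sanity check that the in-place CUDA extensions import correctly (run from the repo root).
# Module names (mvpraymarchlib, utilslib) are taken from the build logs and issues below.
import sys
import torch  # load libtorch first so the extensions can resolve its symbols

sys.path.insert(0, "extensions/mvpraymarch")
sys.path.insert(0, "extensions/utils")

import mvpraymarchlib  # raymarching kernels
import utilslib        # miscellaneous CUDA utilities

print("CUDA extensions imported OK")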

How to Use

There are two main scripts in the root directory: train.py and render.py. Both scripts take a configuration file for the experiment that defines the dataset and the model options (e.g., the type of decoder used).

Download the latest release on GitHub to get the experiments directory.

To train the model:

python train.py experiments/dryice1/experiment1/config.py

To render a video of a trained model:

python render.py experiments/dryice1/experiment1/config.py
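
To give a rough idea of what a configuration file contains, here is a purely hypothetical skeleton. The class and method names below are illustrative assumptions, not the repository's actual interface; the config files bundled with the experiments release, together with ARCHITECTURE.md, are the authoritative reference:

# Hypothetical skeleton of an experiment config.py -- all names here are
# illustrative assumptions, not the repository's actual interface.
import data.multiviewvideo as datamodel   # example dataset module mentioned in this README
import models.volumetric as aemodel       # volumetric model module seen in the tracebacks below

class Train:
    """Options consumed by train.py (hypothetical)."""
    batchsize = 8
    maxiter = 500000

    def get_dataset(self):
        # Return a dataset yielding images, camera parameters, and tracked meshes.
        raise NotImplementedError("fill in the paths for your capture")

    def get_autoencoder(self, dataset):
        # Return the MVP model; decoder and raymarcher options would be chosen here.
        raise NotImplementedError("construct the model for your experiment")

class Render:
    """Options consumed by render.py (hypothetical), e.g. which camera path to render."""
    def get_dataset(self):
        raise NotImplementedError("typically a restricted view of the training dataset")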

See ARCHITECTURE.md for more details.

Training Data

See the latest GitHub release for data.

Using your own Data

Implement your own Dataset class to return images and camera parameters. An example is given in data.multiviewvideo. A dataset class will need to return camera pose parameters, image data, and tracked mesh data.
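
As a starting point, a custom dataset might look roughly like the sketch below. This is only a guess at the general shape: the class name, dictionary keys, and shapes are hypothetical placeholders, and data/multiviewvideo.py defines the exact contract the model expects:

# Rough sketch of a custom dataset following the standard PyTorch Dataset protocol.
# All keys, shapes, and the vertex count below are illustrative placeholders;
# match them to data/multiviewvideo.py for the real contract.
import numpy as np
import torch.utils.data

NUM_VERTICES = 7306  # placeholder vertex count; use your tracked mesh topology

class MyCaptureDataset(torch.utils.data.Dataset):
    def __init__(self, framecamlist, height=1024, width=1024):
        # framecamlist: list of (frame index, camera name) pairs to sample
        self.framecamlist = framecamlist
        self.height, self.width = height, width

    def __len__(self):
        return len(self.framecamlist)

    def __getitem__(self, idx):
        frame, cam = self.framecamlist[idx]
        # In a real dataset these values would be loaded from disk for (frame, cam).
        return {
            # camera pose parameters (placeholder names)
            "campos": np.zeros(3, dtype=np.float32),                # camera position
            "camrot": np.eye(3, dtype=np.float32),                  # camera rotation
            "focal": np.array([1000.0, 1000.0], dtype=np.float32),  # focal length (px)
            "princpt": np.array([self.width / 2, self.height / 2], dtype=np.float32),
            # image data
            "image": np.zeros((3, self.height, self.width), dtype=np.float32),
            # tracked mesh data for this frame
            "verts": np.zeros((NUM_VERTICES, 3), dtype=np.float32),
        }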

How to Extend

See ARCHITECTURE.md.

License

See the LICENSE file for details.

Comments
  • ModuleNotFoundError: No module named 'utilslib'

    Hi, thanks for sharing this awesome work, have a nice day :-) When I just want to render the demo with the experiment data, I get this error. By the way, I have already compiled the extension files. (Screenshot attached: 2022-07-06 23-31-21)

    opened by Myzhencai 9
  • build success, but cannot run

    Traceback (most recent call last):
      File "render.py", line 118, in <module>
        output, _ = ae(
      File "/home/an/anaconda3/envs/py38-t19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/an/an_project/mvp/models/volumetric.py", line 286, in forward
        rayrgba, rmlosses = self.raymarcher(raypos, raydir, tminmax,
      File "/home/an/anaconda3/envs/py38-t19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/an/an_project/mvp/models/raymarchers/mvpraymarcher.py", line 32, in forward
        rayrgba = mvpraymarch(raypos, raydir, dt, tminmax,
      File "/home/an/an_project/mvp/extensions/mvpraymarch/mvpraymarch.py", line 273, in mvpraymarch
        out = MVPRaymarch.apply(raypos, raydir, stepsize, tminmax,
      File "/home/an/an_project/mvp/extensions/mvpraymarch/mvpraymarch.py", line 119, in forward
        sortedobjid, nodechildren, nodeaabb = build_accel(primtransfin,
      File "/home/an/an_project/mvp/extensions/mvpraymarch/mvpraymarch.py", line 44, in build_accel
        sortedobjid = (torch.arange(N*K, dtype=torch.int32, device=dev) % K).view(N, K)
    RuntimeError: CUDA error: no kernel image is available for execution on the device
    
    

    Environment: Python 3.8.13, PyTorch 1.13.0a0+git4503c45, CUDA 11.3.0, GCC 8.4.0


    Hi, is anyone having similar issues? PyTorch itself is working normally.

    opened by AN-ZE 8
  • What's basetransf matrix used for?

    Hi, I have a small question about applying my own data with this code. What is the self.basetransf matrix in multiviewvideo.py used for? I see that this 3x4 matrix is applied to all camera poses and all the frametransf values, but what is its purpose? :)

    https://github.com/facebookresearch/mvp/blob/d758f53662e79d7fec885f4dd1a3ee457f7c4b00/data/multiviewvideo.py#L410-L415

    https://github.com/facebookresearch/mvp/blob/d758f53662e79d7fec885f4dd1a3ee457f7c4b00/data/multiviewvideo.py#L385-L387

    Besides, I find it necessary to apply this basetransf, because when I change it to an identity matrix, training does not converge. So how do I obtain the basetransf for my own data?

    Your answer will help me a lot! Thank you!

    opened by Qingcsai 5
  • Background image used in Lombardi's MVP not found in multiface dataset

    The Multiface dataset was used in "Mixture of Volumetric Primitives for Efficient Neural Rendering". The "mvp" config file needs the path to the background image, but I can't find the background image in the Multiface dataset. The code in the config file is: bgpath = os.path.join(imagepathbase, 'bg', 'image', 'cam{cam}', 'image0000.png').

    opened by shuishiwojiade 2
  • Cannot build the CUDA PyTorch extensions

    May I know which PyTorch and CUDA versions were used to build the two CUDA PyTorch extensions? I was using PyTorch 1.7.1, CUDA 10.1, and GCC 5.5.0, but I got the error "torch/utils/cpp_extension.py", line 445, in unix_wrap_ninja_compile, post_cflags = extra_postargs['cxx'], KeyError: 'cxx' when I tried to build them. Any suggestions on how to solve the problem?

    opened by ZhaoyangLyu 2
  • Bug Report: mvp/extensions/mvpraymarch/bvh.cu error: too many initializer values

    Hi, I hit a bug after cd'ing to extensions/mvpraymarch and running the "make" command. Could someone kindly help me solve this problem?

    I use a remote server with: gcc (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0, g++ (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0, GNU Make 4.1, Ubuntu 16.04.7 LTS, Python 3.9.12, nvcc 10.2.

    The bug comes from a pointer assignment, as shown in the screenshot.

    Below is the output after running the "make" command in the extensions/mvpraymarch directory:

    python setup.py build_ext --inplace CUDA_HOME: /data/hzhangcc/cuda-10.2 CUDNN_HOME: None running build_ext building 'mvpraymarchlib' extension Emitting ninja build file /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) [1/2] /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/bvh.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 FAILED: /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/bvh.o /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/bvh.o -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu(214): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu(272): error: too many initializer values

    2 errors detected in the compilation of "/tmp/tmpxft_00003cf1_00000000-6_bvh.cpp1.ii". [2/2] /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/mvpraymarch_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 FAILED: /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/mvpraymarch_kernel.o /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/mvpraymarch_kernel.o -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(80): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(87): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(93): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(99): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(167): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(174): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(180): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(186): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_subset_kernel.h(30): warning: variable "validthread" was declared but never referenced

    8 errors detected in the compilation of "/tmp/tmpxft_00003cf2_00000000-6_mvpraymarch_kernel.cpp1.ii". ninja: build stopped: subcommand failed. Traceback (most recent call last): File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1814, in _run_ninja_build subprocess.run( File "/data/hzhangcc/anaconda3/lib/python3.9/subprocess.py", line 528, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last): File "/data/hzhangcc/mvp/extensions/mvpraymarch/setup.py", line 13, in setup( File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/init.py", line 87, in setup return distutils.core.setup(**attrs) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 148, in setup return run_commands(dist) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 163, in run_commands dist.run_commands() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands self.run_command(cmd) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/dist.py", line 1214, in run_command super().run_command(command) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 79, in run _build_ext.run(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run _build_ext.build_ext.run(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 339, in run self.build_extensions() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 771, in build_extensions build_ext.build_extensions(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions _build_ext.build_ext.build_extensions(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 448, in build_extensions self._build_extensions_serial() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 202, in build_extension _build_ext.build_extension(self, ext) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 528, in build_extension objects = self.compiler.compile(sources, File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 592, in unix_wrap_ninja_compile _write_ninja_file_and_compile_objects( File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1493, in _write_ninja_file_and_compile_objects _run_ninja_build( File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1830, in _run_ninja_build raise RuntimeError(message) from e RuntimeError: Error compiling objects for extension makefile:2: recipe for target 'all' failed make: *** [all] Error 1

    opened by Wushanfangniuwa 1
  • Training data for MVP

    Hi, thank you for sharing your wonderful work! I was able to train on the Neural Volumes data. I was wondering if the training data for MVP will be released in the near future. Thank you!

    opened by weilunhuang-jhu 1
  • Missing example files?

    Hi, I notice there are some discrepancies between the latest release and README.md / mvp/ARCHITECTURE.md. Are some example files missing, such as 'experiments/dryice1/experiment1/config.py' mentioned in README.md or 'experiments/example/config.py' mentioned in mvp/ARCHITECTURE.md? I follow the steps in README.md but cannot train or render the model.

    opened by faneggs 1
  • Training time

    Awesome Work! I have been a huge fan of your works since Neural Volumes. This work also seems very interesting!

    How long does it take for training?

    Thank you!

    opened by yeong5366 1
  • render.py not exporting the images for "render_rotate.mp4"

    Hi, thank you for sharing your great work! I'm trying to run the code from both the "neuralvolumes" and "mvp" repositories with the experiment data you provided. "neuralvolumes" worked well, thank you!! I was able to run train.py and render.py from the "NeuralVolumes" code of your previous work, and got the "prog_XXXXXX.jpg" image sequence and the "render_rotate.mp4" movie file. Then I tried "mvp" in the same environment; train.py ran successfully and produced the "prog_XXXXXX.jpg" image sequence, as well as the "log.txt", "optimparams.pt", and "aeparams.pt" files. But when I run render.py, the image sequence files that make up "render_rotate.mp4" are not exported to the /tmp/xxxxxxxxxx/ directory.

    The error message is [image2 @ 0x56400177b780] Could find no file with path '/tmp/5613023327/%06d.png' and index in the range 0-4 /tmp/5613023327/%06d.png: No such file or directory

    Do you have any information about this error?

    My environment is Ubuntu 20.04 LTS, Python 3.8.13, GCC 9.4.0, PyTorch 1.10.1+cu113, and an A6000 GPU. The CUDA arch in setup.py has been fixed in both the mvpraymarch and utils directories.

    opened by ppponpon 3
  • Understanding the opacity fade factor

    Thanks for sharing this awesome work. I have read the paper and have some questions about the opacity fade factor.

    1. Due to the fade factor, the opacity is attenuated at the volume edges. Does this make the primitive scales grow larger to cover the scene? (Perhaps the opposite of the volume minimization prior.)
    2. Are there any experiments for choosing the parameters $\alpha$ and $\beta$?
    3. Does the fade factor move the centers of the primitives closer to high-occupancy points?
    4. And what is the relationship between StyleGAN2 and the fade factor?

    Also, are there any suggestions for understanding the CUDA raymarching code? I haven't done anything related to CUDA parallel programming before.

    opened by LSQsjtu 0
  • How do you reconstruct the mesh from images with different views?

    Hi, I noticed that in your dataset, the multi-view images of different frames of the same expression (same identity) have fixed camera parameters when reconstructing the mesh. When I try to perform mesh reconstruction from multi-view images, the camera parameters obtained from reconstruction differ from frame to frame. I was wondering how you fixed the camera parameters for mesh reconstruction.

    opened by LiTian0215 1
  • Cannot replicate experiments/neuralvolumes results: completely blurry output and vanishing kldiv

    Hi, could someone kindly help me? I cannot replicate the output results in experiments/neuralvolumes. I have successfully built the extensions and downloaded the experiment.zip from the latest release. However, the output images are completely blurry. I found that my kldiv term quickly vanishes to 0, while in the example log.txt file given in experiment.zip, the kldiv terms remain larger than 0.3.

    Below are my outputs after 79579, 92139, and 106682 iterations (attached images: prog_079579, prog_092139, prog_106682).

    I have attached my log.txt file, which contains my configuration information and training statistics.

    I'm not sure where the problem is. Incorrect camera poses? Or does the code in this repository have some bugs? I really hope someone can help me. Thanks! :)

    opened by Wushanfangniuwa 2
  • How is the tracking mesh built?

    Hello, I have a new problem: I am trying to use video data of my own head to reconstruct the mesh and texture that serve as the input and supervision of the network. I am using COLMAP to reconstruct a point cloud and retarget it to a public template model with 7306 vertices, but the results are very poor. Is there a recommended method for building the tracked mesh?

    opened by Luh1124 6
Releases(v0.1)
  • v0.1(Jan 6, 2022)

    This is an initial release of the Mixture of Volumetric Primitives code. This release includes code for training, rendering, and evaluation. Bundled with this release is a pretrained MVP model. Note that training data for MVP is not included with this release, but will be released in the future. To give an example of how to use the training code, training data and a training configuration file for Neural Volumes is included.

    Source code(tar.gz)
    Source code(zip)
    experiments.zip(1069.70 MB)
Owner
Meta Research