
Mixture of Volumetric Primitives -- Training and Evaluation

This repository contains code to train and render Mixture of Volumetric Primitives (MVP) models.

If you use Mixture of Volumetric Primitives in your research, please cite:
Mixture of Volumetric Primitives for Efficient Neural Rendering
Stephen Lombardi, Tomas Simon, Gabriel Schwartz, Michael Zollhoefer, Yaser Sheikh, Jason Saragih
ACM Transactions on Graphics (SIGGRAPH 2021) 40, 4. Article 59

@article{Lombardi21,
author = {Lombardi, Stephen and Simon, Tomas and Schwartz, Gabriel and Zollhoefer, Michael and Sheikh, Yaser and Saragih, Jason},
title = {Mixture of Volumetric Primitives for Efficient Neural Rendering},
year = {2021},
issue_date = {August 2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {40},
number = {4},
issn = {0730-0301},
url = {https://doi.org/10.1145/3450626.3459863},
doi = {10.1145/3450626.3459863},
journal = {ACM Trans. Graph.},
month = {jul},
articleno = {59},
numpages = {13},
keywords = {neural rendering}
}

Requirements

  • Python (3.8+)
    • PyTorch
    • NumPy
    • SciPy
    • Pillow
    • OpenCV
  • ffmpeg (in $PATH to render videos)
  • CUDA 10 or higher
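
A quick way to confirm that the Python dependencies and ffmpeg are visible is a short script like the following (an illustrative sketch, not part of the repository):

# check_env.py -- hypothetical sanity check for the dependencies listed above
import shutil

import cv2
import numpy
import scipy
import torch
from PIL import Image

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("ffmpeg on $PATH:", shutil.which("ffmpeg") is not None)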

Building

The repository contains two CUDA PyTorch extensions. To build, cd to each directory and use make:

cd extensions/mvpraymarcher
make
cd -
cd extensions/utils
make
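
If the build succeeds, the compiled modules should be importable from Python. A minimal check might look like the following (a sketch only; the module names mvpraymarchlib and utilslib are taken from the build output and issue reports below, and the paths assume you run it from the repository root):

# check_extensions.py -- hypothetical import check for the two extensions
import sys

import torch  # load libtorch symbols before importing the extensions

sys.path.append("extensions/mvpraymarcher")
sys.path.append("extensions/utils")

import mvpraymarchlib  # built in extensions/mvpraymarcher
import utilslib        # built in extensions/utils

print("extensions imported OK")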

How to Use

There are two main scripts in the root directory: train.py and render.py. Both take an experiment configuration file that defines the dataset to use and the model options (e.g., which type of decoder to use).

Download the latest release on GitHub to get the experiments directory.

To train the model:

python train.py experiments/dryice1/experiment1/config.py

To render a video of a trained model:

python render.py experiments/dryice1/experiment1/config.py

See ARCHITECTURE.md for more details.

Training Data

See the latest GitHub release for data.

Using your own Data

Implement your own Dataset class to return images and camera parameters. An example is given in data.multiviewvideo. A dataset class will need to return camera pose parameters, image data, and tracked mesh data.
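
As a rough illustration of the shape such a class might take, here is a hypothetical skeleton (the key names, shapes, and values below are illustrative assumptions, not the actual interface of data.multiviewvideo):

# mydataset.py -- hypothetical skeleton of a custom multi-view dataset
import numpy as np
import torch.utils.data

class MyMultiViewDataset(torch.utils.data.Dataset):
    def __init__(self, frames, cameras):
        self.frames = frames      # list of frame identifiers
        self.cameras = cameras    # list of camera names

    def __len__(self):
        return len(self.frames) * len(self.cameras)

    def __getitem__(self, idx):
        frame = self.frames[idx // len(self.cameras)]
        cam = self.cameras[idx % len(self.cameras)]
        # In a real dataset, load the image, camera calibration, and tracked
        # mesh for (frame, cam) here; zeros are placeholders.
        return {
            # camera pose parameters (position, rotation, intrinsics)
            "campos": np.zeros(3, dtype=np.float32),
            "camrot": np.eye(3, dtype=np.float32),
            "focal": np.array([2000.0, 2000.0], dtype=np.float32),
            "princpt": np.array([512.0, 512.0], dtype=np.float32),
            # image data for this camera and frame (C x H x W)
            "image": np.zeros((3, 1024, 1024), dtype=np.float32),
            # tracked mesh data (per-frame vertex positions, V x 3; vertex count arbitrary)
            "verts": np.zeros((7306, 3), dtype=np.float32),
        }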

How to Extend

See ARCHITECTURE.md

License

See the LICENSE file for details.

Comments
  • ModuleNotFoundError: No module named 'utilslib'

    ModuleNotFoundError: No module named 'utilslib'

    Hi, thanks for sharing this awesome work, have a nice day :-) When I try to render the demo with the experiment data, I get this error. By the way, I have already compiled the extension files. (screenshot: 2022-07-06 23-31-21)

    opened by Myzhencai 9
  • build success, but cannot run

    build success, but cannot run

    Traceback (most recent call last):
      File "render.py", line 118, in <module>
        output, _ = ae(
      File "/home/an/anaconda3/envs/py38-t19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/an/an_project/mvp/models/volumetric.py", line 286, in forward
        rayrgba, rmlosses = self.raymarcher(raypos, raydir, tminmax,
      File "/home/an/anaconda3/envs/py38-t19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/an/an_project/mvp/models/raymarchers/mvpraymarcher.py", line 32, in forward
        rayrgba = mvpraymarch(raypos, raydir, dt, tminmax,
      File "/home/an/an_project/mvp/extensions/mvpraymarch/mvpraymarch.py", line 273, in mvpraymarch
        out = MVPRaymarch.apply(raypos, raydir, stepsize, tminmax,
      File "/home/an/an_project/mvp/extensions/mvpraymarch/mvpraymarch.py", line 119, in forward
        sortedobjid, nodechildren, nodeaabb = build_accel(primtransfin,
      File "/home/an/an_project/mvp/extensions/mvpraymarch/mvpraymarch.py", line 44, in build_accel
        sortedobjid = (torch.arange(N*K, dtype=torch.int32, device=dev) % K).view(N, K)
    RuntimeError: CUDA error: no kernel image is available for execution on the device
    
    

    Environment: Python 3.8.13, PyTorch 1.13.0a0+git4503c45, CUDA 11.3.0, GCC 8.4.0


    Hi, is anyone else having similar issues?
    PyTorch itself is working normally.

    opened by AN-ZE 8
  • What's basetransf matrix used for?

    What's basetransf matrix used for?

    Hi, I have a small question about applying my own data with this code. What is the self.basetransf matrix in multiviewvideo.py used for? I see that this 3x4 matrix is applied to all camera poses and to all the frametransf values, but what is its purpose? :)

    https://github.com/facebookresearch/mvp/blob/d758f53662e79d7fec885f4dd1a3ee457f7c4b00/data/multiviewvideo.py#L410-L415

    https://github.com/facebookresearch/mvp/blob/d758f53662e79d7fec885f4dd1a3ee457f7c4b00/data/multiviewvideo.py#L385-L387

    Besides, I find it necessary to apply this basetransf: when I change it to an identity matrix, training does not converge. So how do I obtain a basetransf for my own data?

    Your answer will help me a lot! Thank you!

    opened by Qingcsai 5
  • Background image used in Lombardi's MVP cannot be found in the multiface dataset

    Background image used in Lombardi's MVP cannot be found in the multiface dataset

    The multiface dataset has been used in "Mixture of Volumetric Primitives for Efficient Neural Rendering". The "mvp" config file needs the path to the background image, but I can't find the background image in the multiface dataset. The line in the config file is: bgpath = os.path.join(imagepathbase, 'bg', 'image', 'cam{cam}', 'image0000.png').

    opened by shuishiwojiade 2
  • Can not build the cuda pytorch extension

    Can not build the cuda pytorch extension

    May I know which PyTorch and CUDA versions were used to build the two CUDA PyTorch extensions? I was using PyTorch 1.7.1, CUDA 10.1, and GCC 5.5.0, but when I tried to build them I got the error: File "torch/utils/cpp_extension.py", line 445, in unix_wrap_ninja_compile, post_cflags = extra_postargs['cxx'], KeyError: 'cxx'. Any suggestions on how to solve the problem?

    opened by ZhaoyangLyu 2
  • Bug Report: mvp/extensions/mvpraymarch/bvh.cu error: too many initializer values

    Bug Report: mvp/extensions/mvpraymarch/bvh.cu error: too many initializer values

    Hi, I hit a bug after cd-ing to extensions/mvpraymarch and running the "make" command. Could someone kindly help me solve the problem?

    I use a remote server with: gcc (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0, g++ (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0, GNU Make 4.1, Ubuntu 16.04.7 LTS, Python 3.9.12, nvcc 10.2.

    The bug comes from a pointer assignment, as shown in the screenshot.

    Below is the output after running the "make" command in the extensions/mvpraymarch directory:

    python setup.py build_ext --inplace CUDA_HOME: /data/hzhangcc/cuda-10.2 CUDNN_HOME: None running build_ext building 'mvpraymarchlib' extension Emitting ninja build file /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) [1/2] /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/bvh.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 FAILED: /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/bvh.o /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/bvh.o -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu(214): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/bvh.cu(272): error: too many initializer values

    2 errors detected in the compilation of "/tmp/tmpxft_00003cf1_00000000-6_bvh.cpp1.ii". [2/2] /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/mvpraymarch_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 FAILED: /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/mvpraymarch_kernel.o /data/hzhangcc/cuda-10.2/bin/nvcc -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/TH -I/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/include/THC -I/data/hzhangcc/cuda-10.2/include -I/data/hzhangcc/anaconda3/include/python3.9 -c -c /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu -o /data/hzhangcc/mvp/extensions/mvpraymarch/build/temp.linux-x86_64-3.9/mvpraymarch_kernel.o -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -use_fast_math -arch=sm_70 -std=c++14 -lineinfo -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=mvpraymarchlib -D_GLIBCXX_USE_CXX11_ABI=0 /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(80): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(87): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(93): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(99): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(167): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(174): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(180): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_kernel.cu(186): error: too many initializer values

    /data/hzhangcc/mvp/extensions/mvpraymarch/mvpraymarch_subset_kernel.h(30): warning: variable "validthread" was declared but never referenced

    8 errors detected in the compilation of "/tmp/tmpxft_00003cf2_00000000-6_mvpraymarch_kernel.cpp1.ii". ninja: build stopped: subcommand failed. Traceback (most recent call last): File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1814, in _run_ninja_build subprocess.run( File "/data/hzhangcc/anaconda3/lib/python3.9/subprocess.py", line 528, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last): File "/data/hzhangcc/mvp/extensions/mvpraymarch/setup.py", line 13, in setup( File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/init.py", line 87, in setup return distutils.core.setup(**attrs) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 148, in setup return run_commands(dist) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 163, in run_commands dist.run_commands() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands self.run_command(cmd) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/dist.py", line 1214, in run_command super().run_command(command) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 79, in run _build_ext.run(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run _build_ext.build_ext.run(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 339, in run self.build_extensions() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 771, in build_extensions build_ext.build_extensions(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions _build_ext.build_ext.build_extensions(self) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 448, in build_extensions self._build_extensions_serial() File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 202, in build_extension _build_ext.build_extension(self, ext) File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 528, in build_extension objects = self.compiler.compile(sources, File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 592, in unix_wrap_ninja_compile _write_ninja_file_and_compile_objects( File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1493, in _write_ninja_file_and_compile_objects _run_ninja_build( File "/data/hzhangcc/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1830, in _run_ninja_build raise RuntimeError(message) from e RuntimeError: Error compiling objects for extension makefile:2: recipe for target 'all' failed make: *** [all] Error 1

    opened by Wushanfangniuwa 1
  • Training data for MVP

    Training data for MVP

    Hi, thank you for sharing your wonderful work! I was able to train the Neural Volume data. I was wondering if the training data for MVP will be released in the near future. Thank you!

    opened by weilunhuang-jhu 1
  • Missing example files?

    Missing example files?

    Hi, I notice some discrepancies between the latest release and README.md and mvp/ARCHITECTURE.md. Are some example files missing, such as 'experiments/dryice1/experiment1/config.py' mentioned in README.md or 'experiments/example/config.py' mentioned in mvp/ARCHITECTURE.md? I follow the steps in README.md but cannot train or render the model.

    opened by faneggs 1
  • Training time

    Training time

    Awesome Work! I have been a huge fan of your works since Neural Volumes. This work also seems very interesting!

    How long does it take for training?

    Thank you!

    opened by yeong5366 1
  • render.py not exporting the images for "render_rotate.mp4"

    render.py not exporting the images for "render_rotate.mp4"

    Hi, thank you for sharing your great work! I am trying to run the code of both your "neuralvolumes" and "mvp" repositories with the experiment data you provided. "neuralvolumes" worked well, thank you!! I was able to run train.py and render.py of the "NeuralVolumes" code from your previous work and got the "prog_XXXXXX.jpg" image sequence and the "render_rotate.mp4" movie file. Then I tried to run "mvp" in the same environment: train.py succeeded and produced the "prog_XXXXXX.jpg" image sequence as well as the "log.txt", "optimparams.pt", and "aeparams.pt" files. But when I run render.py, the image sequence files that make up "render_rotate.mp4" are not exported to the /tmp/xxxxxxxxxx/ directory.

    The error message is: [image2 @ 0x56400177b780] Could find no file with path '/tmp/5613023327/%06d.png' and index in the range 0-4; /tmp/5613023327/%06d.png: No such file or directory

    Do you have any information about this error?

    My environment is Ubuntu 20.04 LTS, Python 3.8.13, GCC 9.4.0, PyTorch 1.10.1+cu113, and an A6000 GPU. The setup.py files in the mvpraymarch and utils directories have been adjusted for the correct CUDA arch.

    opened by ppponpon 3
  • Understanding about the opacity fade factor

    Understanding about the opacity fade factor

    Thanks for sharing this awesome work. I have read the paper, and I have some questions about the opacity fade factor.

    1. Due to the fade factor, the opacity falls off near the volume edges. Does this make the primitive scales grow to cover the scene (perhaps the opposite of the volume minimization prior)?
    2. Are there any experiments supporting the choice of the parameters $\alpha$ and $\beta$?
    3. Does the fade factor pull the centers of the primitives closer to high-occupancy points?
    4. And what is the relationship between StyleGAN2 and the fade factor?

    Also, are there any suggestions for understanding the CUDA raymarching code? I haven't done anything related to CUDA parallel programming before.

    opened by LSQsjtu 0
  • How do you reconstruct the mesh from multi-view images?

    How do you reconstruct the mesh from multi-view images?

    Hi, I noticed that in your dataset the multi-view images of different frames of the same identity and expression share fixed camera parameters when the mesh is reconstructed. When I try to perform mesh reconstruction from multi-view images, the camera parameters recovered from different frames are all different. I was wondering how you keep the camera parameters fixed for mesh reconstruction.

    opened by LiTian0215 1
  • Cannot replicate experiments/neuralvolumes results: completely blurry output and vanishing kldiv

    Cannot replicate experiments/neuralvolumes results: completely blurry output and vanishing kldiv

    Hi, could someone kindly help me? I cannot replicate the output results in experiments/neuralvolumes. I have successfully built the extensions and downloaded experiment.zip from the latest release. However, the output images are completely blurry. I found that my kldiv term quickly vanishes to 0, while in the example log.txt file included in experiment.zip the kldiv terms remain larger than 0.3.

    Below are my outputs after 79579, 92139, and 106682 iterations (attached images: prog_079579, prog_092139, prog_106682).

    I have attached my log.txt file, which contains my configuration and training statistics: log.txt

    I'm not sure where the problem is. Incorrect camera poses? Or does the code in this repository have a bug? I really hope someone can help me. Thanks! :)

    opened by Wushanfangniuwa 2
  • How is the tracked mesh built?

    How is the tracked mesh built?

    Hello, I have a new problem: I am trying to use video data of my own head to reconstruct a mesh and texture to serve as the input and supervision for the network. I am using COLMAP to reconstruct the point cloud and retargeting it to a public template model with 7306 vertices, but the results are very poor. Is there a recommended method for building the tracked mesh?

    opened by Luh1124 6
Releases(v0.1)
  • v0.1(Jan 6, 2022)

    This is an initial release of the Mixture of Volumetric Primitives code. This release includes code for training, rendering, and evaluation. Bundled with this release is a pretrained MVP model. Note that training data for MVP is not included with this release, but will be released in the future. To give an example of how to use the training code, training data and a training configuration file for Neural Volumes are included.

    Source code(tar.gz)
    Source code(zip)
    experiments.zip(1069.70 MB)
Owner
Meta Research