Totally Versatile Miscellania for PyTorch

Overview

Thomas Viehmann [email protected]

This repository collects various things I have implemented for PyTorch:

Layers, autograd functions and calculations

Learning approaches

Generative Adversarial Networks

Wasserstein GAN - See also my two blog posts on the subject

Comments
  • Need pytorch nightly?

    Hi! Thank you for making the Wasserstein loss extension available. Forgive me if this isn't an issue; I am not an expert user. I just wanted to comment that when I tried to run the extension on my computer (torch 1.1.0), I got this error at compile time:

    error: identifier "TORCH_CHECK" is undefined
    

    After installing the latest pytorch-nightly everything seems to run smoothly, so I guess this may be a requirement?
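
    If upgrading is not an option, a minimal workaround sketch (assuming cuda_source holds the CUDA string from the notebook, and a PyTorch <= 1.1 install where the check macro was still spelled AT_CHECK):

    import torch.utils.cpp_extension

    # TORCH_CHECK only exists in newer PyTorch; on <= 1.1 the equivalent
    # macro is AT_CHECK, so rewrite the name in the source string before
    # compiling, instead of switching to pytorch-nightly.
    legacy_source = cuda_source.replace("TORCH_CHECK", "AT_CHECK")
    wasserstein_ext = torch.utils.cpp_extension.load_inline(
        "wasserstein", cpp_sources="", cuda_sources=legacy_source,
        extra_cuda_cflags=["--expt-relaxed-constexpr"])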

    opened by agaldran 4
  • Problem with scripting the model

    Hi Sir,

    I have started learning TorchScript, and your blog was a great source for understanding the JIT. I tried to run the notebook pytorch_automatic_optimization_jit.ipynb, but I am unable to run the C++, CUDA, and CPU kernels, and I am also unable to get a graph similar to the one in the notebook. I have attached the link to the Colab I am working with.

    I would appreciate your help with this problem.

    Colab Notebook

    opened by Midhilesh29 1
  • Error building extension 'wasserstein'

    Hi @t-vi

    First of all, thank you for sharing your impressive work. Right now I'm using the code you used for comparison to calculate the Wasserstein loss. However, that takes around 4 minutes for one batch in my case, which is too long, and your implementation seems much faster.

    However, when I try to run your code on my server, I get the error below. Do you know what this means? The server I use is a team server, and I don't want to change gcc without knowing whether that will mess up the current environment.

    Appreciate any help you can provide!

    /home/anyu/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py:118: UserWarning:

                                   !! WARNING !!

    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    Your compiler (c++) may be ABI-incompatible with PyTorch!
    Please use a compiler that is ABI-compatible with GCC 4.9 and above.
    See https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html.

    See https://gist.github.com/goldsborough/d466f43e8ffc948ff92de7486c5216d6
    for instructions on how to install GCC 4.9 or higher.
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

                                   !! WARNING !!

      warnings.warn(ABI_INCOMPATIBILITY_WARNING.format(compiler))

    CalledProcessError                        Traceback (most recent call last)
    ~/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py in _build_extension_module(name, build_directory)
        758             subprocess.check_output(
    --> 759                 ['ninja', '-v'], stderr=subprocess.STDOUT, cwd=build_directory)
        760         except subprocess.CalledProcessError:

    ~/anaconda3/lib/python3.6/subprocess.py in check_output(timeout, *popenargs, **kwargs)
        335     return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
    --> 336                **kwargs).stdout
        337

    ~/anaconda3/lib/python3.6/subprocess.py in run(input, timeout, check, *popenargs, **kwargs)
        417             raise CalledProcessError(retcode, process.args,
    --> 418                                      output=stdout, stderr=stderr)
        419     return CompletedProcess(process.args, retcode, stdout, stderr)

    CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    During handling of the above exception, another exception occurred:

    RuntimeError                              Traceback (most recent call last)
    in ()
          1 import torch
          2 wasserstein_ext = torch.utils.cpp_extension.load_inline("wasserstein", cpp_sources="", cuda_sources=cuda_source,
    ----> 3                                                         extra_cuda_cflags=["--expt-relaxed-constexpr"] )
          4
          5 def sinkstep(dist, log_nu, log_u, lam: float):

    ~/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py in load_inline(name, cpp_sources, cuda_sources, functions, extra_cflags, extra_cuda_cflags, extra_ldflags, extra_include_paths, build_directory, verbose, with_cuda)
        639         build_directory,
        640         verbose,
    --> 641         with_cuda=with_cuda)
        642
        643

    ~/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py in _jit_compile(name, sources, extra_cflags, extra_cuda_cflags, extra_ldflags, extra_include_paths, build_directory, verbose, with_cuda)
        680         if verbose:
        681             print('Building extension module {}...'.format(name))
    --> 682         _build_extension_module(name, build_directory)
        683     finally:
        684         baton.release()

    ~/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py in _build_extension_module(name, build_directory)
        763         # error.output contains the stdout and stderr of the build attempt.
        764         raise RuntimeError("Error building extension '{}': {}".format(
    --> 765             name, error.output.decode()))
        766
        767

    RuntimeError: Error building extension 'wasserstein':
    [1/3] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=wasserstein -I/home/anyu/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/anyu/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/anyu/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/anyu/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 --compiler-options '-fPIC' --expt-relaxed-constexpr -std=c++11 -c /tmp/torch_extensions/wasserstein/cuda.cu -o cuda.cuda.o
    FAILED: cuda.cuda.o
    /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=wasserstein -I/home/anyu/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/anyu/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/anyu/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/anyu/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 --compiler-options '-fPIC' --expt-relaxed-constexpr -std=c++11 -c /tmp/torch_extensions/wasserstein/cuda.cu -o cuda.cuda.o
    /tmp/torch_extensions/wasserstein/cuda.cu:6:29: fatal error: torch/extension.h: No such file or directory
    compilation terminated.
    [2/3] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=wasserstein -I/home/anyu/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/anyu/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/anyu/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/anyu/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -c /tmp/torch_extensions/wasserstein/main.cpp -o main.o
    ninja: build stopped: subcommand failed.
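
    Two things stand out above. First, the fatal error torch/extension.h: No such file or directory suggests the installed PyTorch predates that header, so upgrading torch inside the conda environment may be needed regardless of the compiler. Second, for the ABI warning, here is a sketch of how one might select a newer host compiler without touching the server-wide gcc: torch.utils.cpp_extension reads the CXX environment variable when choosing the compiler (the path below is hypothetical).

    import os

    # Point the extension build at a user-local g++ (hypothetical path);
    # set this before calling torch.utils.cpp_extension.load_inline.
    # The system default compiler stays untouched.
    os.environ["CXX"] = os.path.expanduser("~/opt/gcc-5/bin/g++")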

    opened by anyuzoey 1
  • FileNotFoundError: [Errno 2] No such file or directory: 'ninja': 'ninja'

    Why am I still getting this error even though I installed ninja with conda? Please help me! T_T

    $ ninja --version
    1.7.2

    $ nvcc --version
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2019 NVIDIA Corporation
    Built on Sun_Jul_28_19:07:16_PDT_2019
    Cuda compilation tools, release 10.1, V10.1.243

    pytorch 1.2.0  py3.7_cuda10.0.130_cudnn7.6.2_0

    output

    Traceback (most recent call last):
      File "/home/lowen/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 890, in verify_ninja_availability
        subprocess.check_call('ninja --version'.split(), stdout=devnull)
      File "/home/lowen/anaconda3/envs/pytorch/lib/python3.7/subprocess.py", line 342, in check_call
        retcode = call(*popenargs, **kwargs)
      File "/home/lowen/anaconda3/envs/pytorch/lib/python3.7/subprocess.py", line 323, in call
        with Popen(*popenargs, **kwargs) as p:
      File "/home/lowen/anaconda3/envs/pytorch/lib/python3.7/subprocess.py", line 775, in __init__
        restore_signals, start_new_session)
      File "/home/lowen/anaconda3/envs/pytorch/lib/python3.7/subprocess.py", line 1522, in _execute_child
        raise child_exception_type(errno_num, err_msg, err_filename)
    FileNotFoundError: [Errno 2] No such file or directory: 'ninja': 'ninja'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/devdata/new_Relation_Extraction/test_wasserstein.py", line 208, in <module>
        extra_cuda_cflags=["--expt-relaxed-constexpr"])
      File "/home/lowen/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 787, in load_inline
        is_python_module)
      File "/home/lowen/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 827, in _jit_compile
        with_cuda=with_cuda)
      File "/home/lowen/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 850, in _write_ninja_file_and_build
        verify_ninja_availability()
      File "/home/lowen/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 892, in verify_ninja_availability
        raise RuntimeError("Ninja is required to load C++ extensions")
    RuntimeError: Ninja is required to load C++ extensions
    

    code

    import math
    import torch
    import torch.utils
    import torch.utils.cpp_extension
    # % matplotlib inline
    #
    
    # from matplotlib import pyplot
    # import matplotlib.transforms
    #
    # import ot  # for comparison
    
    cuda_source = """
    
    #include <torch/extension.h>
    #include <ATen/core/TensorAccessor.h>
    #include <ATen/cuda/CUDAContext.h>
    
    using at::RestrictPtrTraits;
    using at::PackedTensorAccessor;
    
    #if defined(__HIP_PLATFORM_HCC__)
    constexpr int WARP_SIZE = 64;
    #else
    constexpr int WARP_SIZE = 32;
    #endif
    
    // The maximum number of threads in a block
    #if defined(__HIP_PLATFORM_HCC__)
    constexpr int MAX_BLOCK_SIZE = 256;
    #else
    constexpr int MAX_BLOCK_SIZE = 512;
    #endif
    
    // Returns the index of the most significant 1 bit in `val`.
    __device__ __forceinline__ int getMSB(int val) {
      return 31 - __clz(val);
    }
    
    // Number of threads in a block given an input size up to MAX_BLOCK_SIZE
    static int getNumThreads(int nElem) {
    #if defined(__HIP_PLATFORM_HCC__)
      int threadSizes[5] = { 16, 32, 64, 128, MAX_BLOCK_SIZE };
    #else
      int threadSizes[5] = { 32, 64, 128, 256, MAX_BLOCK_SIZE };
    #endif
      for (int i = 0; i != 5; ++i) {
        if (nElem <= threadSizes[i]) {
          return threadSizes[i];
        }
      }
      return MAX_BLOCK_SIZE;
    }
    
    
    template <typename T>
    __device__ __forceinline__ T WARP_SHFL_XOR(T value, int laneMask, int width = warpSize, unsigned int mask = 0xffffffff)
    {
    #if CUDA_VERSION >= 9000
        return __shfl_xor_sync(mask, value, laneMask, width);
    #else
        return __shfl_xor(value, laneMask, width);
    #endif
    }
    
    // While this might be the most efficient sinkhorn step / logsumexp-matmul implementation I have seen,
    // this is awfully inefficient compared to matrix multiplication and e.g. NVidia cutlass may provide
    // many great ideas for improvement
    template <typename scalar_t, typename index_t>
    __global__ void sinkstep_kernel(
      // compute log v_bj = log nu_bj - logsumexp_i 1/lambda dist_ij - log u_bi
      // for this compute maxdiff_bj = max_i(1/lambda dist_ij - log u_bi)
      // i = reduction dim, using threadIdx.x
      PackedTensorAccessor<scalar_t, 2, RestrictPtrTraits, index_t> log_v,
      const PackedTensorAccessor<scalar_t, 2, RestrictPtrTraits, index_t> dist,
      const PackedTensorAccessor<scalar_t, 2, RestrictPtrTraits, index_t> log_nu,
      const PackedTensorAccessor<scalar_t, 2, RestrictPtrTraits, index_t> log_u,
      const scalar_t lambda) {
    
      using accscalar_t = scalar_t;
    
      __shared__ accscalar_t shared_mem[2 * WARP_SIZE];
    
      index_t b = blockIdx.y;
      index_t j = blockIdx.x;
      int tid = threadIdx.x;
    
      if (b >= log_u.size(0) || j >= log_v.size(1)) {
        return;
      }
      // reduce within thread
      accscalar_t max = -std::numeric_limits<accscalar_t>::infinity();
      accscalar_t sumexp = 0;
    
      if (log_nu[b][j] == -std::numeric_limits<accscalar_t>::infinity()) {
        if (tid == 0) {
          log_v[b][j] = -std::numeric_limits<accscalar_t>::infinity();
        }
        return;
      }
    
      for (index_t i = threadIdx.x; i < log_u.size(1); i += blockDim.x) {
        accscalar_t oldmax = max;
        accscalar_t value = -dist[i][j]/lambda + log_u[b][i];
        max = max > value ? max : value;
        if (oldmax == -std::numeric_limits<accscalar_t>::infinity()) {
          // sumexp used to be 0, so the new max is value and we can set 1 here,
          // because we will come back here again
          sumexp = 1;
        } else {
          sumexp *= exp(oldmax - max);
          sumexp += exp(value - max); // if oldmax was not -infinity, max is not either...
        }
      }
    
      // now we have one value per thread. we'll make it into one value per warp
      // first warpSum to get one value per thread to
      // one value per warp
      for (int i = 0; i < getMSB(WARP_SIZE); ++i) {
        accscalar_t o_max    = WARP_SHFL_XOR(max, 1 << i, WARP_SIZE);
        accscalar_t o_sumexp = WARP_SHFL_XOR(sumexp, 1 << i, WARP_SIZE);
        if (o_max > max) { // we're less concerned about divergence here
          sumexp *= exp(max - o_max);
          sumexp += o_sumexp;
          max = o_max;
        } else if (max != -std::numeric_limits<accscalar_t>::infinity()) {
          sumexp += o_sumexp * exp(o_max - max);
        }
      }
    
      __syncthreads();
      // this writes each warp's accumulation into shared memory
      // there are at most WARP_SIZE items left because
      // there are at most WARP_SIZE**2 threads at the beginning
      if (tid % WARP_SIZE == 0) {
        shared_mem[tid / WARP_SIZE * 2] = max;
        shared_mem[tid / WARP_SIZE * 2 + 1] = sumexp;
      }
      __syncthreads();
      if (tid < WARP_SIZE) {
        max = (tid < blockDim.x / WARP_SIZE ? shared_mem[2 * tid] : -std::numeric_limits<accscalar_t>::infinity());
        sumexp = (tid < blockDim.x / WARP_SIZE ? shared_mem[2 * tid + 1] : 0);
      }
      for (int i = 0; i < getMSB(WARP_SIZE); ++i) {
        accscalar_t o_max    = WARP_SHFL_XOR(max, 1 << i, WARP_SIZE);
        accscalar_t o_sumexp = WARP_SHFL_XOR(sumexp, 1 << i, WARP_SIZE);
        if (o_max > max) { // we're less concerned about divergence here
          sumexp *= exp(max - o_max);
          sumexp += o_sumexp;
          max = o_max;
        } else if (max != -std::numeric_limits<accscalar_t>::infinity()) {
          sumexp += o_sumexp * exp(o_max - max);
        }
      }
    
      if (tid == 0) {
        log_v[b][j] = (max > -std::numeric_limits<accscalar_t>::infinity() ?
                       log_nu[b][j] - log(sumexp) - max :
                       -std::numeric_limits<accscalar_t>::infinity());
      }
    }
    
    template <typename scalar_t>
    torch::Tensor sinkstep_cuda_template(const torch::Tensor& dist, const torch::Tensor& log_nu, const torch::Tensor& log_u,
                                         const double lambda) {
      TORCH_CHECK(dist.is_cuda(), "need cuda tensors");
      TORCH_CHECK(dist.device() == log_nu.device() && dist.device() == log_u.device(), "need tensors on same GPU");
      TORCH_CHECK(dist.dim()==2 && log_nu.dim()==2 && log_u.dim()==2, "invalid sizes");
      TORCH_CHECK(dist.size(0) == log_u.size(1) &&
               dist.size(1) == log_nu.size(1) &&
               log_u.size(0) == log_nu.size(0), "invalid sizes");
      auto log_v = torch::empty_like(log_nu);
      using index_t = int32_t;
    
      auto log_v_a = log_v.packed_accessor<scalar_t, 2, RestrictPtrTraits, index_t>();
      auto dist_a = dist.packed_accessor<scalar_t, 2, RestrictPtrTraits, index_t>();
      auto log_nu_a = log_nu.packed_accessor<scalar_t, 2, RestrictPtrTraits, index_t>();
      auto log_u_a = log_u.packed_accessor<scalar_t, 2, RestrictPtrTraits, index_t>();
    
      auto stream = at::cuda::getCurrentCUDAStream();
    
      int tf = getNumThreads(log_u.size(1));
      dim3 blocks(log_v.size(1), log_u.size(0));
      dim3 threads(tf);
    
      sinkstep_kernel<<<blocks, threads, 2*WARP_SIZE*sizeof(scalar_t), stream>>>(
        log_v_a, dist_a, log_nu_a, log_u_a, static_cast<scalar_t>(lambda)
        );
    
      return log_v;
    }
    
    torch::Tensor sinkstep_cuda(const torch::Tensor& dist, const torch::Tensor& log_nu, const torch::Tensor& log_u,
                                const double lambda) {
        return AT_DISPATCH_FLOATING_TYPES(log_u.scalar_type(), "sinkstep", [&] {
           return sinkstep_cuda_template<scalar_t>(dist, log_nu, log_u, lambda);
        });
    }
    
    PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
      m.def("sinkstep", &sinkstep_cuda, "sinkhorn step");
    }
    
    """
    
    wasserstein_ext = torch.utils.cpp_extension.load_inline("wasserstein", cpp_sources="", cuda_sources=cuda_source,
                                                            extra_cuda_cflags=["--expt-relaxed-constexpr"])
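
    For what it's worth, load_inline shells out to the ninja binary (visible in the traceback above), so ninja must be on the PATH of the Python process itself; installing it into a different conda environment than the one running the script will not help. A quick sanity check, as a sketch:

    import shutil
    import subprocess

    # Run inside the same interpreter that calls load_inline:
    # cpp_extension invokes `ninja` via subprocess, so it must be
    # findable on *this* process's PATH.
    print(shutil.which("ninja"))            # None here explains the FileNotFoundError
    subprocess.check_call(["ninja", "--version"])  # should print e.g. 1.7.2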
    
    opened by heslowen 1
  • Confusion about Lambda

    Hello! Firstly, thank you for the awesome work! I had a question about Pytorch_Wasserstein.ipynb:

    In the WassersteinLossVanilla, why is it self.K = torch.exp(-self.cost/self.lam)? Shouldn't it be
    self.K = torch.exp(-self.cost*self.lam)?

    Mocha also uses the multiplied version: https://github.com/pluskid/Mocha.jl/blob/5e15b882d7dd615b0c5159bb6fde2cc040b2d8ee/src/layers/wasserstein-loss.jl#L33

    Have you changed it because "Note that we use a different convention for $\lambda$ (i.e. we use $\lambda$ as the weight for the regularisation; later versions of the above use $\lambda^{-1}$ as the weight)."?

    Also, what is the reason for this choice?
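
    A short derivation sketch of the two conventions (assuming the standard entropic-OT objective with cost matrix $C$ and entropy $H(P)$): with $\lambda$ weighting the entropy term, as in the notebook,

    \min_P \; \langle P, C \rangle - \lambda H(P)
    \quad\Longrightarrow\quad K = e^{-C/\lambda},

    while Cuturi's original paper and Mocha.jl weight the entropy by $\lambda^{-1}$,

    \min_P \; \langle P, C \rangle - \lambda^{-1} H(P)
    \quad\Longrightarrow\quad K = e^{-\lambda C}.

    Both describe the same family of problems; only the meaning of $\lambda$ flips.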

    opened by ForgottenOneNyx 1
  • Issue with the PyTorch Wasserstein distance on Windows

    I tried to reproduce the PyTorch Wasserstein distance notebook on a Windows system, but it shows the problems below:

    Traceback (most recent call last):
      File "", line 1, in
      File "C:\Users\Alienware.conda\envs\pytorch\lib\site-packages\torch\utils\cpp_extension.py", line 1293, in load_inline
        return _jit_compile(
      File "C:\Users\Alienware.conda\envs\pytorch\lib\site-packages\torch\utils\cpp_extension.py", line 1382, in _jit_compile
        return _import_module_from_library(name, build_directory, is_python_module)
      File "C:\Users\Alienware.conda\envs\pytorch\lib\site-packages\torch\utils\cpp_extension.py", line 1775, in _import_module_from_library
        module = importlib.util.module_from_spec(spec)
      File "", line 556, in module_from_spec
      File "", line 1166, in create_module
      File "", line 219, in _call_with_frames_removed
    ImportError: DLL load failed while importing wasserstein: The specified module could not be found
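
    The extension compiled, but its DLL dependencies could not be resolved at import time. A sketch of two common remedies (the CUDA path below is hypothetical): import torch before the extension so its DLLs are already loaded, and on Python 3.8+ register the CUDA bin directory explicitly, since Windows no longer searches PATH for dependent DLLs there.

    import os
    import torch  # load torch's DLLs first

    # Python 3.8+ on Windows: directories containing dependent DLLs
    # (e.g. the CUDA runtime) must be registered explicitly.
    os.add_dll_directory(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\bin")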

    opened by MinttHu 0
  • Consider transferring `load_inline` to `setuptools`?

    Hi! Thanks a lot for your great work on the Wasserstein distance in Pytorch_Wasserstein.ipynb!

    Since torch.utils.cpp_extension.load_inline compiles the CUDA code on every run, would you consider moving it to setuptools, i.e., python setup.py install, so that one could load pre-built libraries?

    Sorry, but I'm not familiar with this. Is there any barrier?

    Thanks!
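
    A minimal sketch of what that could look like, assuming the notebook's CUDA string is saved to wasserstein_cuda.cu (hypothetical filename; the PYBIND11_MODULE block keeps working because the build defines TORCH_EXTENSION_NAME):

    # setup.py -- a sketch, not a tested build recipe
    from setuptools import setup
    from torch.utils.cpp_extension import BuildExtension, CUDAExtension

    setup(
        name="wasserstein",
        ext_modules=[
            CUDAExtension(
                name="wasserstein",
                sources=["wasserstein_cuda.cu"],
                extra_compile_args={"cxx": [], "nvcc": ["--expt-relaxed-constexpr"]},
            ),
        ],
        cmdclass={"build_ext": BuildExtension},
    )

    After python setup.py install, import wasserstein loads the pre-built module. Note also that load_inline caches its build directory, so unchanged sources are usually not recompiled from scratch on every run.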

    opened by yd-yin 0
  • Wasserstein implementation does not seem to be fully "batched"

    Hi @t-vi,

    Thanks for sharing your code!

    I would like to ask a question regarding your implementation of the Sinkhorn algorithm. You stated that one of the main motivations was to obtain efficient batched computation. However, looking at the code, I observe that it only supports the case where the cost matrix is the same across the batch:

    def forward(ctx, mu, nu, dist, lam=1e-3, N=100):
        assert mu.dim() == 2 and nu.dim() == 2 and dist.dim() == 2
        bs = mu.size(0)
        d1, d2 = dist.size()
        assert nu.size(0) == bs and mu.size(1) == d1 and nu.size(1) == d2
    

    That is, the shape of dist is d1 x d2 instead of bs x d1 x d2. Is this expected?

    Thank you in advance for your reply.
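
    For reference, a workaround sketch until a bs x d1 x d2 kernel exists (wasserstein_loss below is a hypothetical wrapper around the Function quoted above): apply the shared-cost implementation one sample at a time, giving up the batched speedup.

    import torch

    # mu: bs x d1, nu: bs x d2, dist_b: bs x d1 x d2 (per-sample cost matrices)
    def per_sample_wasserstein(mu, nu, dist_b, lam=1e-3, N=100):
        return torch.cat([
            wasserstein_loss(mu[b:b + 1], nu[b:b + 1], dist_b[b], lam, N)
            for b in range(dist_b.size(0))
        ])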

    opened by netw0rkf10w 1
Releases (2018-03-13)
Owner
Thomas Viehmann
Mathematics and Inference at @MathInf. I do a lot of @PyTorch work.