A GPU-optional modular synthesizer in PyTorch, 16200x faster than realtime, for audio ML researchers.

Overview

torchsynth

The fastest synth in the universe.

Introduction

torchsynth is based upon traditional modular synthesis, written in PyTorch. It is GPU-optional and differentiable.

Most synthesizers are fast in terms of latency; torchsynth is fast in terms of throughput. It synthesizes audio 16200x faster than realtime (714 MHz) on a single GPU. This is of particular interest to audio ML researchers seeking large training corpora.

Additionally, all synthesized audio is returned together with the underlying latent parameters used to generate it. This is useful for multi-modal training regimes.
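
For example, here is a minimal usage sketch (hedged: the exact call signature may differ between versions; see the documentation):

import torch
from torchsynth.synth import Voice

voice = Voice()  # default configuration: batch size 128, 4-second sounds
if torch.cuda.is_available():
    voice = voice.to("cuda")

# Each call renders one reproducible batch and returns a multi-modal tuple:
# (audio batch, parameter batch, is_train batch).
audio, params, is_train = voice(0)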

Installation

pip3 install torchsynth

Note that torchsynth requires PyTorch version 1.8 or greater.

Listen

If you'd like to hear torchsynth, check out synth1K1, a dataset of 1024 4-second sounds rendered from the Voice synthesizer, or listen on SoundCloud.

Citation

If you use this work in your research, please cite:

@inproceedings{turian2021torchsynth,
	title        = {One Billion Audio Sounds from {GPU}-enabled Modular Synthesis},
	author       = {Joseph Turian and Jordie Shier and George Tzanetakis and Kirk McNally and Max Henry},
	year         = 2021,
	month        = Sep,
	booktitle    = {Proceedings of the 23rd International Conference on Digital Audio Effects (DAFx2020)},
	location     = {Vienna, Austria}
}
Comments
  • Device modifications

    Device modifications

    Some updates to make sure things are on the correct device.

    Things I learned while doing this -- having 0-d tensors on the GPU does not necessarily lead to a speed-up. I think creating a scalar tensor on the GPU is slower than just using a native Number type for most operations and comparisons, especially if those tensors have different dtypes. Having parameter ranges on the GPU did not improve performance and actually made it worse (~15 ms at a batch size of 64). Same with having the buffer size and batch size on the GPU. Having the sample rate as a float did help a little bit, though. Some of the assertions are slow, especially the one I marked in the Parameters.
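
    For illustration, a hedged micro-benchmark sketch of the 0-d tensor point (timings are hardware-dependent; this is not the code from this PR):

    import time
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.rand(64, 44100, device=device)          # a batch of signals
    scalar_tensor = torch.tensor(0.5, device=device)  # 0-d tensor scalar
    scalar_py = 0.5                                   # native Python float

    def bench(fn, n=1000):
        if device == "cuda":
            torch.cuda.synchronize()
        t0 = time.time()
        for _ in range(n):
            fn()
        if device == "cuda":
            torch.cuda.synchronize()
        return time.time() - t0

    print("0-d tensor scalar:", bench(lambda: x * scalar_tensor))
    print("Python float:     ", bench(lambda: x * scalar_py))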

    opened by jorshi 8
  • Updating torch.range to torch.arange

    Updating torch.range to torch.arange

    torch.range is now deprecated and should be replaced with torch.arange, which is consistent with Python's built-in range. torch.range was also producing errors with high-valued ranges; see #377.
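
    For reference, the behavioral difference:

    import torch

    torch.arange(0, 5)   # tensor([0, 1, 2, 3, 4]) -- end point excluded, like Python's range()
    # torch.range(0, 5)  # deprecated; returns tensor([0., 1., 2., 3., 4., 5.]), end point included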

    Merge #380 into this first.

    opened by jorshi 7
  • fixed buffer_size for renders

    fixed buffer_size for renders

    Introducing buffer_size, which also has a global default, BUFFER_SIZE.

    Now SynthModule and TorchSynthModule have this property, and a method to_buffer_size(). I've placed this at the return of every forward/call:

    def forward(self, *args, **kwargs):  # or npyforward...
        # ... existing module computation ...
        out_ = ...  # previous output
        return self.to_buffer_size(out_)
    

    There might be a cleaner way of doing this, like making a "pre-forward" method and then having "forward" always be return self.to_buffer_size(self.pre_forward(...)), or some such thing, but that seems confusing and over-engineered.
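
    For illustration, a hypothetical sketch of what to_buffer_size() could do, assuming it zero-pads or truncates the last dimension to the buffer size (not necessarily the actual implementation):

    import torch
    import torch.nn.functional as F

    def to_buffer_size(signal: torch.Tensor, buffer_size: int) -> torch.Tensor:
        """Zero-pad or truncate so the last dimension equals buffer_size."""
        n = signal.shape[-1]
        if n < buffer_size:
            return F.pad(signal, (0, buffer_size - n))
        return signal[..., :buffer_size]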

    I fixed up all of module.py, and adapted all of torchmodule.py that currently exists.

    Lmk.

    opened by maxsolomonhenry 7
  • Torch ADSR + SineVCO

    Torch ADSR + SineVCO

    This PR is diff'ed against #37 for better understanding.

    In this PR, I am converting ADSR and SineVCO to torch. I make sure that the torch and numpy modules give the same values on forward. Gradients are computed in example.py but not in unit tests. Why are we getting a nan gradient for alpha?

    There's a lot to hate in this port.

    ~~In general, before we address the nitty-gritty (below), I think the biggest issue to consider is that I don't know if standard ADSR will be differentiable on the adsr parameters, since it involves a lot of discontinuities and padding. I think we have two options~~ ~~1) don't make anything differentiable and just focus on rendering speed. this is quite lame, of course~~ ~~2) work on the subproblem of just creating a differentiable ADSR. ignore our complicated abstractions for now, and just focus on differentiable ADSR, maybe to minimize a simple l2 distance. maybe we have to use splines?~~

    Update: It turns out I can differentiate through the ADSR parameters? Except alpha, which is nan. Why? Check out example.py.
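
    One hedged hypothesis (a sketch, not the actual ADSR code): if the envelope ramp is shaped as ramp ** alpha, PyTorch returns a nan gradient for the exponent wherever the base is exactly zero, and that nan then propagates through the whole backward pass:

    import torch

    alpha = torch.tensor(2.0, requires_grad=True)
    ramp = torch.linspace(0, 1, 5)  # starts exactly at 0, like an attack ramp
    (ramp ** alpha).sum().backward()
    print(alpha.grad)               # nan: d/d(alpha) of 0 ** alpha involves log(0)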

    Here's some TODO:

    • examples.py should have a torch section doing the exact same thing as above, but with the torch versions.
    • Internally, we want to store all nn.Parameter values in the 0/1 range, not the human-readable range, since that is what will be used for backprop (see the sketch after this list).
    • The modparameter abstraction needs to be cleaned up, since each nn.Parameter stores its own value. I think the cleanest thing is a TorchModParameter that inherits from nn.Parameter but adds some helper methods around it.
    • torch.linspace doesn't have an endpoint param.
    • gradients should be a unit test
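
    A hedged sketch of the 0/1-storage idea above (the ParameterRange name and API here are illustrative, not the final design):

    import torch
    import torch.nn as nn

    class ParameterRange:
        """Convert between a normalized [0, 1] value and a human-readable range."""
        def __init__(self, minimum: float, maximum: float):
            self.minimum, self.maximum = minimum, maximum

        def from_0to1(self, norm: torch.Tensor) -> torch.Tensor:
            return self.minimum + norm * (self.maximum - self.minimum)

        def to_0to1(self, value: torch.Tensor) -> torch.Tensor:
            return (value - self.minimum) / (self.maximum - self.minimum)

    # The nn.Parameter itself lives in [0, 1]; backprop happens in that space.
    attack_range = ParameterRange(0.0, 2.0)               # seconds (hypothetical)
    attack_norm = nn.Parameter(torch.tensor(0.25))        # stored, normalized value
    attack_seconds = attack_range.from_0to1(attack_norm)  # 0.5 s, still differentiable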
    opened by turian 7
  • Randomize modparameter

    Randomize modparameter

    This allows you to create randomized parameters for the synth. I'd like this for unit-testing numpy vs torch.

    Depends upon #33

    What's weird is that the note_on_duration is sometimes disregarded and you get, like, 16-second samples. Is this because of the one-hit thing @maxsolomonhenry? That seems like a problem to me. You should be able to hillclimb the synth by setting random params to try to get similar audio of the same duration :\

    opened by turian 6
  • added convenience import

    added convenience import

    You can take this or leave it. I've been exploring the Python package structure a bit.

    In essence, this allows the example.py import:

    from ddspdrum.module import ADSR # etc...

    to become:

    from ddspdrum import ADSR # etc...

    opened by maxsolomonhenry 6
  • SynthConfig as non-Tensors

    SynthConfig as non-Tensors

    https://github.com/turian/torchsynth/pull/267 should be merged first for a smaller diff.

    I am trying to simplify the synth configuration, and this will be a sequence of linear PRs. Here is one that turns most of the configs back into Python-native values.

    @jorshi can you profile quickly to make sure this is okay before we merge?

    opened by turian 5
  • Fm vco

    Fm vco

    Made an FmVCO class, which basically has to process the modulation signal in a slightly different way. Typically the mod signal is applied in MIDI space (log frequency), but FM operates on modulations in Hz space. Also changed the modulation depth to reflect the classic 'modulation index' of the FM literature. It's a bit more intuitive this way.
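
    A hedged sketch of that idea (illustration only, not the FmVCO implementation):

    import math
    import torch

    sample_rate = 44100
    t = torch.arange(sample_rate) / sample_rate      # one second of samples
    f_carrier, f_mod, mod_index = 440.0, 110.0, 2.0  # Hz, Hz, modulation index

    # The modulator acts directly in Hz space (not MIDI/log-frequency),
    # with its depth given by the classic FM modulation index:
    inst_freq = f_carrier + mod_index * f_mod * torch.cos(2 * math.pi * f_mod * t)
    phase = 2 * math.pi * torch.cumsum(inst_freq / sample_rate, dim=0)
    fm_signal = torch.sin(phase)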

    Had to slightly refactor VCO to make this smooth.

    Note: this only works on the numpy modules. torchmodule.py would have to be updated accordingly.

    opened by maxsolomonhenry 5
  • TorchParameter

    TorchParameter

    Parameters for TorchSynthModules that have an internal range from 0 to 1 and can hold a ParameterRange object to convert to and from a user-specified range.

    opened by jorshi 5
  • Profiling script

    Profiling script

    Not sure if we want to include this, but it has been mega helpful for me in profiling. Also used this with line_profiler to look at line-by-line profiles: https://github.com/pyutils/line_profiler
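
    For reference, a minimal hedged example of profiling a batch render with line_profiler (an illustration, not the profiling script in this PR):

    # profile_voice.py (hypothetical)
    from torchsynth.synth import Voice

    @profile  # injected by kernprof; no import needed
    def render_batches(n: int = 10) -> None:
        voice = Voice()
        for i in range(n):
            voice(i)

    if __name__ == "__main__":
        render_batches()

    Run it with kernprof -l -v profile_voice.py to print per-line timings.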

    opened by jorshi 4
  • Reproducibility Issue

    Reproducibility Issue

    If you received an Error or Warning regarding reproducibility while using torchsynth, please leave a comment here with details about your CPU architecture and what random results you got.

    opened by jorshi 4
  • [Snyk] Fix for 2 vulnerabilities

    [Snyk] Fix for 2 vulnerabilities

    This PR was automatically created by Snyk using the credentials of a real user.


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • docs/requirements.txt
    ⚠️ Warning
    unofficial-pt-lightning-sphinx-theme 0.0.27.4 requires sphinx, which is not installed.
    sphinx-rtd-theme 1.1.1 requires sphinx, which is not installed.
    librosa 0.7.2 requires scikit-learn, which is not installed.
    librosa 0.7.2 requires resampy, which is not installed.
    librosa 0.7.2 requires numba, which is not installed.
    ipython 5.10.0 requires pygments, which is not installed.
    ipython 5.10.0 requires simplegeneric, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity
    :---: | :--- | :--- | :--- | :--- | :---
    medium severity | 551/1000 (Recently disclosed, Has a fix available, CVSS 5.3) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-SETUPTOOLS-3180412 | setuptools: 39.0.1 -> 65.5.1 | No | No Known Exploit
    medium severity | 551/1000 (Recently disclosed, Has a fix available, CVSS 5.3) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-WHEEL-3180413 | wheel: 0.30.0 -> 0.38.0 | No | No Known Exploit

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS)

    opened by turian 1
  • [Snyk] Fix for 2 vulnerabilities

    [Snyk] Fix for 2 vulnerabilities

    This PR was automatically created by Snyk using the credentials of a real user.


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • requirements.txt
    ⚠️ Warning
    unofficial-pt-lightning-sphinx-theme 0.0.27.4 requires sphinx, which is not installed.
    sphinx-rtd-theme 1.1.1 requires sphinx, which is not installed.
    librosa 0.7.2 requires scikit-learn, which is not installed.
    librosa 0.7.2 requires resampy, which is not installed.
    librosa 0.7.2 requires numba, which is not installed.
    ipython 5.10.0 requires pygments, which is not installed.
    ipython 5.10.0 requires simplegeneric, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity
    :---: | :--- | :--- | :--- | :--- | :---
    medium severity | 551/1000 (Recently disclosed, Has a fix available, CVSS 5.3) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-SETUPTOOLS-3180412 | setuptools: 39.0.1 -> 65.5.1 | No | No Known Exploit
    medium severity | 551/1000 (Recently disclosed, Has a fix available, CVSS 5.3) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-WHEEL-3180413 | wheel: 0.30.0 -> 0.38.0 | No | No Known Exploit

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS)

    opened by turian 1
  • Bump black from 22.6.0 to 22.12.0

    Bump black from 22.6.0 to 22.12.0

    Bumps black from 22.6.0 to 22.12.0.

    Release notes

    Sourced from black's releases.

    22.12.0

    Preview style

    • Enforce empty lines before classes and functions with sticky leading comments (#3302)
    • Reformat empty and whitespace-only files as either an empty file (if no newline is present) or as a single newline character (if a newline is present) (#3348)
    • Implicitly concatenated strings used as function args are now wrapped inside parentheses (#3307)
    • Correctly handle trailing commas that are inside a line's leading non-nested parens (#3370)

    Configuration

    • Fix incorrectly applied .gitignore rules by considering the .gitignore location and the relative path to the target file (#3338)
    • Fix incorrectly ignoring .gitignore presence when more than one source directory is specified (#3336)

    Parser

    • Parsing support has been added for walruses inside generator expression that are passed as function args (for example, any(match := my_re.match(text) for text in texts)) (#3327).

    Integrations

    • Vim plugin: Optionally allow using the system installation of Black via let g:black_use_virtualenv = 0(#3309)

    22.10.0

    Highlights

    • Runtime support for Python 3.6 has been removed. Formatting 3.6 code will still be supported until further notice.

    Stable style

    • Fix a crash when # fmt: on is used on a different block level than # fmt: off (#3281)

    Preview style

    ... (truncated)


    Commits
    • 2ddea29 Prepare release 22.12.0 (#3413)
    • 5b1443a release: skip bad macos wheels for now (#3411)
    • 9ace064 Bump peter-evans/find-comment from 2.0.1 to 2.1.0 (#3404)
    • 19c5fe4 Fix CI with latest flake8-bugbear (#3412)
    • d4a8564 Bump sphinx-copybutton from 0.5.0 to 0.5.1 in /docs (#3390)
    • 2793249 Wordsmith current_style.md (#3383)
    • d97b789 Remove whitespaces of whitespace-only files (#3348)
    • c23a5c1 Clarify that Black runs with --safe by default (#3378)
    • 8091b25 Correctly handle trailing commas that are inside a line's leading non-nested ...
    • ffaaf48 Compare each .gitignore found with an appropiate relative path (#3338)
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 1
  • [Snyk] Security upgrade protobuf from 3.20.1 to 3.20.2

    [Snyk] Security upgrade protobuf from 3.20.1 to 3.20.2

    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • docs/requirements.txt
    ⚠️ Warning
    unofficial-pt-lightning-sphinx-theme 0.0.27.4 requires sphinx, which is not installed.
    sphinx-rtd-theme 1.1.1 requires sphinx, which is not installed.
    librosa 0.7.2 requires scikit-learn, which is not installed.
    librosa 0.7.2 requires resampy, which is not installed.
    librosa 0.7.2 requires numba, which is not installed.
    ipython 5.10.0 requires simplegeneric, which is not installed.
    ipython 5.10.0 requires pygments, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity
    :---: | :--- | :--- | :--- | :--- | :---
    medium severity | 499/1000 (Has a fix available, CVSS 5.7) | Denial of Service (DoS) SNYK-PYTHON-PROTOBUF-3031740 | protobuf: 3.20.1 -> 3.20.2 | No | No Known Exploit

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Learn about vulnerability in an interactive lesson of Snyk Learn.

    opened by snyk-bot 1
  • [Snyk] Security upgrade protobuf from 3.20.1 to 3.20.2

    [Snyk] Security upgrade protobuf from 3.20.1 to 3.20.2

    This PR was automatically created by Snyk using the credentials of a real user.


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • requirements.txt
    ⚠️ Warning
    unofficial-pt-lightning-sphinx-theme 0.0.27.4 requires sphinx, which is not installed.
    sphinx-rtd-theme 1.1.1 requires sphinx, which is not installed.
    librosa 0.7.2 requires scikit-learn, which is not installed.
    librosa 0.7.2 requires resampy, which is not installed.
    librosa 0.7.2 requires numba, which is not installed.
    ipython 5.10.0 requires simplegeneric, which is not installed.
    ipython 5.10.0 requires pygments, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity
    :---: | :--- | :--- | :--- | :--- | :---
    medium severity | 499/1000 (Has a fix available, CVSS 5.7) | Denial of Service (DoS) SNYK-PYTHON-PROTOBUF-3031740 | protobuf: 3.20.1 -> 3.20.2 | No | No Known Exploit

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Learn about vulnerability in an interactive lesson of Snyk Learn.

    opened by turian 1
  • [Snyk] Fix for 2 vulnerabilities

    [Snyk] Fix for 2 vulnerabilities

    This PR was automatically created by Snyk using the credentials of a real user.


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • docs/requirements.txt
    ⚠️ Warning
    unofficial-pt-lightning-sphinx-theme 0.0.27.4 requires sphinx, which is not installed.
    sphinx-rtd-theme 1.1.1 requires sphinx, which is not installed.
    scipy 1.2.3 requires numpy, which is not installed.
    pytest-cov 2.12.1 requires coverage, which is not installed.
    matplotlib 2.2.5 requires numpy, which is not installed.
    librosa 0.7.2 requires numpy, which is not installed.
    librosa 0.7.2 requires scikit-learn, which is not installed.
    librosa 0.7.2 requires resampy, which is not installed.
    librosa 0.7.2 requires numba, which is not installed.
    ipython 5.10.0 requires simplegeneric, which is not installed.
    ipython 5.10.0 requires pygments, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity
    :---: | :--- | :--- | :--- | :--- | :---
    low severity | 441/1000 (Recently disclosed, Has a fix available, CVSS 3.1) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-SETUPTOOLS-3113904 | setuptools: 39.0.1 -> 65.5.1 | No | No Known Exploit
    medium severity | 551/1000 (Recently disclosed, Has a fix available, CVSS 5.3) | Regular Expression Denial of Service (ReDoS) SNYK-PYTHON-WHEEL-3092128 | wheel: 0.30.0 -> 0.38.0 | No | No Known Exploit

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS)

    opened by turian 1
Releases(v1.0.2)
  • v1.0.2(Aug 19, 2022)

    This update includes some bug fixes and an update to the Signal class to enable checkpointing.

    What's Changed

    • Adding the drum nebula to docs by @jorshi in https://github.com/torchsynth/torchsynth/pull/374
    • Sphinx fixs by @turian in https://github.com/torchsynth/torchsynth/pull/379
    • Codecov Action Fix by @jorshi in https://github.com/torchsynth/torchsynth/pull/380
    • Updating torch.range to torch.arange by @jorshi in https://github.com/torchsynth/torchsynth/pull/378 - torch.range, which is deprecated, was producing incorrect values for larger batch_ids passed into a synth voice.
    • Fix floordiv by @turian in https://github.com/torchsynth/torchsynth/pull/382
    • Add new_empty to Signal by @turian in https://github.com/torchsynth/torchsynth/pull/384 - this enables deepcopy on torchsynth Signals and allows for checkpointing

    Full Changelog: https://github.com/torchsynth/torchsynth/compare/v1.0.1...v1.0.2

    Source code(tar.gz)
    Source code(zip)
  • v1.0.1(Jun 29, 2021)

  • v1.0.0(Apr 27, 2021)

    • All AbstractSynth now return multi-modal tuples: (audio batch, parameter batch, is_train batch)
    • Batch sizes that are multiples of 32 are now supported for any reproducible output. (128 is still the default.)
    • More detailed documentation, including a documentation fix proposed by @daisukelab.
    • Default voice nebula added, as well as a drum nebula.
    • Modulation signal input on the VCO and LFO is now optional.
    Source code(tar.gz)
    Source code(zip)
  • v0.9.2(Apr 13, 2021)

  • v0.9.1(Apr 11, 2021)

Owner
torchsynth
The fastest synthesizer in the universe