An implementation of WaveNet with fast generation

Overview

pytorch-wavenet

This is an implementation of the WaveNet architecture, as described in the original paper.
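At its core, the network is a stack of dilated causal convolutions with gated activations, residual connections, and skip connections, trained as a 256-way classifier over mu-law-quantized audio samples. Below is a minimal sketch of that idea written for a current PyTorch version; the class names (CausalConv1d, ResidualBlock, TinyWaveNet) are illustrative assumptions, not this repository's actual modules.

```python
# Illustrative sketch of WaveNet's core building blocks; not the repo's code.
import torch
import torch.nn as nn

class CausalConv1d(nn.Conv1d):
    """Dilated convolution whose output at time t sees only inputs <= t."""
    def __init__(self, in_ch, out_ch, kernel_size=2, dilation=1):
        super().__init__(in_ch, out_ch, kernel_size,
                         padding=(kernel_size - 1) * dilation,
                         dilation=dilation)

    def forward(self, x):
        # trim the right-hand padding so no "future" samples leak in
        return super().forward(x)[:, :, :x.size(2)]

class ResidualBlock(nn.Module):
    """Gated activation unit with residual and skip connections."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.filter = CausalConv1d(channels, channels, dilation=dilation)
        self.gate = CausalConv1d(channels, channels, dilation=dilation)
        self.res = nn.Conv1d(channels, channels, 1)
        self.skip = nn.Conv1d(channels, channels, 1)

    def forward(self, x):
        z = torch.tanh(self.filter(x)) * torch.sigmoid(self.gate(x))
        return x + self.res(z), self.skip(z)

class TinyWaveNet(nn.Module):
    """A few residual blocks with doubling dilations: 1, 2, 4, ..."""
    def __init__(self, classes=256, channels=32, blocks=8):
        super().__init__()
        self.input = CausalConv1d(classes, channels)
        self.blocks = nn.ModuleList(
            ResidualBlock(channels, 2 ** i) for i in range(blocks))
        self.output = nn.Sequential(
            nn.ReLU(), nn.Conv1d(channels, classes, 1))

    def forward(self, x):            # x: one-hot audio, (batch, 256, time)
        x = self.input(x)
        skips = 0
        for block in self.blocks:
            x, s = block(x)
            skips = skips + s
        return self.output(skips)    # per-step logits over 256 classes
```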

Features

  • Automatic creation of a dataset (training and validation/test set) from all sound files (.wav, .aiff, .mp3) in a directory (a mu-law quantization sketch follows this list)
  • Efficient multithreaded data loading
  • Logging to TensorBoard (Training loss, validation loss, validation accuracy, parameter and gradient histograms, generated samples)
  • Fast generation, as introduced here (a sketch of the caching idea also follows this list)
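
The dataset-creation feature rests on the preprocessing step from the WaveNet paper: raw audio is companded with the mu-law transform and quantized to 256 classes. A hedged sketch of that step, assuming librosa for loading; the file name and variable names are hypothetical, not the repo's API:

```python
# Illustrative mu-law preprocessing; not the repo's actual dataset code.
import numpy as np
import librosa

def mu_law_encode(audio, channels=256):
    """Map float audio in [-1, 1] to integer classes 0..channels-1."""
    mu = channels - 1
    compressed = np.sign(audio) * np.log1p(mu * np.abs(audio)) / np.log1p(mu)
    return np.clip((compressed + 1) / 2 * mu + 0.5, 0, mu).astype(np.int64)

audio, _ = librosa.load("example.wav", sr=16000, mono=True)  # hypothetical file
targets = mu_law_encode(audio)                     # one class per sample
inputs = np.eye(256, dtype=np.float32)[targets].T  # one-hot, shape (256, time)
```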
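Fast generation avoids re-running the full convolution stack for every new sample: each dilated layer keeps a queue of its past inputs, so producing one sample costs a single two-tap computation per layer. The following is a minimal sketch of that caching idea under the assumption of kernel-size-2 layers; it is not this repository's implementation:

```python
# Illustrative activation-caching for fast sampling; not the repo's code.
from collections import deque
import torch

class FastLayer:
    """One dilated layer at sampling time: a queue replaces the convolution."""
    def __init__(self, conv, dilation, channels):
        self.conv = conv  # a trained kernel-size-2 causal convolution
        # holds this layer's input from `dilation` steps in the past
        self.queue = deque([torch.zeros(channels)] * dilation, maxlen=dilation)

    def step(self, x_t):
        """One time step: a single 2-tap computation instead of a full conv."""
        x_past = self.queue.popleft()   # input from `dilation` steps ago
        self.queue.append(x_t)
        w = self.conv.weight            # shape (out_ch, in_ch, 2)
        return w[:, :, 0] @ x_past + w[:, :, 1] @ x_t + self.conv.bias
```

At sampling time, the newest sample is pushed through such a stack layer by layer, a class is drawn from the softmax over the 256 outputs, and the result is fed back in as the next input.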

Requirements

  • python 3
  • pytorch 0.3
  • numpy
  • librosa
  • jupyter
  • tensorflow for TensorBoard logging

Demo

For an introduction on how to use this model, take a look at the WaveNet demo notebook. Audio clips generated by a simple trained model can be found in the generated samples directory.
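
For orientation before opening the notebook, a single training step boils down to next-sample classification with a cross-entropy loss. This hypothetical snippet reuses the illustrative names from the sketches above (TinyWaveNet, inputs, targets) and is not the notebook's code:

```python
# Hypothetical training step: predict each sample from the samples before it.
import torch
import torch.nn.functional as F

model = TinyWaveNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.from_numpy(inputs).unsqueeze(0)   # (1, 256, time), one-hot input
y = torch.from_numpy(targets).unsqueeze(0)  # (1, time), integer classes

logits = model(x)[:, :, :-1]                # prediction for sample t+1 given samples <= t
loss = F.cross_entropy(logits, y[:, 1:])    # next-sample classification loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
```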

Owner

Vincent Herrmann