Clean-Samples

Like Dirt-Samples, but cleaned up, with clear provenance and license info (generally a permissive Creative Commons license, but check the metadata for specifics).

The bin/meta.py Python script is a reference implementation that can generate a '.cleanmeta' metadata file for your own sample pack folder. See below for how to use it and how to contribute a sample pack of your own.

If you want to use these samples outside the Tidal/SuperDirt/SuperCollider ecosystem, you are very welcome to. You're encouraged to join the discussion in the GitHub issue tracker so that we can develop a standard way to share and index/signpost these packs.

See tidalcycles/sounds-repetition for an example sample pack, which contains two sets of samples.

How to contribute a sample pack

Please only contribute samples if you are happy to share them under a permissive license such as CC0 or a similar Creative Commons license.

If you are unfamiliar with the 'git' software, please create an issue here with a short description of your samples and a link to them, and someone should be along to help shortly.

If you are familiar with git and running python scripts (or happy to learn), please follow the below instructions. This is all new - if anything is unclear please create an issue, thanks!

  1. Get your samples together in .wav format, editing them if necessary (see below for advice).

  2. Create a new repository. This isn't essential, but consider putting 'sounds-' in front of its name, e.g. 'sounds-303bass' for your 303 bass samples.

  3. Add your samples to the repository. For an example of how to organise them, see this sample pack: tidalcycles/sounds-repetition, which has two sets of samples, with a subfolder for each.

  4. Create a '.cleanmeta' metadata file for each subfolder. Again, see tidalcycles/sounds-repetition for examples. There is a Python script, bin/meta.py, which can generate the metadata file for you; run it without parameters for help. Here is an example command line, which was used to generate repetition.cleanmeta:

    ../Clean-Samples/bin/meta.py --maintainer alex --email [email protected] --copyright "(c) 2021 Alex McLean" --license CC0 --provenance "Various dodgy speech synths" --shortname repetition --sample-subfolder repetition/ --write .
    

    After generating the file, edit it with a text editor to fill in any missing info.

  5. When ready, add the URL of your repository to https://github.com/tidalcycles/Clean-Samples/blob/main/Clean-Samples.quark (the quark file for Clean-Samples) in a pull request. You could also add it to the SuperCollider quarks database, or we can do that for you if you prefer, so that we can accept the PR to Clean-Samples once it's accepted as a quark. Before opening the pull request, you can sanity-check your folder layout with the sketch that follows this list.
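
Here is a rough Python sketch for that sanity check. It assumes the layout used by tidalcycles/sounds-repetition (one subfolder per set of samples, each containing .wav files and a single .cleanmeta file); it is an illustration, not part of Clean-Samples itself.

    # Rough sketch: check that each sample subfolder contains .wav files
    # and exactly one .cleanmeta file. The expected layout is an assumption
    # based on the sounds-repetition example pack.
    from pathlib import Path

    def check_pack(root="."):
        for sub in sorted(Path(root).iterdir()):
            if not sub.is_dir() or sub.name.startswith("."):
                continue
            wavs = list(sub.glob("*.wav"))
            metas = list(sub.glob("*.cleanmeta"))
            status = "ok" if wavs and len(metas) == 1 else "needs attention"
            print(f"{sub.name}: {len(wavs)} wav(s), {len(metas)} .cleanmeta -> {status}")

    check_pack()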

Advice for preparing samples

You can use free/open source software like Audacity for editing samples.

As a minimum, be sure to trim any silence from the beginning/end of the samples, and make sure the start and end of each sample are at zero to avoid clicks (you might need to fade in/out by a tiny amount to achieve this).
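
If you prefer to batch this clean-up, here is a minimal Python sketch of the idea, assuming the numpy and soundfile libraries (neither is required by Clean-Samples; the silence threshold and fade length are arbitrary starting points, not recommendations):

    # Sketch: trim leading/trailing silence and apply short fades so the
    # sample starts and ends at zero. Assumes numpy and soundfile are
    # installed; threshold and fade length are arbitrary starting points.
    import numpy as np
    import soundfile as sf

    def tidy(path_in, path_out, threshold=1e-4, fade_ms=5):
        data, rate = sf.read(path_in)
        mono = data if data.ndim == 1 else data.mean(axis=1)

        # Trim samples quieter than the threshold from both ends
        loud = np.flatnonzero(np.abs(mono) > threshold)
        if loud.size == 0:
            raise ValueError("sample appears to be silent")
        data = data[loud[0]:loud[-1] + 1]

        # Short linear fade in/out so the endpoints sit at zero
        n = min(int(rate * fade_ms / 1000), len(data) // 2)
        if n > 0:
            ramp = np.linspace(0.0, 1.0, n)
            shape = ramp if data.ndim == 1 else ramp[:, None]
            data[:n] *= shape
            data[-n:] *= shape[::-1]

        sf.write(path_out, data, rate)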

Consider adjusting the volume/loudness too, for example normalising to -1.0 dB, but this is very subjective and will depend on the nature of the samples and the music they're used with. For example, distorted gabba samples are intended to be very loud, and a whisper is intended to sound quiet. The average non-percussive sample should be around -23 dB RMS. Samples shouldn't exceed 0 dB true peak; the EBU recommends a maximum of -1 dBTP, measured with 4x oversampling. Samples generally shouldn't have DC offset, although e.g. some kick drum samples naturally have a non-zero mean.
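
As a rough way to check those numbers, here is a small Python sketch, again assuming numpy and soundfile. Plain RMS and sample peak are only approximations of programme loudness and true peak; a dedicated EBU R128 / true-peak meter will give more reliable readings:

    # Sketch: report approximate RMS level, sample peak and DC offset.
    # Sample peak is not true peak and plain RMS is not EBU loudness;
    # treat these as rough checks only.
    import numpy as np
    import soundfile as sf

    def levels(path):
        data, rate = sf.read(path)
        mono = data if data.ndim == 1 else data.mean(axis=1)
        rms_db = 20 * np.log10(np.sqrt(np.mean(mono ** 2)) + 1e-12)
        peak_db = 20 * np.log10(np.max(np.abs(data)) + 1e-12)
        dc = float(np.mean(mono))
        print(f"RMS: {rms_db:.1f} dB  peak: {peak_db:.1f} dBFS  DC offset: {dc:.5f}")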

For more advice, you could join the discussion here.

Thanks!
