ARCA23K Baseline System

Overview

This is the source code for the baseline system associated with the ARCA23K dataset. Details about ARCA23K and the baseline system can be found in our DCASE2021 paper [1].

Requirements

This software requires Python >=3.8. To install the dependencies, run:

poetry install

or:

pip install -r requirements.txt

You are also free to use another package manager (e.g. Conda).

The ARCA23K and FSD50K datasets are also required. For convenience, bash scripts are provided to download the datasets automatically; they depend on bash, curl, and unzip. Run the following commands from the root directory of the project:

$ scripts/download_arca23k.sh
$ scripts/download_fsd50k.sh

This will download the datasets to a directory called _datasets/. When running the software, the --arca23k_dir and --fsd50k_dir options (refer to the Usage section) can be used to specify the location of the datasets. This is only necessary if the dataset paths are different from the default.
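
For example, if the datasets are stored somewhere other than _datasets/, the locations can be passed explicitly (the paths below are illustrative):

python baseline/train.py arca23k --arca23k_dir /data/ARCA23K --fsd50k_dir /data/FSD50K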

Usage

The general usage pattern is:

python <script> [-f PATH] <args...> [options...]

The command-line options can also be specified in configuration files. The path of a configuration file can be passed to the program using the --config_file (or -f) command-line option, which can be used multiple times. Options passed on the command line override those in the config file(s). See default.ini for an example of a config file. Note that default.ini does not need to be specified on the command line and should not be modified.
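
As an illustration, a custom config file might set a few of the training options (the file name and values below are arbitrary; mirror the structure of default.ini, including any section headers):

; my_config.ini
n_epochs = 50
batch_size = 64
lr = 0.0005

It can then be passed to a script with -f my_config.ini, and any option given on the command line (e.g. --lr 0.001) takes precedence over the value in the file.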

Training

To train a model, run:

python baseline/train.py DATASET [-f FILE] [--experiment_id ID] [--work_dir DIR] [--arca23k_dir DIR] [--fsd50k_dir DIR] [--frac NUM] [--sample_rate NUM] [--block_length NUM] [--hop_length NUM] [--features SPEC] [--cache_features BOOL] [--model {vgg9a,vgg11a}] [--weights_path PATH] [--label_noise DICT] [--n_epochs N] [--batch_size N] [--lr NUM] [--lr_scheduler SPEC] [--partition SPEC] [--seed N] [--cuda BOOL] [--n_workers N] [--overwrite BOOL]

The DATASET argument accepts the following values:

  • arca23k - Train using the ARCA23K dataset.
  • arca23k-fsd - Train using the ARCA23K-FSD dataset.
  • mixed-p - Train using a mixture of ARCA23K and ARCA23K-FSD. Replace p with a fraction specifying the proportion of ARCA23K examples in the training set (see the example below).
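
For example, assuming p is written as a decimal fraction (the value 0.5 below is illustrative), the following would train on an even mixture of the two datasets:

python baseline/train.py mixed-0.5 --experiment_id mixed_experiment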

The --experiment_id option is used to differentiate experiments. It determines where the output files are saved relative to the path given by the --work_dir option. When running multiple trials, either use the --seed option to specify different random seeds or set it to a negative number to disable setting the random seed. Otherwise, the learned models will be identical across different trials.

Example:

python baseline/train.py arca23k --experiment_id my_experiment
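
To run multiple trials of the same configuration, vary both the experiment ID and the seed (the IDs and seed values below are arbitrary):

python baseline/train.py arca23k --experiment_id trial_1 --seed 1000
python baseline/train.py arca23k --experiment_id trial_2 --seed 2000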

Prediction

To compute predictions, run:

python baseline/predict.py DATASET SUBSET [-f FILE] [--experiment_id ID] [--work_dir DIR] [--arca23k_dir DIR] [--fsd50k_dir DIR] [--output_name FILE_NAME] [--clean BOOL] [--sample_rate NUM] [--block_length NUM] [--features SPEC] [--cache_features BOOL] [--weights_path PATH] [--batch_size N] [--partition SPEC] [--n_workers N] [--seed N] [--cuda BOOL]

The SUBSET argument must be set to either training, validation, or test.

Example:

python baseline/predict.py arca23k test --experiment_id my_experiment

Evaluation

To evaluate the predictions, run:

python baseline/evaluate.py DATASET SUBSET [-f FILE] [--experiment_id LIST] [--work_dir DIR] [--arca23k_dir DIR] [--fsd50k_dir DIR] [--output_name FILE_NAME] [--cached BOOL]

The SUBSET argument must be set to either training, validation, or test.

Example:

python baseline/evaluate.py arca23k test --experiment_id my_experiment

Citing

If you wish to cite this work, please cite the following paper:

[1] T. Iqbal, Y. Cao, A. Bailey, M. D. Plumbley, and W. Wang, “ARCA23K: An audio dataset for investigating open-set label noise”, in Proceedings of the Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021), 2021, Barcelona, Spain, pp. 201–205.

BibTeX:

@inproceedings{Iqbal2021,
    author = {Iqbal, T. and Cao, Y. and Bailey, A. and Plumbley, M. D. and Wang, W.},
    title = {{ARCA23K}: An audio dataset for investigating open-set label noise},
    booktitle = {Proceedings of the Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021)},
    pages = {201--205},
    year = {2021},
    address = {Barcelona, Spain},
}