Code of the lileonardo team for the Emotion and Theme Recognition in Music task of MediaEval 2021

Overview

This repository contains the code for the submission of the lileonardo team to the Emotion and Theme Recognition in Music task of MediaEval 2021 (results).

Requirements

  • python >= 3.7
  • Install dependencies with pip install -r requirements.txt (ideally in a virtual environment)
  • Download data from the MTG-Jamendo Dataset into data/jamendo: audio files go to data/jamendo/mp3 and the provided mel spectrograms to data/jamendo/melspecs.
  • Compute 128-band mel spectrograms and store them in data/jamendo/melspecs2 by running the following command (a sketch of the computation follows this list):
    python preprocess.py experiments/preprocessing/melspecs2.json
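
The repository's preprocess.py drives this step from the JSON config above. For orientation, here is a minimal sketch of how a 128-band mel spectrogram could be computed with librosa; the sample rate, FFT size, and hop length below are illustrative assumptions, not the settings from experiments/preprocessing/melspecs2.json:

    # Illustrative sketch only: the authoritative settings live in
    # experiments/preprocessing/melspecs2.json and are applied by preprocess.py.
    import numpy as np
    import librosa

    def compute_melspec(mp3_path, sr=44100, n_mels=128, n_fft=2048, hop_length=512):
        """Load an audio file and return a log-scaled 128-band mel spectrogram."""
        audio, sr = librosa.load(mp3_path, sr=sr, mono=True)
        mel = librosa.feature.melspectrogram(
            y=audio, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
        )
        return librosa.power_to_db(mel).astype(np.float32)

    # e.g. np.save("data/jamendo/melspecs2/<track>.npy",
    #              compute_melspec("data/jamendo/mp3/<track>.mp3"))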

Usage

Run python main.py experiments/DIR, where experiments/DIR is a directory containing the training parameters.

Parameters can be overridden with command-line arguments:

python main.py --help
usage: main.py [-h] [--data_dir DATA] [--num_workers NUM] [--restart_training] [--restore_name NAME]
               [--num_epochs EPOCHS] [--learning_rate LR] [--weight_decay WD] [--dropout DROPOUT]
               [--batch_size BS] [--manual_seed SEED] [--model MODEL] [--loss LOSS]
               [--calculate_stats]
               DIRECTORY

Train according to parameters in DIRECTORY

positional arguments:
  DIRECTORY            path of the directory containing parameters

optional arguments:
  -h, --help           show this help message and exit
  --data_dir DATA      path of the directory containing data (default: data)
  --num_workers NUM    number of workers for dataloader (default: 4)
  --restart_training   overwrite previous training (default is to resume previous training)
  --restore_name NAME  name of checkpoint to restore (default: last)
  --num_epochs EPOCHS  override number of epochs in parameters
  --learning_rate LR   override learning rate
  --weight_decay WD    override weight decay
  --dropout DROPOUT    override dropout
  --batch_size BS      override batch size
  --manual_seed SEED   override manual seed
  --model MODEL        override model
  --loss LOSS          override loss
  --calculate_stats    recalculate mean and std of data (default is to calculate only when they
                       don't exist in parameters)
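
For example, to train (or resume training) in a given experiment directory while overriding a few of the parameters documented above, where DIR is a placeholder as before:

python main.py experiments/DIR --num_epochs 100 --learning_rate 1e-4 --batch_size 32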

Ensemble predictions

The predictions are averaged by running:

python average.py --outputs experiments/convs-m96*/predictions/test-last-swa-outputs.npy --targets experiments/convs-m96*/predictions/test-last-swa-targets.npy --preds_path predictions/convs.npy
python average.py --outputs experiments/filters-m128*/predictions/test-last-swa-outputs.npy --targets experiments/filters-m128*/predictions/test-last-swa-targets.npy --preds_path predictions/filters.npy
python average.py --outputs predictions/convs.npy predictions/filters.npy --targets predictions/targets.npy
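
For reference, a minimal sketch of what this averaging amounts to. The evaluation shown (macro PR-AUC and ROC-AUC via scikit-learn) is an assumption about how the averaged outputs would be scored; average.py in this repository is the authoritative implementation:

    # Illustrative sketch only, not a copy of average.py.
    import numpy as np
    from sklearn.metrics import average_precision_score, roc_auc_score

    output_files = ["predictions/convs.npy", "predictions/filters.npy"]
    outputs = np.mean([np.load(f) for f in output_files], axis=0)  # (n_tracks, n_tags)
    targets = np.load("predictions/targets.npy")                   # binary tag matrix

    print("PR-AUC :", average_precision_score(targets, outputs, average="macro"))
    print("ROC-AUC:", roc_auc_score(targets, outputs, average="macro"))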