Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding

Overview

⚠️ Check out the develop branch to see what is coming in pyannote.audio 2.0.

Neural speaker diarization with pyannote-audio

pyannote.audio is an open-source toolkit written in Python for speaker diarization. Based on the PyTorch machine learning framework, it provides a set of trainable end-to-end neural building blocks that can be combined and jointly optimized to build speaker diarization pipelines.

pyannote.audio also comes with pretrained models covering a wide range of domains for voice activity detection, speaker change detection, overlapped speech detection, and speaker embedding.

Installation

pyannote.audio only supports Python 3.7 (or later) on Linux and macOS. It might work on Windows, but there is no guarantee that it does, nor any plan to add official support for Windows.

The instructions below assume that pytorch has been installed using the instructions from https://pytorch.org.

$ pip install pyannote.audio==1.1.1

Documentation and tutorials

Until proper documentation is released, note that part of the API is described in this tutorial.
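
In the meantime, here is a minimal sketch of applying a pretrained v1.x pipeline through torch.hub (the 'dia' entry point and the file-dictionary input follow the v1 convention also used in the issues below):

import torch

# load a pretrained speaker diarization pipeline from torch.hub (v1.x API)
pipeline = torch.hub.load('pyannote/pyannote-audio', 'dia')

# pipelines take a plain file dictionary and return a pyannote.core.Annotation
diarization = pipeline({'audio': 'file.wav'})
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")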

Citation

If you use pyannote.audio, please use the following citation:

@inproceedings{Bredin2020,
  Title = {{pyannote.audio: neural building blocks for speaker diarization}},
  Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe},
  Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing},
  Address = {Barcelona, Spain},
  Month = {May},
  Year = {2020},
}
Comments
  • [WIP] Multilabel Detection

    [WIP] Multilabel Detection

    This is a new PR for the VTC feature, this time based on a cleaner implementation. I'm making a new PR so as to keep the former branch "clean" (and prevent any mishaps).

    What is done:

    • renamed the SpeakerTracking task to MultilabelDetection
    • added MultilabelPipeline
    • updated MultilabelFscore.report()
    • tested the new preprocessor

    What's to be done:

    • [ ] re-test the new implementation on our clinical data (as well as the child data from @MarvinLvn )
    • [ ] maybe a couple of unit tests (especially for the preprocessor)
    • [ ] maybe make the aggregated "multilabel" fscore duration-based instead of file-based (see the sketch below)
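
    For that last item, a duration-weighted aggregate could look something like this sketch (hypothetical helper, not the actual MultilabelFscore API):

    def aggregate_fscore(per_file):
        # per_file: list of (duration, fscore) pairs, one per file;
        # weight each file's fscore by its duration instead of
        # averaging uniformly over files
        total = sum(duration for duration, _ in per_file)
        return sum(duration * fscore for duration, fscore in per_file) / total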
    opened by hadware 29
  • Trying the diarization pipeline on random .wav files

    Trying the diarization pipeline on random .wav files

    Hey, as suggested by the detailed tutorials, I went through them and trained all the models required for the pipeline. The pipeline works on the AMI dataset, but when I try to reproduce the results on other .wav files sampled at 16 kHz, mono, 256 bps, it is not able to diarize the audio. Here is a brief summary of what I actually did.

    1. Took a random meeting audio file, sampled at 16 kHz, mono, 256 bps
    2. Renamed it to ES2003a and replaced the actual ES2003a with it (thought of it as a workaround for creating another database)
    3. Ran all the pipelines (sad, scd, emb, diarization)

    Output :

    1. Speech activity detection works perfectly and is able to classify regions of speech.
    2. Speaker diarization doesn't work; everything is classified as 0.

    Can you please tell me whether the pipeline is giving wrong outputs for the diarization because I replaced the actual file, and what a better way to test the pipeline on random audio files would be?

    opened by saisumit 26
  • build error

    build error

    Hi, when I run pip install "pyannote.audio==0.3", I get the following error message:

    In file included from _pysndfile.cpp:471:0:
    pysndfile.hh:55:21: fatal error: sndfile.h: No such file or directory
     #include <sndfile.h>
                         ^
    compilation terminated.
    error: command 'gcc' failed with exit status 1

    Failed building wheel for pysndfile
    Running setup.py clean for pysndfile
    Failed to build pysndfile
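
    The missing sndfile.h header is shipped by libsndfile's development package, so a likely fix (assuming a Debian/Ubuntu system) is to install it before retrying:

    $ sudo apt-get install libsndfile1-dev   # provides sndfile.h
    $ pip install "pyannote.audio==0.3"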

    cannot_reproduce 
    opened by ChristopherLu 24
  • Add support for file handle to pyannote.audio.core.io.Audio

    Add support for file handle to pyannote.audio.core.io.Audio

    This is not currently supported:

    from pyannote.audio.core.io import Audio
    from pyannote.core import Segment
    audio = Audio()
    with open('file.wav', 'rb') as f:
        waveform, sample_rate = audio(f)
    with open('file.wav', 'rb') as f:
        waveform, sample_rate = audio.crop(f, Segment(10, 20))
    

    One has to do this instead:

    from pyannote.audio.core.io import Audio
    from pyannote.core import Segment
    audio = Audio()
    waveform, sample_rate = audio('file.wav')
    waveform, sample_rate = audio.crop('file.wav', Segment(10, 20))
    

    This is a limitation that might be problematic (e.g., with streamlit.file_uploader, which returns a file handle).
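
    As a stopgap (an assumption, not part of the current API), soundfile itself accepts file-like objects, so a small helper can build the (channel, time) tensor that pyannote.audio expects:

    import soundfile as sf
    import torch

    def waveform_from_handle(f):
        # soundfile reads from file-like objects directly; always_2d
        # yields (time, channel), transposed here to (channel, time)
        data, sample_rate = sf.read(f, dtype="float32", always_2d=True)
        return torch.from_numpy(data.T), sample_rate

    with open('file.wav', 'rb') as f:
        waveform, sample_rate = waveform_from_handle(f)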

    v2 
    opened by hbredin 20
  • ValueError: inconsistent

    ValueError: inconsistent "classes" (is ['non_change', 'change'], should be: ['non_speech', 'speech'])

    Describe the bug I'm trying to go through the diarization pipeline tutorial on my own data.

    I am trying to run "apply" on my own data and model for speaker change detection. I get an error that looks like it's trying to apply speech activity detection:

    ValueError: inconsistent "classes" (is ['non_change', 'change'], should be: ['non_speech', 'speech'])

    To Reproduce Steps to reproduce the behavior:

    $ export EXP_DIR=tutorials/pipelines/speaker_diarization 
    $ pyannote-audio scd  apply --step=0.1 --pretrained="<path to>/tutorials/models/speaker_change_detection/train/myData.SpeakerDiarization.general.train/validate_segmentation_fscore/myData.SpeakerDiarization.general.train" --subset=train ${EXP_DIR} myData.SpeakerDiarization.general
    
    

    pyannote environment

    $ pip freeze | grep pyannote
    pyannote.core==4.1
    pyannote.database==4.0.1
    pyannote.metrics==3.0.1
    pyannote.pipeline==1.5.2
    

    Additional context I only prepared a development set called "train" right now - so I'm running on that. I successfully ran the SAD apply step before moving to SCD.

    wontfix 
    opened by danFromTelAviv 20
  • An error was encountered while loading

    An error was encountered while loading "pyannote/speaker-diarization"

    Hello, when I run this code:

    from pyannote.audio import Pipeline
    pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization",
                                        use_auth_token="my_token")
    

    I get this error:

    Traceback (most recent call last):
      File "/home/dg/anaconda3/envs/pyannote/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 213, in hf_raise_for_status
        response.raise_for_status()
      File "/home/dg/anaconda3/envs/pyannote/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
        raise HTTPError(http_error_msg, response=self)
    requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/pyannote/segmentation/resolve/2022.07/pytorch_model.bin
    

    This happens whether I use a read token or a write token. Does anyone know how to fix it? Thanks.
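
    A plausible cause (an assumption based on the URL in the traceback): the pipeline also downloads the gated pyannote/segmentation model, whose user conditions must be accepted separately on Hugging Face. A minimal check with huggingface_hub:

    from huggingface_hub import hf_hub_download

    # if this also raises a 403, the token has probably not accepted
    # the user conditions of pyannote/segmentation
    hf_hub_download("pyannote/segmentation", "pytorch_model.bin",
                    revision="2022.07", use_auth_token="my_token")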

    opened by Zpadger 18
  • [WIP] Feat/vtc

    [WIP] Feat/vtc

    This is a working PR on the future VTC implementation, inspired by @MarvinLvn's work, to be merged into the next release of pyannote-audio.

    Note: nothing has been done yet, this is just to get things started.

    wontfix 
    opened by hadware 17
  • Trying to finetune model for new speaker

    Trying to finetune model for new speaker

    I am trying to fine-tune models to support one more speaker, but it looks like I am doing something wrong.

    I want to use the "dia_hard" pipeline, so I need to fine-tune these models: {sad_dihard, scd_dihard, emb_voxceleb}.

    For my speaker I have one WAV file with a duration of more than 1 hour.

    So, I created database.yml file:

    Databases:
       IK: /content/fine/kirilov/{uri}.wav
    
    Protocols:
        IK:
           SpeakerDiarization:
              kirilov:
                train:
                   uri: train.lst
                   annotation: train.rttm
                   annotated: train.uem
    

    and put additional files near database.yml:

    kirilov
    ├── database.yml
    ├── kirilov.wav
    ├── train.lst
    ├── train.rttm
    └── train.uem
    

    train.lst: kirilov

    train.rttm: SPEAKER kirilov 1 0.0 3600.0 <NA> <NA> Kirilov <NA> <NA>

    train.uem: kirilov NA 0.0 3600.0

    I assume this tells the trainer to use the kirilov.wav file and take 3600 seconds of audio from it for training.
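
    As a side note, a minimal sanity check that the protocol resolves correctly (assuming PYANNOTE_DATABASE_CONFIG points at this database.yml) could be:

    from pyannote.database import get_protocol

    # iterate over training files to confirm uri/annotation/annotated resolve
    protocol = get_protocol('IK.SpeakerDiarization.kirilov')
    for current_file in protocol.train():
        print(current_file['uri'], current_file['annotated'])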

    Now I fine-tune the models; the current folder is /content/fine/kirilov, so database.yml is taken from the current directory:

    !pyannote-audio sad train --pretrained=sad_dihard --subset=train --to=1 --parallel=4 "/content/fine/sad" IK.SpeakerDiarization.kirilov
    !pyannote-audio scd train --pretrained=scd_dihard --subset=train --to=1 --parallel=4 "/content/fine/scd" IK.SpeakerDiarization.kirilov
    !pyannote-audio emb train --pretrained=emb_voxceleb --subset=train --to=1 --parallel=4 "/content/fine/emb" IK.SpeakerDiarization.kirilov
    

    Output looks like:

    Using cache found in /root/.cache/torch/hub/pyannote_pyannote-audio_develop
    Loading labels: 0file [00:00, ?file/s]/usr/local/lib/python3.6/dist-packages/pyannote/database/protocol/protocol.py:128: UserWarning:
    
    Existing key "annotation" may have been modified.
    
    Loading labels: 1file [00:00, 20.49file/s]
    /usr/local/lib/python3.6/dist-packages/pyannote/audio/train/trainer.py:128: UserWarning:
    
    Did not load optimizer state (most likely because current training session uses a different loss than the one used for pre-training).
    
    2020-06-19 15:35:26.763592: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
    Training:   0%|                                        | 0/1 [00:00<?, ?epoch/s]
    Epoch #1:   0%|                                       | 0/29 [00:00<?, ?batch/s]
    Epoch #1:   0%|                           | 0/29 [00:00<?, ?batch/s, loss=0.676]
    Epoch #1:   3%|▋                  | 1/29 [00:00<00:26,  1.04batch/s, loss=0.676]
    

    Etc.

    And then I try to run the pipeline with the new .pt files:

    import os
    import torch
    from pyannote.audio.pipeline import SpeakerDiarization
    pipeline = SpeakerDiarization(embedding = "/content/fine/emb/train/IK.SpeakerDiarization.kirilov.train/weights/0001.pt", 
                                  sad_scores = "/content/fine/sad/train/IK.SpeakerDiarization.kirilov.train/weights/0001.pt",
                                  scd_scores = "/content/fine/scd/train/IK.SpeakerDiarization.kirilov.train/weights/0001.pt",
                                  method= "affinity_propagation")
    
    #params from dia_dihard\train\X.SpeakerDiarization.DIHARD_Official.development\params.yml
    pipeline.load_params("/content/drive/My Drive/pyannote/params.yml")
    FILE = {'audio': "/content/groundtruth/new.wav"}
    diarization = pipeline(FILE)
    diarization
    

    The result is that for my new.wav the whole audio is recognized as a single speaker talking without pauses. So I assume that the models were broken, and it does not matter whether I train for 1 epoch or for 100.

    In case I use:

    1. 0000.pt - I assume these are the original models
    pipeline = SpeakerDiarization(embedding = "/content/fine/emb/train/IK.SpeakerDiarization.kirilov.train/weights/0000.pt", 
                                  sad_scores = "/content/fine/sad/train/IK.SpeakerDiarization.kirilov.train/weights/0000.pt",
                                  scd_scores = "/content/fine/scd/train/IK.SpeakerDiarization.kirilov.train/weights/0000.pt",
                                  method= "affinity_propagation")
    

    or

    1. weights from original models
    pipeline = SpeakerDiarization(embedding = "/content/drive/My Drive/pyannote/emb_voxceleb/train/X.SpeakerDiarization.VoxCeleb.train/weights/0326.pt", 
                                 sad_scores = "/content/drive/My Drive/pyannote/sad_dihard/sad_dihard/train/X.SpeakerDiarization.DIHARD_Official.train/weights/0231.pt",
                                 scd_scores = "/content/drive/My Drive/pyannote/scd_dihard/train/X.SpeakerDiarization.DIHARD_Official.train/weights/0421.pt",
                                 method= "affinity_propagation")
    

    everything is ok and the result is similar to

    pipeline = torch.hub.load('pyannote/pyannote-audio', 'dia_dihard')
    FILE = {'audio': "/content/groundtruth/new.wav"}
    diarization = pipeline(FILE)
    diarization
    

    Could you please advise what could be wrong with my training/fine-tuning process?

    opened by marlon-br 17
  • `b c t` vs. `b t c`?

    `b c t` vs. `b t c`?

    Issue by hbredin Friday Oct 30, 2020 at 16:46 GMT Originally opened as https://github.com/hbredin/pyannote-audio-v2/issues/54


    Which convention should we use?
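
    For context, a toy illustration of the two layouts (shapes made up): b = batch, c = channels/features, t = time frames.

    import torch

    batch = torch.randn(8, 4, 100)   # "b c t": the layout torch.nn.Conv1d expects
    as_btc = batch.transpose(1, 2)   # "b t c": the layout torch.nn.LSTM
                                     # (batch_first=True) expects
    assert as_btc.shape == (8, 100, 4)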

    v2 
    opened by mogwai 16
  • Segmentation Fault when conducting change detection tutorial

    Segmentation Fault when conducting change detection tutorial

    Hi,

    Everything seems ok for the feature extraction tutorial. But when I train the model following exactly what the tutorial asks me to do for change detection, I get a segmentation fault. What might be the reason? Thank you for your help.

    opened by Charliechen1 16
  • Cannot find my Pretrained model

    Cannot find my Pretrained model

    I successfully trained an SAD model. I want to create SAD scores as part of the speaker diarization pipeline. I thought I was passing the weights correctly to the pyannote-audio script, but the model is never found and the script aborts. Here is the output of my bash script with tracing on.

    This is the error message I get.

    RuntimeError: Cannot find callable /misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt in hubconf

    For readability, I have boldfaced the commands in the script.

    ++ export EXP_DIR=models/speaker_diarization
    ++ EXP_DIR=models/speaker_diarization
    ++ cd /misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2
    ++ export PYANNOTE_DATABASE_CONFIG=/misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/database.yml
    ++ PYANNOTE_DATABASE_CONFIG=/misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/database.yml
    ++ sad_model=/misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt
    ++ ls /misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt
    /misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt
    ++ pyannote-audio sad apply --step=0.1 --pretrained=/misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt --subset=dev models/speaker_diarization AMI.SpeakerDiarization.MixHeadset
    Using cache found in /home/map22/.cache/torch/hub/pyannote_pyannote-audio_develop
    Traceback (most recent call last):
      File "/misc/vlgscratch4/PichenyGroup/picheny/anaconda3/envs/pyannote-feb32020/bin/pyannote-audio", line 8, in
        sys.exit(main())
      File "/misc/vlgscratch4/PichenyGroup/picheny/anaconda3/envs/pyannote-feb32020/lib/python3.7/site-packages/pyannote/audio/applications/pyannote_audio.py", line 406, in main
        apply_pretrained(validate_dir, protocol, **params)
      File "/misc/vlgscratch4/PichenyGroup/picheny/anaconda3/envs/pyannote-feb32020/lib/python3.7/site-packages/pyannote/audio/applications/base.py", line 514, in apply_pretrained
        step=step)
      File "/misc/vlgscratch4/PichenyGroup/picheny/anaconda3/envs/pyannote-feb32020/lib/python3.7/site-packages/torch/hub.py", line 364, in load
        entry = _load_entry_from_hubconf(hub_module, model)
      File "/misc/vlgscratch4/PichenyGroup/picheny/anaconda3/envs/pyannote-feb32020/lib/python3.7/site-packages/torch/hub.py", line 237, in _load_entry_from_hubconf
        raise RuntimeError('Cannot find callable {} in hubconf'.format(model))
    RuntimeError: Cannot find callable /misc/vlgscratch4/PichenyGroup/picheny/headcam/headcam-code-try2/models/speech_activity_detection/train/AMI.SpeakerDiarization.MixHeadset.train/weights/0101.pt in hubconf

    opened by picheny-nyu 15
  • TypeError: __init__() missing 1 required positional argument: 'signature' while reproducing change-detection tutorial

    TypeError: __init__() missing 1 required positional argument: 'signature' while reproducing change-detection tutorial

    While following this tutorial https://github.com/pyannote/pyannote-audio/tree/89da05ea9d6de97da9bd21949a26ceb0042ef361/tutorials/change-detection

    and while executing this command:

    pyannote-change-detection train \
        ${EXPERIMENT_DIR} \
        AMI.SpeakerDiarization.MixHeadset

    File "/home/jashwanth/miniconda3/envs/py36-pyannote-audio/bin/pyannote-change-detection", line 8, in
      sys.exit(main())
    File "/home/jashwanth/miniconda3/envs/py36-pyannote-audio/lib/python3.6/site-packages/pyannote/audio/applications/change_detection.py", line 380, in main
      train(protocol, experiment_dir, train_dir, subset=subset)
    File "/home/jashwanth/miniconda3/envs/py36-pyannote-audio/lib/python3.6/site-packages/pyannote/audio/applications/change_detection.py", line 202, in train
      generator = ChangeDetectionBatchGenerator(feature_extraction)
    File "/home/jashwanth/miniconda3/envs/py36-pyannote-audio/lib/python3.6/site-packages/pyannote/audio/generators/change.py", line 93, in __init__
      segment_generator)
    TypeError: __init__() missing 1 required positional argument: 'signature'

    Please, can anyone help me with this?

    Thank you in advance

    opened by Jashwantherao 0
  • How to select threshold and min_cluster_size values in clustering after I finetuned embedding model ?

    How to select threshold and min_cluster_size values in clustering after I finetuned embedding model ?

    Hi. Thanks for sharing this great repository. I have a question: I fine-tuned the embedding model in the speaker diarization pipeline. After that, I don't know how to set the threshold and min_cluster_size params in the config.yaml file. Can you give me some advice?
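
    One possible approach (a sketch, not official guidance; pipeline_factory stands in for however you instantiate the pipeline with a given clustering threshold): sweep candidate values on a development set and keep the one minimizing the diarization error rate from pyannote.metrics:

    from pyannote.metrics.diarization import DiarizationErrorRate

    def pick_threshold(pipeline_factory, dev_files, candidates):
        # dev_files: dicts with an 'audio' path and a reference 'annotation'
        best, best_der = None, float('inf')
        for threshold in candidates:
            pipeline = pipeline_factory(threshold)
            metric = DiarizationErrorRate()
            for current_file in dev_files:
                hypothesis = pipeline(current_file['audio'])
                metric(current_file['annotation'], hypothesis)
            if abs(metric) < best_der:
                best, best_der = threshold, abs(metric)
        return best, best_der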

    opened by dungnguyen98 0
  • Fixes for PytorchLightning >= 1.8

    Fixes for PytorchLightning >= 1.8

    Adjust to PyTorch Lightning API changes in version 1.8.0. I did some testing to make sure nothing broke, including model training/fine-tuning and loading from pretrained/HF hub; however, my tests likely didn't cover everything.

    opened by entn-at 1
  • Support MLflow along with Tensorboard for logging segmentation task visualizations during validation

    Support MLflow along with Tensorboard for logging segmentation task visualizations during validation

    When using PyTorch Lightning's MLFlowLogger instead of TensorBoardLogger during training of segmentation models, the current implementation fails because MLflow's experiment tracking client has a different method for logging figures than TensorBoard. Unfortunately, PyTorch Lightning doesn't abstract away this logger API difference.

    Tested with MLflow and TensorBoard.
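
    A minimal sketch of the kind of dispatch this requires (helper name is hypothetical):

    from pytorch_lightning.loggers import MLFlowLogger, TensorBoardLogger

    def log_figure(logger, tag, figure, step):
        # TensorBoard's SummaryWriter and MLflow's tracking client expose
        # different figure-logging methods, so dispatch on the logger type
        if isinstance(logger, TensorBoardLogger):
            logger.experiment.add_figure(tag, figure, global_step=step)
        elif isinstance(logger, MLFlowLogger):
            logger.experiment.log_figure(logger.run_id, figure, f"{tag}_{step}.png")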

    opened by entn-at 1
  • Will it work on real time streaming data ?

    Will it work on real time streaming data ?

    I am currently running it on an Apple M2 chip and it is taking much more time than on Colab. Is there a way the pipeline could be modified for streaming data, and combined with some transcription service?

    opened by ankurdhuriya 0