Overview

python_speech_features

This library provides common speech features for ASR, including MFCCs and filterbank energies. If you are not sure what MFCCs are and would like to know more, have a look at this MFCC tutorial.

Project Documentation

To cite, please use: James Lyons et al. (2020, January 14). jameslyons/python_speech_features: release v0.6.1 (Version 0.6.1). Zenodo. http://doi.org/10.5281/zenodo.3607820

Installation

This project is on PyPI.

To install from PyPI:

pip install python_speech_features

To install from this repository:

git clone https://github.com/jameslyons/python_speech_features
python setup.py develop

Usage

Supported features:

  • Mel Frequency Cepstral Coefficients
  • Filterbank Energies
  • Log Filterbank Energies
  • Spectral Subband Centroids

Example use
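
A minimal example, along the lines of the usage shown in the project documentation (english.wav is the sample file described under Reference below):

python
from python_speech_features import mfcc
from python_speech_features import logfbank
import scipy.io.wavfile as wav

(rate, sig) = wav.read("english.wav")
mfcc_feat = mfcc(sig, rate)
fbank_feat = logfbank(sig, rate)

print(fbank_feat[1:3, :])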

From here you can write the features to a file etc.

MFCC Features

The default parameters should work fairly well for most cases. If you want to change the MFCC parameters, the following options are supported:

python
def mfcc(signal, samplerate=16000, winlen=0.025, winstep=0.01, numcep=13,
         nfilt=26, nfft=512, lowfreq=0, highfreq=None, preemph=0.97,
         ceplifter=22, appendEnergy=True)
Parameter     Description
signal        the audio signal from which to compute features; should be an N*1 array
samplerate    the sample rate of the signal we are working with, in Hz
winlen        the length of the analysis window in seconds; default is 0.025 s (25 ms)
winstep       the step between successive windows in seconds; default is 0.01 s (10 ms)
numcep        the number of cepstral coefficients to return; default 13
nfilt         the number of filters in the filterbank; default 26
nfft          the FFT size; default 512
lowfreq       lowest band edge of the mel filters, in Hz; default 0
highfreq      highest band edge of the mel filters, in Hz; default samplerate/2
preemph       apply a preemphasis filter with preemph as coefficient; 0 is no filter; default 0.97
ceplifter     apply a lifter to the final cepstral coefficients; 0 is no lifter; default 22
appendEnergy  if true, the zeroth cepstral coefficient is replaced with the log of the total frame energy
returns       a numpy array of size (NUMFRAMES by numcep) containing features; each row holds one feature vector
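
For example, a sketch overriding a few of the defaults (hypothetical parameter choices for illustration, not recommendations from the project):

python
from python_speech_features import mfcc
import scipy.io.wavfile as wav

(rate, sig) = wav.read("english.wav")
# 40 ms windows with a 20 ms step need a larger FFT size:
# 0.04 * 16000 = 640 samples > 512, so nfft is raised to 1024
mfcc_feat = mfcc(sig, rate, winlen=0.04, winstep=0.02,
                 numcep=13, nfilt=40, nfft=1024)
print(mfcc_feat.shape)  # (NUMFRAMES, 13)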

Filterbank Features

These features are raw filterbank energies. For most applications you will want the logarithm of these features. The default parameters should work fairly well for most cases. If you want to change the fbank parameters, the following options are supported:

python
def fbank(signal, samplerate=16000, winlen=0.025, winstep=0.01,
          nfilt=26, nfft=512, lowfreq=0, highfreq=None, preemph=0.97)
Parameter   Description
signal      the audio signal from which to compute features; should be an N*1 array
samplerate  the sample rate of the signal we are working with, in Hz
winlen      the length of the analysis window in seconds; default is 0.025 s (25 ms)
winstep     the step between successive windows in seconds; default is 0.01 s (10 ms)
nfilt       the number of filters in the filterbank; default 26
nfft        the FFT size; default 512
lowfreq     lowest band edge of the mel filters, in Hz; default 0
highfreq    highest band edge of the mel filters, in Hz; default samplerate/2
preemph     apply a preemphasis filter with preemph as coefficient; 0 is no filter; default 0.97
returns     a numpy array of size (NUMFRAMES by nfilt) containing features, one row per feature vector; the second return value is the total (unwindowed) energy in each frame
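
A short sketch of the two return values (a hedged example; taking the log of the first return value gives the same result as logfbank):

python
import numpy
from python_speech_features import fbank
import scipy.io.wavfile as wav

(rate, sig) = wav.read("english.wav")
feat, energy = fbank(sig, rate, nfilt=26)
log_feat = numpy.log(feat)       # log filterbank energies, as in logfbank
log_energy = numpy.log(energy)   # log of the total energy in each frame
print(feat.shape, energy.shape)  # (NUMFRAMES, 26) (NUMFRAMES,)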

Reference

The sample english.wav was obtained from:

wget http://voyager.jpl.nasa.gov/spacecraft/audio/english.au
sox english.au -e signed-integer english.wav
Comments
  • Question about hamming window length

    When using mfcc, the window parameter can be set to numpy.hamming, but numpy.hamming is a function that takes an int giving the number of points in the output window (see the numpy.hamming docs), whereas frame_len is used in sigproc.py. Could you please explain how np.hamming works in mfcc? What if I want to input a specific window length? Thank you!
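
    For context, a hedged sketch of how a window function is typically passed: the winfunc parameter takes the function itself, and the library calls it with the frame length in samples, so the window size is controlled through winlen.

    import numpy as np
    from python_speech_features import mfcc
    import scipy.io.wavfile as wav

    (rate, sig) = wav.read("english.wav")
    # winfunc receives the frame length in samples (winlen * samplerate),
    # so a 0.025 s window at 16 kHz amounts to np.hamming(400)
    feat = mfcc(sig, rate, winlen=0.025, winfunc=np.hamming)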

    opened by leeeeeeo 14
  • obtain the noise data

    Hi, my aim is to get the noise data from an audio file recorded in a classroom, meaning that I want to remove the teacher's voice. What should I do?

    opened by YueWenWu 7
  • How to ignore the NFFT warning

    I don't want to increase the NFFT because I think the distortion is acceptable. So is there any way to ignore the annoying warnings?

    I have tried using the warnings module, but it doesn't work.
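
    A hedged sketch: assuming the message is emitted through Python's standard logging module rather than the warnings module (which would explain why warnings filtering has no effect), raising the root logger's threshold should silence it:

    import logging

    # suppress WARNING-level messages such as the NFFT truncation notice
    logging.getLogger().setLevel(logging.ERROR)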

    opened by igo312 5
  • Why there's big difference using 16k and 44.1k sample rate

    Hi: I recorded some wav files originally at a 44.1 kHz sample rate, and then converted them to 16 kHz with sox. After that I used this library to calculate the MFCC features of the 44.1 kHz file and the 16 kHz file, but found that the results were completely different. For the same file, whether at 44.1 kHz or 16 kHz, I would expect the result to be the same. Isn't that so?
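
    One hedged observation on why the defaults behave differently across rates: winlen is specified in seconds, so the frame length in samples scales with the sample rate, and at 44.1 kHz a 25 ms frame no longer fits in the default nfft of 512 (the mel filter spacing also changes, since highfreq defaults to samplerate/2):

    winlen = 0.025
    print(int(16000 * winlen))  # 400 samples, fits in nfft=512
    print(int(44100 * winlen))  # 1102 samples, truncated to nfft=512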

    opened by robotnc 5
  • What if the frame length is greater than NFFT?

    I'm not an expert in this kind of stuff, so I'm sorry if this will be a waste of time.

    From the numpy.fft.rfft documentation [in our case: n=NFFT, input=frame]: "Number of points along transformation axis in the input to use. If n is smaller than the length of the input, the input is cropped. If it is larger, the input is padded with zeros. If n is not given, the length of the input along the axis specified by axis is used."

    Isn't this cropping something we want to avoid? Because, as far as I've seen, there isn't any check in the code for how the frame size compares to NFFT.
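
    A common workaround (a sketch, not part of the documented API; release 0.6.1 reportedly adds a calculate_nfft helper along these lines) is to pick nfft as the smallest power of two at or above the frame length:

    import math

    def next_nfft(samplerate, winlen):
        # smallest power of two >= frame length in samples
        frame_len = int(round(winlen * samplerate))
        return 2 ** int(math.ceil(math.log(frame_len, 2)))

    print(next_nfft(44100, 0.025))  # 2048 >= 1102 samples, so no cropping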

    opened by janluke 4
  • inconsistent result with HTK

    Hi,

    I tried to compare the MFCC features generated using HTK, and those generated by python_speech_features. Unfortunately, somehow they always mismatch.

    Below is the configuration I used for HTK

    SOURCEFORMAT = NIST
    TARGETKIND = MFCC_0
    TARGETRATE = 100000
    SAVECOMPRESSED = F
    SAVEWITHCRC = F
    WINDOWSIZE = 250000
    USEHAMMING = F
    PREEMCOEF = 0.97
    NUMCHANS = 26
    CEPLIFTER = 22
    NUMCEPS = 12
    ENORMALISE = F
    

    The configuration for python_speech_features is default.

    I also tried adding USEPOWER = F/T, and the features obtained are still very different (actually, for file TIMITcorpus/TIMIT/TRAIN/DR8/FBCG1/SX442, I got 358 frames from HTK, but only 354 frames from python_speech_features).

    Any insight? I'm a newbie in speech recognition, and may have committed some silly mistakes.

    opened by zym1010 4
  • Troubles when porting

    Hi, I am trying to port this algorithm to JavaScript and I am running into the following:

    feat = numpy.dot(pspec,fb.T)
    

    (https://github.com/jameslyons/python_speech_features/blob/master/features/base.py#L56)

    The issue I am running into is that pspec and fb here should have matching inner dimensions, but for some reason they don't. Is there something in the algorithm, some kind of balance between parameters for example, which should cause these two arrays to have compatible dimensions?
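
    For reference, a sketch of the intended shapes (assuming the usual layout in this library): pspec has one row per frame and nfft/2+1 columns, and fb has one row per filter with the same number of columns, so the product is frames-by-filters. A mismatch usually means the two were built from different nfft values.

    import numpy

    nframes, nfft, nfilt = 100, 512, 26
    pspec = numpy.zeros((nframes, nfft // 2 + 1))  # power spectrum: (100, 257)
    fb = numpy.zeros((nfilt, nfft // 2 + 1))       # filterbank:     (26, 257)
    feat = numpy.dot(pspec, fb.T)                  # result:         (100, 26)
    print(feat.shape)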

    opened by mauritslamers 4
  • [Question:] How to capture intensity or perceived loudness of a given audio file at regular intervals

    If you are playing a song on your laptop, as you increase the volume from 0 to 100 the audio becomes louder and louder.

    Say I have an .mp3 or .wav file: how do I capture this perceived loudness/intensity at regular intervals (maybe every 0.1 seconds) in the audio using python_speech_features?

    Any advice is appreciated.

    Thanks Vivek
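
    One hedged approach with this library: the second return value of fbank is the total energy in each frame, so a 0.1 s winstep yields one energy value (a crude loudness proxy, not true perceived loudness) every 0.1 s. song.wav is a hypothetical input file:

    import numpy
    import scipy.io.wavfile as wav
    from python_speech_features import fbank

    (rate, sig) = wav.read("song.wav")
    # 25 ms analysis window, one frame every 100 ms
    feat, energy = fbank(sig, rate, winlen=0.025, winstep=0.1)
    loudness_db = 10 * numpy.log10(energy)  # rough dB-style scale
    print(loudness_db[:10])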

    opened by StanSilas 3
  • can't get same result as compute-mfcc-feats.

    compute-mfcc-feats --window-type=hamming --dither=0.0 --use-energy=false --sample-frequency=8000 --num-mel-bins=40 --num-ceps=40 --low-freq=40 --raw-energy=false --remove-dc-offset=false --high-freq=3800 scp:wav.scp ark,scp:feats.ark,feats.scp

    mfcc(signal=sig, samplerate=rate, winlen=0.025, winstep=0.01, numcep=40, nfilt=40, lowfreq=40, highfreq=3800, appendEnergy=False, winfunc = lambda x: np.hamming(x) )

    Is there some difference?

    opened by bjtommychen 3
  • inconsistency with librosa

    I compared the MFCC output of librosa with the python_speech_features package and got totally different results.

    Which one is correct? librosa list of first frame coefficients:

    [-395.07433842032867, -7.1149347948192963e-14, 3.5772469223901538e-14, -1.7476140989485184e-14, 3.1665300829452658e-14, -4.4214136625668904e-14, 6.7157035631648599e-14, 1.5013974158050108e-14, 2.9512326634271699e-14, 7.2275398398734558e-14, -1.5043753316598812e-13, -2.2358383003147776e-14, 1.6209256159527285e-13]

    python_speech_features list of first frame coefficients:

    [-169.91598446684722, 1.3219891974654943, 0.22216979881740945, -0.7368248288464827, 0.26268194306407788, 1.8470757480486224, 3.2670900572694435, 2.3726120692753563, 1.4983949546889608, 0.67862219561000914, -0.44705590991616034, 0.39184067109778226, -0.48048214059101707]

    import librosa
    import python_speech_features
    from scipy.signal.windows import hann
    
    n_mfcc = 13
    n_mels = 40
    n_fft = 512 # in librosa, win_length is assumed to be equal to n_fft implicitly
    hop_length = 160
    fmin = 0
    fmax = None
    y, sr = librosa.load(librosa.util.example_audio_file())
    sr = 16000  # fake sample rate just to make the point
    
    # librosa
    mfcc_librosa = librosa.feature.mfcc(y=y, sr=sr, n_fft=n_fft,
                                        n_mfcc=n_mfcc, n_mels=n_mels,
                                        hop_length=hop_length,
                                        fmin=fmin, fmax=fmax)
    
    # python_speech_features
    # no preemph nor ceplifter in librosa, so setting to zero
    # librosa default stft window is hann
    mfcc_speech = python_speech_features.mfcc(signal=y, samplerate=sr, winlen=n_fft / sr, winstep=hop_length / sr,
                                              numcep=n_mfcc, nfilt=n_mels, nfft=n_fft, lowfreq=fmin, highfreq=fmax,
                                              preemph=0, ceplifter=0, appendEnergy=False, winfunc=hann)
    
    
    print(list(mfcc_librosa[:, 0]))
    print(list(mfcc_speech[0, :]))
    
    opened by chananshgong 3
  • Filterbank=80

    It works fine for nfilt=40, but when I try 80, the third filterbank output is a constant value, like this: -36.04365, -36.04365, -36.04365, -36.04365, -36.04365, -36.04365

    I have attached an image showing the speech, spectrogram, and log filterbank for 80 filters.
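
    A likely explanation (worth verifying): -36.04365 is numpy.log(numpy.finfo(float).eps), the floor value fbank substitutes for filters whose output is zero. With nfilt=80, nfft=512 and a 16 kHz rate, neighbouring mel points at the low end can fall on the same FFT bin, producing zero-width filters. A sketch to check for them:

    import numpy
    from python_speech_features import get_filterbanks

    fb = get_filterbanks(nfilt=80, nfft=512, samplerate=16000)
    print(numpy.where(fb.sum(axis=1) == 0)[0])  # indices of empty filters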

    opened by madhavsund 3
  • Use another augmented assignment statement

    Some source code analysis tools can help to find opportunities for improving software components. I propose to increase the usage of augmented assignment statements accordingly.

    diff --git a/python_speech_features/sigproc.py b/python_speech_features/sigproc.py
    index a786c4f..b8729ea 100644
    --- a/python_speech_features/sigproc.py
    +++ b/python_speech_features/sigproc.py
    @@ -84,7 +84,7 @@ def deframesig(frames, siglen, frame_len, frame_step, winfunc=lambda x: numpy.on
                                                    indices[i, :]] + win + 1e-15  # add a little bit so it is never zero
             rec_signal[indices[i, :]] = rec_signal[indices[i, :]] + frames[i, :]
     
    -    rec_signal = rec_signal / window_correction
    +    rec_signal /= window_correction
         return rec_signal[0:siglen]
     
     
    
    opened by elfring 0
  • High CPU Utilization

    I observe that extracting MFCC or MFB features utilizes almost all the CPU at 100% capacity. I am sure that extracting these features doesn't require so much computation.

    I am processing only one file at a time and not using any parallelization. Here is the code I am using:

    import glob
    import numpy as np
    import scipy.io as sio
    import scipy.io.wavfile
    from python_speech_features import *
    filelist = glob.glob("/home/divraj/scribetech/dataset/voxceleb1/test/wav/*/*/*.wav")
    for file in filelist:
    	sr, audio = sio.wavfile.read(file)
    	features, energies = fbank(audio, samplerate=16000, nfilt=40, winlen=0.025, winfunc=np.hamming)
    

    What is the reason for high CPU utilization?
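
    A hedged note: the heaviest step is the numpy matrix product against the filterbank, and numpy's BLAS backend may fan that out across all cores even for a single file. If single-core behaviour is wanted, capping the BLAS thread pools before numpy is imported is a common approach:

    import os
    # must be set before numpy is imported anywhere in the process
    os.environ["OMP_NUM_THREADS"] = "1"
    os.environ["OPENBLAS_NUM_THREADS"] = "1"
    os.environ["MKL_NUM_THREADS"] = "1"

    import numpy as np
    from python_speech_features import fbank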

    opened by divyeshrajpura4114 2
  • viseme generation

    @jameslyons thank you so much for this repo. Hi everyone, I am trying to produce visemes from audio. Can you please guide me on how to generate visemes using this repo, or point me to any other repo or relevant work that I could extend toward my main goal? I would really appreciate any help.

    opened by AhmadManzoor 0
  • [Question:] inverse fbank back to wav

    Hi, thanks for the library. I use it to compute the fbank features, do some stuff on them, and then I get a new one. Is there a way to convert the new fbank back to the wav? Since I have the original starting file, theoretically it should be possible.
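
    Exact inversion is not possible (the filterbank is lossy and phase is discarded), but a least-squares sketch using the pseudo-inverse of the filterbank matrix can recover an approximate power spectrum, from which a waveform could then be estimated with a phase-reconstruction method such as Griffin-Lim. The random feat array below is a placeholder for real (non-log) filterbank features:

    import numpy
    from python_speech_features import get_filterbanks

    nfilt, nfft, rate = 26, 512, 16000
    fb = get_filterbanks(nfilt=nfilt, nfft=nfft, samplerate=rate)  # (26, 257)

    feat = numpy.random.rand(100, nfilt)  # placeholder: (NUMFRAMES, nfilt)
    # approximate power spectrum via least squares, clipped to be non-negative
    pspec_approx = numpy.maximum(numpy.dot(feat, numpy.linalg.pinv(fb).T), 0)
    print(pspec_approx.shape)  # (100, 257)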

    opened by matdtr 0
  • Minor issue on round vs. floor

    In this line:

    https://github.com/jameslyons/python_speech_features/blob/9a2d76c6336d969d51ad3aa0d129b99297dcf55e/python_speech_features/base.py#L169

    I think you are assuming that np.floor(t+1) = np.round(t), but that is not true. I think you want either:

    bin = numpy.round(nfft * mel2hz(melpoints) / samplerate)
    bin = numpy.floor((nfft + 0.5) * mel2hz(melpoints) / samplerate)

    This is a minor point because they often give the same result and it doesn't matter much in practice. I just found this point a little confusing in your write-up.

    Thanks for the blogpost and this code!

    opened by keithchugg 0