Spchcat

Speech recognition tool to convert audio to text transcripts, for Linux and Raspberry Pi.

Description

spchcat is a command-line tool that reads in audio from .WAV files, a microphone, or system audio inputs and converts any speech found into text. It runs locally on your machine, with no web API calls or network activity, and is open source. It is built on top of Coqui's speech to text library, TensorFlow, KenLM, and data from Mozilla's Common Voice project.

It supports multiple languages thanks to Coqui's library of models. The accuracy of the recognized text will vary widely depending on the language, since some have only small amounts of training data. You can help improve future models by contributing your voice.

Installation

x86

On Debian-based x86 Linux systems like Ubuntu you should be able to install the latest .deb package by downloading it and double-clicking it. Other distributions are currently unsupported. The tool requires PulseAudio, which is already present on most desktop systems but can be installed manually if it's missing.
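
If you prefer the terminal, here's a minimal sketch of the same steps, assuming the package was saved to ~/Downloads and is named like the amd64 release asset:

sudo apt install pulseaudio
sudo dpkg -i ~/Downloads/spchcat_0.0-2_amd64.deb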

There's a notebook you can run in Colab at notebooks/install.ipynb that shows all installation steps.

Raspberry Pi

To install on a Raspberry Pi, download the latest .deb installer package and either double-click it from the desktop, or run dpkg -i ~/Downloads/spchcat_0.0-2_armhf.deb from the terminal. It will take several minutes to unpack all the language files. This version has only been tested on the latest release of Raspbian (released October 30th 2021) and on a Raspberry Pi 4. It's expected to fail on the Raspberry Pi 1 and Zero, since they use an older CPU architecture than the armhf build targets.

Usage

After installation, you should be able to run it with no arguments to start capturing audio from the default microphone source, with the results output to the terminal:

spchcat

After you've run the command, start speaking, and you should see the words you're saying appear. The speech recognition is still a work in progress, and accuracy will depend a lot on noise levels, your accent, and the complexity of the words, but hopefully you'll see something close enough to be useful for simple note-taking or other purposes.

System Audio

If you don't have a microphone attached, or want to transcribe audio coming from another program, you can set the --source argument to 'system'. This will attempt to listen to the audio that your machine is playing, including any videos or songs, and transcribe any speech found.

spchcat --source=system

WAV Files

One of the most common audio file formats is WAV. If you don't have any files to test with, you can download Coqui's test set to try this option out. If you need to convert files from another format like '.mp3', I recommend using FFmpeg (see the conversion example at the end of this section). As with the other source options, spchcat will attempt to find any speech in the files and convert it into a transcript. You don't have to set the --source argument explicitly; as long as file names are present on the command line, files will be the default source.

spchcat audio/8455-210777-0068.wav 

If you're using the audio file from the test set, you should see output like the following:

TensorFlow: v2.3.0-14-g4bdd3955115
 Coqui STT: v1.1.0-0-gf3605e23
your power is sufficient i said 

You can also specify a folder instead of a single filename, and all .wav files within that directory will be transcribed.
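
For example, to transcribe every .wav file in a directory:

spchcat audio/

And if your recordings start out in another format, here's a hedged FFmpeg conversion sketch; the 16kHz mono settings are an assumption based on what speech models commonly expect, so check your model's requirements:

ffmpeg -i input.mp3 -ar 16000 -ac 1 output.wav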

Language Support

So far this documentation has assumed you're using American English, but the tool defaults to the language your system is configured to use, as specified in the LANG environment variable. If no model for that language is found, it falls back to 'en_US'. You can override this by setting the --language argument on the command line, for example:

spchcat --language=de_DE

This works independently of --source and the other options, so you can transcribe microphone, system audio, or files in any of the supported languages. Note that some languages have very small amounts of training data, so their quality may suffer. If you don't care about country-specific variants, you can also specify just the language part of the code, for example --language=en; this will pick any model that supports the language, regardless of country. The same happens if a particular language and country pair isn't found: the tool logs a warning and falls back to any country that supports the language. For example, if 'en_GB' is requested but only 'en_US' is present, 'en_US' will be used.
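
Since the options are independent, they can also be combined; for example, to transcribe German speech from system audio (both flags are documented above, though this particular combination is only a sketch):

spchcat --source=system --language=de_DE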

Code  Language
am_ET Amharic
bn_IN Bengali
br_FR Breton
ca_ES Catalan
cnh_MM Hakha-Chin
cs_CZ Czech
cv_RU Chuvash
cy_GB Welsh
de_DE German
dv_MV Dhivehi
el_GR Greek
en_US English
et_EE Estonian
eu_ES Basque
fi_FI Finnish
fr_FR French
fy_NL Frisian
ga_IE Irish
hu_HU Hungarian
id_ID Indonesian
it_IT Italian
ka_GE Georgian
ky_KG Kyrgyz
lg_UG Luganda
lt_LT Lithuanian
lv_LV Latvian
mn_MN Mongolian
mt_MT Maltese
nl_NL Dutch
or_IN Odia
pt_PT Portuguese
rm_CH Romansh-Sursilvan
ro_RO Romanian
ru_RU Russian
rw_RW Kinyarwanda
sah_RU Sakha
sb_DE Upper-Sorbian
sl_SI Slovenian
sw_KE Swahili-Congo
ta_IN Tamil
th_TH Thai
tr_TR Turkish
tt_RU Tatar
uk_UK Ukrainian
wo_SN Wolof
yo_NG Yoruba

All of these models have been collected by Coqui and contributed by organizations like Inclusive Technology for Marginalized Languages, or by individuals. All follow the conventions of Coqui's STT library, so custom models could potentially be used, but training and deploying those is outside the scope of this document. The models themselves are provided under a variety of open source licenses, which can be inspected in their folders (typically inside /etc/spchcat/models/).
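
To see which models and licenses were installed, you can list that folder; a minimal sketch, assuming the default install location mentioned above:

ls /etc/spchcat/models/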

Saving Output

By default spchcat writes any recognized text to the terminal, but it's designed to behave like a normal Unix command-line tool, so the output can also be written to a file using redirection, like this:

spchcat audio/8455-210777-0068.wav > /tmp/transcript.txt

If you then run cat /tmp/transcript.txt (or open it in an editor) you should see 'your power is sufficient i said'. You can also pipe the output to another command. Unfortunately you can't pipe audio into the tool from another executable, since it doesn't read audio from standard input.
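
As an example of piping, this counts the words in a transcript using standard tools (wc is just an illustration; any text-processing command will work):

spchcat audio/8455-210777-0068.wav | wc -w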

There is one subtle difference between writing to a file and to the terminal. The transcription itself can take some time to settle into a final form, especially when waiting for long words to finish, so when it's being run live in a terminal you'll often see the last couple of words change. This isn't useful when writing to a file, so instead the output is finalized before it's written. This can introduce a small delay when writing live microphone or system audio input.

Build from Source

Tool

It's possible to build all dependencies from source, but I recommend downloading binary versions of Coqui's STT, TensorFlow Lite, and KenLM libraries from github.com/coqui-ai/STT/releases/download/v1.1.0/native_client.tflite.Linux.tar.xz. Extract this to a folder, and then from inside a clone of this repo run the following to build the spchcat tool itself:

make spchcat LINK_PATH_STT=-L../STT_download

You should replace ../STT_download with the path to the folder you extracted Coqui's libraries into. After this you should see a spchcat executable binary in the repo folder. Because it relies on shared libraries, you'll also need to tell the loader where to find them using LD_LIBRARY_PATH, unless you have copies in system folders.

LD_LIBRARY_PATH=../STT_download ./spchcat

Models

The previous step only built the executable binary itself; for the complete tool you also need data files for each language. If you have the gh GitHub command-line tool installed, you can run the download_models.py script to fetch Coqui's model releases into the build/models folder in your local repo.
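
The exact invocation is an assumption, since the script's arguments aren't documented here, but a typical sequence might be:

gh auth login
python3 download_models.py

You can then run your locally-built tool against these models using the --languages_dir option: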

LD_LIBRARY_PATH=../STT_download ./spchcat --languages_dir=build/models/

Installer

After you have the tool built and the model data downloaded, create_deb_package.sh will attempt to package them into a Debian installer archive. It will take several minutes to run, and the result ends up in spchcat_0.0-2_amd64.deb.
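
Putting the packaging and a local install test together, here's a sketch assuming you run from the repo root:

./create_deb_package.sh
sudo dpkg -i spchcat_0.0-2_amd64.deb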

Release Process

There's a notebook at notebooks/build.ipynb that runs through all the steps needed to download dependencies and data, build the executable, and create the final package. These steps are run inside an Ubuntu 18.04 Docker image to create the binaries that are released:

sudo docker run -it -v`pwd`:/spchcat ubuntu:bionic bash

Contributors

Tool code written by Pete Warden, [email protected], heavily based on Coqui's STT example. It's a pretty thin wrapper on top of Coqui's speech to text library, so the Coqui team should get credit for their amazing work. Also relies on TensorFlow, KenLM, data from Mozilla's Common Voice project, and all the contributors to Coqui's model zoo.

License

Tool code is licensed under the Mozilla Public License Version 2.0, see LICENSE in this folder.

All other libraries and model data are released under their own licenses, see the relevant folders for more details.

Comments
  • How can I use downloaded models?

    <From a user email, added here for posterity>

    I really need to use spchcat with the Spanish model (es_ES). I can see the model in Coqui's GitHub, but it is not included in your .deb package. How can I recompile it to include Spanish? Or maybe you are compiling a new version that includes it?

    opened by petewarden 1
  • Running on Standby Mode for File Input

    I am looking to use spchcat for on-demand .wav file transcription. However, I need the model to be preloaded and waiting to transcribe .wav files as they arrive intermittently. May I ask if there are plans to add such a feature?

    Environment

    uname -a
    Linux raspberrypi 5.15.32-v7+ #1538 SMP Thu Mar 31 19:38:48 BST 2022 armv7l GNU/Linux
    
    opened by kwokyto 0
  • Use feature test to expose `setenv`

    As per the man page, setenv requires _POSIX_C_SOURCE >= 200112L to be defined before including the appropriate header file (stdlib.h). As the other included header files transitively pull in some standard headers, this definition needs to go above all includes.

    opened by msbit 0
  • Use float literals for `TEST_FLTEQ`

    When using the TEST_FLTEQ macro, pass float literals for the comparison argument, to avoid errors of the form:

    error: absolute value function 'fabsf' given an argument of
    type 'double' but has parameter of type 'float' which may cause
    truncation of value [-Werror,-Wabsolute-value]
    
    opened by msbit 0
  • Avoid possible infinite loop due to chunk ordering

    Properly re-read the chunk ID when iterating through subsequent chunks. This avoids an infinite loop in the case where the data chunk doesn't immediately follow the fmt chunk.

    opened by msbit 0
  • Not working on a Raspberry Pi

    I am trying to get spchcat working on my Raspberry Pi. When I run the command, it prints this in the console:

    TensorFlow: v2.3.0-14-g4bdd3955115
     Coqui STT: v1.1.0-0-gf3605e23
    

    and shortly after the console clears and displays eddie with no audio input? When I speak nothing appears in the console.

    opened by MiniMinnoww 2