Persian Kaldi profile for Rhasspy built from open speech data

Overview

Persian Kaldi Profile

A Rhasspy profile for Persian (fa).

Installation

Get started by first installing Vosk:

# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate
pip3 install --upgrade pip
pip3 install --upgrade wheel setuptools

# Install Vosk
pip3 install vosk

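Note: as reported in the comments below, transcribe.py also imports librosa and numpy, which a plain pip3 install vosk does not provide. If you hit a ModuleNotFoundError, install them into the same virtual environment with pip3 install librosa numpy.
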
Next, download the model and extract it:

wget 'https://github.com/rhasspy/fa_kaldi-rhasspy/releases/download/v1.0/vosk-model-small-fa-rhasspy-0.15.zip'
unzip vosk-model-small-fa-rhasspy-0.15.zip

Finally, run the transcribe.py Python program with the model and an audio file:

python3 transcribe.py vosk-model-small-fa-rhasspy-0.15 welcome.wav

{"result": [{"conf": 1.0, "end": 0.48, "start": 0.06, "word": "خوش"}, {"conf": 1.0, "end": 1.11, "start": 0.48, "word": "آمدید"}], "text": "خوش آمدید"}

For each audio file given to transcribe.py, a line of JSON is printed with the transcription details: the recognized text along with per-word confidence and start/end times.
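
If you prefer to call the model from your own code instead of going through transcribe.py, the sketch below uses the Vosk Python API directly. It assumes welcome.wav is a mono, 16-bit PCM WAV file and is only an illustration, not the repository's own script:

import json
import wave

from vosk import Model, KaldiRecognizer

# Load the extracted model directory
model = Model("vosk-model-small-fa-rhasspy-0.15")

# Vosk expects mono 16-bit PCM audio
wf = wave.open("welcome.wav", "rb")
rec = KaldiRecognizer(model, wf.getframerate())
rec.SetWords(True)  # include per-word confidence and timing

# Feed the audio to the recognizer in chunks
while True:
    data = wf.readframes(4000)
    if not data:
        break
    rec.AcceptWaveform(data)

# FinalResult() returns a JSON string shaped like the example output above
result = json.loads(rec.FinalResult())
print(result["text"])

With SetWords(True), the parsed result also contains a result list with conf, start, end, and word entries for each recognized word, matching the example output shown above.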

Comments
  •  PySoundFile failed. Trying audioread instead.

    I just tried to run this command: python3 transcribe.py vosk-model-small-fa-rhasspy-0.15 MyFile.mp3

    and got this error:

    /your/path/.venv/lib/python3.9/site-packages/librosa/util/decorators.py:88: UserWarning: PySoundFile failed. Trying audioread instead.
      return f(*args, **kwargs)  
    

    Thank you so much

    opened by GameO7er 1
  • ModuleNotFoundError: No module named 'librosa'

    I got this error when I just followed your instructions in the README.md line by line, so I thought this might help others run the script successfully.

    Traceback (most recent call last):
      File "/home/gameover/Projects/Python/Rhaspy/transcribe.py", line 8, in <module>
        import librosa
    ModuleNotFoundError: No module named 'librosa'
    

    Thank you so much.

    opened by GameO7er 1
  • ModuleNotFoundError: No module named 'numpy'

    I got this error when I just followed your instructions in the README.md line by line, so I thought this might help others run the script successfully.

    Traceback (most recent call last):
      File "/home/gameover/Projects/Python/Rhaspy/transcribe.py", line 8, in <module>
        import librosa
    ModuleNotFoundError: No module named 'numpy'
    

    Thank you so much.

    opened by GameO7er 1
  • Error using recipes

    Hello, thanks for your great work and for sharing this useful repo. I tried to use your recipes to train on Persian data. In the run.sh file, an error occurred while adapting lm.arpa and creating G.fst:

    creating G.fst...
    arpa2fst -
    LOG (arpa2fst[5.5.0~1-2b62]:Read():arpa-file-parser.cc:94) Reading \data\ section.
    LOG (arpa2fst[5.5.0~1-2b62]:Read():arpa-file-parser.cc:149) Reading \1-grams: section.
    LOG (arpa2fst[5.5.0~1-2b62]:Read():arpa-file-parser.cc:149) Reading \2-grams: section.
    LOG (arpa2fst[5.5.0~1-2b62]:Read():arpa-file-parser.cc:149) Reading \3-grams: section.
    FATAL: FstCompiler: Bad number of columns, source = standard input, line = 28129
    ERROR: FstHeader::Read: Bad FST header: standard input
    

    The full run.sh output is:

    Runtime configuration is: nJobs 12, nDecodeJobs 12. If this is not what you want, edit cmd.sh
    Starting at stage 0, train_stage -10
    
    Prepare phoneme data for Kaldi
    
    utils/prepare_lang.sh data/local/dict <unk> data/local/lang data/lang
    Checking data/local/dict/silence_phones.txt ...
    --> reading data/local/dict/silence_phones.txt
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> data/local/dict/silence_phones.txt is OK
    
    Checking data/local/dict/optional_silence.txt ...
    --> reading data/local/dict/optional_silence.txt
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> data/local/dict/optional_silence.txt is OK
    
    Checking data/local/dict/nonsilence_phones.txt ...
    --> reading data/local/dict/nonsilence_phones.txt
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> data/local/dict/nonsilence_phones.txt is OK
    
    Checking disjoint: silence_phones.txt, nonsilence_phones.txt
    --> disjoint property is OK.
    
    Checking data/local/dict/lexicon.txt
    --> reading data/local/dict/lexicon.txt
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> data/local/dict/lexicon.txt is OK
    
    Checking data/local/dict/extra_questions.txt ...
    --> reading data/local/dict/extra_questions.txt
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> data/local/dict/extra_questions.txt is OK
    --> SUCCESS [validating dictionary directory data/local/dict]
    
    **Creating data/local/dict/lexiconp.txt from data/local/dict/lexicon.txt
    fstaddselfloops data/lang/phones/wdisambig_phones.int data/lang/phones/wdisambig_words.int
    prepare_lang.sh: validating output directory
    utils/validate_lang.pl data/lang
    Checking existence of separator file
    separator file data/lang/subword_separator.txt is empty or does not exist, deal in word case.
    Checking data/lang/phones.txt ...
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> data/lang/phones.txt is OK
    
    Checking words.txt: #0 ...
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> data/lang/words.txt is OK
    
    Checking disjoint: silence.txt, nonsilence.txt, disambig.txt ...
    --> silence.txt and nonsilence.txt are disjoint
    --> silence.txt and disambig.txt are disjoint
    --> disambig.txt and nonsilence.txt are disjoint
    --> disjoint property is OK
    
    Checking sumation: silence.txt, nonsilence.txt, disambig.txt ...
    --> found no unexplainable phones in phones.txt
    
    Checking data/lang/phones/context_indep.{txt, int, csl} ...
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> 15 entry/entries in data/lang/phones/context_indep.txt
    --> data/lang/phones/context_indep.int corresponds to data/lang/phones/context_indep.txt
    --> data/lang/phones/context_indep.csl corresponds to data/lang/phones/context_indep.txt
    --> data/lang/phones/context_indep.{txt, int, csl} are OK
    
    Checking data/lang/phones/nonsilence.{txt, int, csl} ...
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> 116 entry/entries in data/lang/phones/nonsilence.txt
    --> data/lang/phones/nonsilence.int corresponds to data/lang/phones/nonsilence.txt
    --> data/lang/phones/nonsilence.csl corresponds to data/lang/phones/nonsilence.txt
    --> data/lang/phones/nonsilence.{txt, int, csl} are OK
    
    Checking data/lang/phones/silence.{txt, int, csl} ...
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> 15 entry/entries in data/lang/phones/silence.txt
    --> data/lang/phones/silence.int corresponds to data/lang/phones/silence.txt
    --> data/lang/phones/silence.csl corresponds to data/lang/phones/silence.txt
    --> data/lang/phones/silence.{txt, int, csl} are OK
    
    Checking data/lang/phones/optional_silence.{txt, int, csl} ...
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> 1 entry/entries in data/lang/phones/optional_silence.txt
    --> data/lang/phones/optional_silence.int corresponds to data/lang/phones/optional_silence.txt
    --> data/lang/phones/optional_silence.csl corresponds to data/lang/phones/optional_silence.txt
    --> data/lang/phones/optional_silence.{txt, int, csl} are OK
    
    Checking data/lang/phones/disambig.{txt, int, csl} ...
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> 14 entry/entries in data/lang/phones/disambig.txt
    --> data/lang/phones/disambig.int corresponds to data/lang/phones/disambig.txt
    --> data/lang/phones/disambig.csl corresponds to data/lang/phones/disambig.txt
    --> data/lang/phones/disambig.{txt, int, csl} are OK
    
    Checking data/lang/phones/roots.{txt, int} ...
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> 32 entry/entries in data/lang/phones/roots.txt
    --> data/lang/phones/roots.int corresponds to data/lang/phones/roots.txt
    --> data/lang/phones/roots.{txt, int} are OK
    
    Checking data/lang/phones/sets.{txt, int} ...
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> 32 entry/entries in data/lang/phones/sets.txt
    --> data/lang/phones/sets.int corresponds to data/lang/phones/sets.txt
    --> data/lang/phones/sets.{txt, int} are OK
    
    Checking data/lang/phones/extra_questions.{txt, int} ...
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> 11 entry/entries in data/lang/phones/extra_questions.txt
    --> data/lang/phones/extra_questions.int corresponds to data/lang/phones/extra_questions.txt
    --> data/lang/phones/extra_questions.{txt, int} are OK
    
    Checking data/lang/phones/word_boundary.{txt, int} ...
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> 131 entry/entries in data/lang/phones/word_boundary.txt
    --> data/lang/phones/word_boundary.int corresponds to data/lang/phones/word_boundary.txt
    --> data/lang/phones/word_boundary.{txt, int} are OK
    
    Checking optional_silence.txt ...
    --> reading data/lang/phones/optional_silence.txt
    --> data/lang/phones/optional_silence.txt is OK
    
    Checking disambiguation symbols: #0 and #1
    --> data/lang/phones/disambig.txt has "#0" and "#1"
    --> data/lang/phones/disambig.txt is OK
    
    Checking topo ...
    
    Checking word_boundary.txt: silence.txt, nonsilence.txt, disambig.txt ...
    --> data/lang/phones/word_boundary.txt doesn't include disambiguation symbols
    --> data/lang/phones/word_boundary.txt is the union of nonsilence.txt and silence.txt
    --> data/lang/phones/word_boundary.txt is OK
    
    Checking word-level disambiguation symbols...
    --> data/lang/phones/wdisambig.txt exists (newer prepare_lang.sh)
    Checking word_boundary.int and disambig.int
    --> generating a 35 word/subword sequence
    --> resulting phone sequence from L.fst corresponds to the word sequence
    --> L.fst is OK
    --> generating a 45 word/subword sequence
    --> resulting phone sequence from L_disambig.fst corresponds to the word sequence
    --> L_disambig.fst is OK
    
    Checking data/lang/oov.{txt, int} ...
    --> text seems to be UTF-8 or ASCII, checking whitespaces
    --> text contains only allowed whitespaces
    --> 1 entry/entries in data/lang/oov.txt
    --> data/lang/oov.int corresponds to data/lang/oov.txt
    --> data/lang/oov.{txt, int} are OK
    
    --> data/lang/L.fst is olabel sorted
    --> data/lang/L_disambig.fst is olabel sorted
    --> SUCCESS [validating lang directory data/lang]
    
    adapt our LM for kaldi...
    
    
    creating G.fst...
    arpa2fst -
    LOG (arpa2fst[5.5.0~1-2b62]:Read():arpa-file-parser.cc:94) Reading \data\ section.
    LOG (arpa2fst[5.5.0~1-2b62]:Read():arpa-file-parser.cc:149) Reading \1-grams: section.
    LOG (arpa2fst[5.5.0~1-2b62]:Read():arpa-file-parser.cc:149) Reading \2-grams: section.
    LOG (arpa2fst[5.5.0~1-2b62]:Read():arpa-file-parser.cc:149) Reading \3-grams: section.
    FATAL: FstCompiler: Bad number of columns, source = standard input, line = 28129
    ERROR: FstHeader::Read: Bad FST header: standard input
    
    make mfcc
    
    fix_data_dir.sh: kept all 12394 utterances.
    fix_data_dir.sh: old files are kept in data/train/.backup
    mkdir: cannot create directory 'data/train/wav.scp': File exists
    steps/make_mfcc.sh --cmd utils/run.pl --nj 12 data/train exp/make_mfcc_chain/train mfcc_chain
    utils/validate_data_dir.sh: Successfully validated data-directory data/train
    steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
    

    Can you please help me fix this issue? Thanks.

    opened by MahdiEsrafili 0
Owner
Rhasspy
Offline voice assistant