Larynx

End-to-end text to speech system using gruut and onnx. There are 40 voices available across 8 languages.

$ docker run -it -p 5002:5002 rhasspy/larynx:en-us

Larynx screenshot

Larynx's goals are:

  • "Good enough" synthesis to avoid using a cloud service
  • Faster than realtime performance on a Raspberry Pi 4 (with low quality vocoder)
  • Broad language support (8 languages)
  • Voices trained purely from public datasets

Samples

Listen to voice samples from all of the pre-trained models.


Docker Installation

Pre-built Docker images for each language are available for the following platforms:

  • linux/amd64 - desktop/laptop/server
  • linux/arm64 - Raspberry Pi 64-bit
  • linux/arm/v7 - Raspberry Pi 32-bit

Run the Larynx web server with:

$ docker run -it -p 5002:5002 rhasspy/larynx:<LANG>

where <LANG> is one of:

  • de-de - German
  • en-us - U.S. English
  • es-es - Spanish
  • fr-fr - French
  • it-it - Italian
  • nl - Dutch
  • ru-ru - Russian
  • sv-se - Swedish

Visit http://localhost:5002 for the test page. See http://localhost:5002/openapi/ for HTTP endpoint documentation.

A larger Docker image with all languages is also available as rhasspy/larynx.
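
If port 5002 is already in use on the host, remap it with Docker's standard host:container port mapping, and you can then call the server over HTTP from scripts. The curl sketch below assumes a /api/tts endpoint with text and voice query parameters; confirm the exact endpoint and parameter names on the /openapi/ page.

# Map host port 8080 to the container's port 5002
$ docker run -it -p 8080:5002 rhasspy/larynx:en-us

# Synthesize a WAV over HTTP (endpoint/parameter names assumed; see /openapi/)
$ curl -G 'http://localhost:8080/api/tts' \
    --data-urlencode 'text=Welcome to the world of speech synthesis!' \
    --data-urlencode 'voice=en-us/harvard-glow_tts' \
    -o welcome.wav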

Debian Installation

Pre-built Debian packages are available for download.

There are three different kinds of packages, so you can install exactly what you want and no more:

  • larynx-tts_<VERSION>_<ARCH>.deb
    • Base Larynx code and dependencies (always required)
    • ARCH is one of amd64 (most desktops, laptops), armhf (32-bit Raspberry Pi), arm64 (64-bit Raspberry Pi)
  • larynx-tts-lang-<LANGUAGE>_<VERSION>_all.deb
    • Language-specific data files (at least one required)
    • See above for a list of languages
  • larynx-tts-voice-<VOICE>_<VERSION>_all.deb
    • Voice-specific model files (at least one required)
    • See samples to decide which voice(s) to choose

As an example, let's say you want to use the "harvard-glow_tts" voice for English on an amd64 laptop for Larynx version 0.4.0. You would need to download these files:

  1. larynx-tts_0.4.0_amd64.deb
  2. larynx-tts-lang-en-us_0.4.0_all.deb
  3. larynx-tts-voice-en-us-harvard-glow-tts_0.4.0_all.deb

Once downloaded, you can install the packages all at once with:

sudo apt install \
  ./larynx-tts_0.4.0_amd64.deb \
  ./larynx-tts-lang-en-us_0.4.0_all.deb \
  ./larynx-tts-voice-en-us-harvard-glow-tts_0.4.0_all.deb

From there, you may run the larynx command or larynx-server to start the web server.
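
As a quick smoke test of a Debian installation, you can pipe a sentence through the larynx command (a sketch assuming larynx reads plain sentences from standard input when --csv is not given; the output directory is arbitrary):

$ echo 'Welcome to the world of speech synthesis!' | \
    larynx --voice harvard-glow_tts --quality high --output-dir /tmp/larynx-wavs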

Python Installation

$ pip install larynx

For Raspberry Pi (ARM), you will first need to manually install phonetisaurus.

For 32-bit ARM systems, a pre-built onnxruntime wheel is available (official 64-bit wheels are available on PyPI).

Language Download

Larynx uses gruut to transform text into phonemes. You must install the appropriate gruut language before using Larynx. U.S. English is included with gruut, but for other languages:

$ python3 -m gruut <LANGUAGE> download
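
For example, to add German support before using any de-de voice:

$ python3 -m gruut de-de download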

Voice/Vocoder Download

Voices and vocoders are available to download from the release page. They can be extracted anywhere; the directory simply needs to be referenced on the command line (e.g., --voices-dir /path/to/voices).
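
A minimal sketch of unpacking a downloaded voice and checking that Larynx can find it (the asset name here is only an example; actual file names and the layout inside the archive may differ slightly):

$ mkdir -p ~/larynx/voices
$ tar -C ~/larynx/voices -xzf en-us_harvard-glow_tts.tar.gz
$ larynx --voices-dir ~/larynx/voices --list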


Web Server

You can run a local web server with:

$ python3 -m larynx.server --voices-dir /path/to/voices

Visit http://localhost:5002 to view the site and try out voices. See http://localhost:5002/openapi/ for documentation on the available HTTP endpoints.

The following default settings can be applied (for when they're not provided in an API call):

  • --quality - vocoder quality (high/medium/low, default: high)
  • --noise-scale - voice volatility (0-1, default: 0.333)
  • --length-scale - voice speed (<1 is faster, default: 1.0)

You may also set --voices-dir to change where your voices/vocoders are stored. The directory structure should be <language>/<voice>.

See --help for more options.
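
For example, to start the server with a medium-quality vocoder and slightly slower speech as the defaults (using only the flags documented above):

$ python3 -m larynx.server \
    --voices-dir /path/to/voices \
    --quality medium \
    --length-scale 1.1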

MaryTTS Compatible API

To use Larynx as a drop-in replacement for a MaryTTS server (e.g., for use with Home Assistant), run:

$ docker run -it -p 59125:5002 rhasspy/larynx:<LANG>

The /process HTTP endpoint should now work for voices formatted as <LANG>/<VOICE>, such as en-us/harvard-glow_tts.

You can specify the vocoder by adding ;<VOCODER> to the MaryTTS voice.

For example: en-us/harvard-glow_tts;hifi_gan:vctk_small will use the lowest quality (but fastest) vocoder. This is usually necessary to get decent performance on a Raspberry Pi.

Available vocoders are:

  • hifi_gan:universal_large (best quality, slowest, default)
  • hifi_gan:vctk_medium (medium quality)
  • hifi_gan:vctk_small (lowest quality, fastest)
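
A sketch of calling the MaryTTS-compatible endpoint with curl. The INPUT_TEXT and VOICE parameter names come from the MaryTTS HTTP API that Larynx emulates; if the request fails, check the /openapi/ page for the parameters Larynx actually accepts.

$ curl -G 'http://localhost:59125/process' \
    --data-urlencode 'INPUT_TEXT=The birch canoe slid on the smooth planks.' \
    --data-urlencode 'VOICE=en-us/harvard-glow_tts;hifi_gan:vctk_small' \
    -o sentence.wav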

Command-Line Example

The command below synthesizes multiple sentences and saves them to a directory. The --csv command-line flag indicates that each sentence is of the form id|text where id will be the name of the WAV file.

$ cat << EOF |
s01|The birch canoe slid on the smooth planks.
s02|Glue the sheet to the dark blue background.
s03|It's easy to tell the depth of a well.
s04|These days a chicken leg is a rare dish.
s05|Rice is often served in round bowls.
s06|The juice of lemons makes fine punch.
s07|The box was thrown beside the parked truck.
s08|The hogs were fed chopped corn and garbage.
s09|Four hours of steady work faced us.
s10|Large size in stockings is hard to sell.
EOF
  larynx \
    --debug \
    --csv \
    --voice harvard-glow_tts \
    --quality high \
    --output-dir wavs \
    --denoiser-strength 0.001

You can use the --interactive flag instead of --output-dir to type sentences and have the audio played immediately using the play command from sox.
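
For example (a sketch; --quality low keeps latency tolerable on slower hardware, and SoX must be installed so that the play command is available):

$ larynx --voice harvard-glow_tts --quality low --interactive
# type a sentence and press ENTER; the synthesized audio plays immediately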

GlowTTS Settings

The GlowTTS voices support two additional parameters:

  • --noise-scale - determines the speaker volatility during synthesis (0-1, default is 0.333)
  • --length-scale - makes the voice speak slower (> 1) or faster (< 1)

Vocoder Settings

  • --denoiser-strength - runs the denoiser if > 0; a small value like 0.005 is recommended.
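
Putting the two sections together, a sketch of a slightly faster, less variable voice with the denoiser enabled (assuming plain sentences can be piped on stdin when --csv is omitted):

$ echo 'Four hours of steady work faced us.' | \
    larynx \
      --voice harvard-glow_tts \
      --noise-scale 0.2 \
      --length-scale 0.9 \
      --denoiser-strength 0.005 \
      --output-dir wavs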

List Voices and Vocoders

$ larynx --list

Text to Speech Models

Vocoders

  • Hi-Fi GAN
    • Universal large
    • VCTK "medium"
    • VCTK "small"
  • WaveGlow
    • 256 channel trained on LJ Speech
Comments
  • Dot (.) stops synthesis

    I am new to Larynx, so maybe my question can be answered easily and quickly, but I couldn't find anything to fix it.

Whenever a dot character is encountered, synthesis ends. I don't even need multiple sentences, but if it encounters something like X (feat. Y) it just says X feat. I am using Larynx over opentts in Home Assistant, but this can easily be replicated in the GUI as well. So how exactly can I fix this? And maybe for later, how exactly can I synthesize multiple sentences? Thank you very much in advance, the voices are superb!

    bug 
    opened by chainria 7
  • MaryTTS API interface is not 100% compatible

    Hi Michael,

    congratulations for your Larynx v1.0 release :partying_face: . Great work, as usual :slightly_smiling_face:.

    I've been trying to use Larynx with the new SEPIA v0.24.0 client since it has an option now to use MaryTTS compatible TTS systems directly, but encountered some issues:

    • The /voices endpoint is not delivering information in the same format. The MaryTTS API response is: [voice] [language] [gender] [tech=hmm] but Larynx is giving [language]/[voice]. Since I'm automatically parsing the string, it currently fails to get the right language.
    • The /voices endpoint will show all voices including the ones that haven't been downloaded yet.
    • The Larynx quality parameter is not accessible.

    The last point is not really a MaryTTS compatibility issue, but it would be great to get each voice as 'low, medium, high' variation from the 'voices' endpoint, so the user could actually choose them from the list.

    I believe the Larynx MaryTTS endpoints are mostly for Home-Assistant support and I'm not sure how HA is parsing the voices list (maybe it doesn't parse it at all or just uses the whole string), but it would be great to get the original format from the /voices endpoint. Would you be willing to make these changes? :innocent: :grin:

    opened by fquirin 6
  • Required versions for python and pip

    I have a working setup on a recent linux box (with python 3.8). But now I have to use an older computer (python 3.5, pip 8.1.1) and I run into trouble:

     Using cached https://files.pythonhosted.org/packages/f8/4d/a2.../larynx-0.3.1.tar.gz
     Complete output from command python setup.py egg_info:
     Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-sbulg1mn/larynx/setup.py", line 13
        long_description: str = ""
                        ^
     SyntaxError: invalid syntax
    

    What are the minimum versions required by larynx at the moment?

    opened by svenha 6
  • SSL error when downloading new tts

    Steps to reproduce:

    1. Run larynx-server on NixOS with Docker
    2. Attempt to download a tts

    Full error output:

    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/app/.venv/lib/python3.7/site-packages/quart/app.py", line 1827, in full_dispatch_request
        result = await self.dispatch_request(request_context)
      File "/app/.venv/lib/python3.7/site-packages/quart/app.py", line 1875, in dispatch_request
        return await handler(**request_.view_args)
      File "/app/larynx/server.py", line 667, in api_download
        tts_model_dir = download_voice(voice_name, voices_dirs[0], url)
      File "/app/larynx/utils.py", line 78, in download_voice
        response = urllib.request.urlopen(link)
      File "/usr/lib/python3.7/urllib/request.py", line 222, in urlopen
        return opener.open(url, data, timeout)
      File "/usr/lib/python3.7/urllib/request.py", line 531, in open
        response = meth(req, response)
      File "/usr/lib/python3.7/urllib/request.py", line 641, in http_response
        'http', request, response, code, msg, hdrs)
      File "/usr/lib/python3.7/urllib/request.py", line 563, in error
        result = self._call_chain(*args)
      File "/usr/lib/python3.7/urllib/request.py", line 503, in _call_chain
        result = func(*args)
      File "/usr/lib/python3.7/urllib/request.py", line 755, in http_error_302
        return self.parent.open(new, timeout=req.timeout)
      File "/usr/lib/python3.7/urllib/request.py", line 525, in open
        response = self._open(req, data)
      File "/usr/lib/python3.7/urllib/request.py", line 543, in _open
        '_open', req)
      File "/usr/lib/python3.7/urllib/request.py", line 503, in _call_chain
        result = func(*args)
      File "/usr/lib/python3.7/urllib/request.py", line 1367, in https_open
        context=self._context, check_hostname=self._check_hostname)
      File "/usr/lib/python3.7/urllib/request.py", line 1326, in do_open
        raise URLError(err)
    urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1056)>
    
    bug 
    opened by 18fadly-anthony 5
  • German voice

Did you think about using pavoque-data for a German voice? https://github.com/marytts/pavoque-data/releases/tag/v0.2 It has better quality compared with the thorsten audio dataset. It has a restriction on commercial use, but as I see you already have a voice with that restriction.

    enhancement 
    opened by alt131 5
  • How to change port number when running from docker

    I'm using the docker command mentioned in https://github.com/rhasspy/larynx#docker-installation

However, when I flip the port number in the docker run command, it doesn't work the way I want it to. May I know the right way to host larynx on a different port, if there is one?

    opened by thevickypedia 4
  • MaryTTS emulation and Home Assistant

    I'm having trouble setting up the MaryTTS component in Home Assistant to work with Larynx. In particular, there are several parameters that can be defined in yaml. The docs give this example:

    tts:
      - platform: marytts
        host: "localhost"
        port: 59125
        codec: "WAVE_FILE"
        voice: "cmu-slt-hsmm"
        language: "en_US"
        effect:
          Volume: "amount:2.0;"
    

    Larynx is up and running and I can generate speech via localhost:59125. I'd like to use a specific voice and quality setting with Home Assistant's TTS. I tried setting the following:

    ...
        voice: "harvard-glow_tts"
        language: "en_us"
    ...
    

    But Home Assistant's log shows an error saying that "en_us" is not a valid language ("en_US" is, though).

    What are the correct parameters necessary to use a specific voice? And would it be possible to use an effect key to set the voice quality (high, medium, low)?

    enhancement 
    opened by hawkeye217 4
  • New languages need a link

    Thanks @synesthesiam for this excellent tool.

    I followed the 'Python installation' method (on Ubuntu 20.10) and added the language de-de via python3 -m gruut de-de download. Before I could use the new language, I had to add a link in ~/.local/lib/python3.8/site-packages/gruut/data/ to ~/.config/gruut/de-de; otherwise the new language was not found.

    bug 
    opened by svenha 3
  • How to send text to larynx SERVER using BASH script?

    Impressive program!

    Using a bash script, how do I send text to the larynx SERVER?

    When using the larynx CLI, there is a 5-15 second startup delay (which I'm trying to eliminate). So I'd like to start a server, then send the text to the server to avoid this startup delay.

    Unfortunately, I've failed to find documentation or an example on how to connect to the server using a bash script. I've experimented with the "--daemon" option, and studied the larynx-server example, but failed to find a solution. (Please forgive my inexperience with TTS.)

    I'd appreciate an example (or pointer to documentation) showing how to send text to the larynx server using a bash script. Thank you.

    opened by gvimlag 2
  • Version/tag mismatch when downloading voices for 1.0.0 release

    Looks like the Github version tag is 1.0 but the code is looking for 1.0.0. The assets exist on Github with 1.0 in the path, but I'm getting this error when trying to download voices from the web interface:

    larynx.utils.VoiceDownloadError: Failed to download voice en-us_kathleen-glow_tts from http://github.com/rhasspy/larynx/releases/download/v1.0.0/en-us_kathleen-glow_tts.tar.gz: HTTP Error 404: Not Found
    
    opened by hawkeye217 2
  • OpenAPI page broken

    I am getting HTTP 500 returned when I go to http://localhost:5002/openapi/ - The browser page says "Fetch error undefined /openapi/swagger.json"

    On the command line, I tried find /usr/local/python3/ -name '*swagger*' and only got results for the swagger_ui package in site-packages.

    bug 
    opened by polski-g 2
  • Improve performance with caching

    I hope to gain some understanding about how feasible and useful it is to cache certain (intermediate) outputs. If large parts of phrases are re-used often, couldn't they be cached (perhaps on multiple levels) to improve response time? And if so, the cache could be pre-populated with expected outputs by speaking them all once. For example a program that reads the time of day could have a cache for 'The time is' as well as numbers up to 59. The expected reduction in response time would depend on which parts of the process actually take the most time, which I'm not sure about.

    opened by JeroenvdV 0
  • Dates like "1700s" and "1980s" are replaced with the current date

    I was really confused for a moment the first time I encountered this. :-)

    To reproduce:

    1. Run the Larynx server via the latest Docker container.
    2. Open the web interface. Paste a phrase like "It was popular in the distant past of the 1700s." and press "Speak".

    https://user-images.githubusercontent.com/3179832/199944142-5600025a-cc4a-4821-9be1-9bed10e5ecf9.mp4

    opened by dbohdan 0
  • voices-dir option of larynx.server doesn't work

    I'm using larynx==1.1.0

Even when I specified --voices-dir with a directory which contains correct voice models, it seems not to be used.

    python3 -m larynx.server --voices-dir /var/lib/larynx-model/voices
    
    DEBUG:larynx.utils:Downloading voice/vocoder for en-us_cmu_eey-glow_tts to /root/.local/share/larynx/voices from http://github.com/rhasspy/larynx/releases/download/v1.0/en-us_cmu_eey-glow_tts.tar.gz
    en-us_cmu_eey-glow_tts:  18%|████████████████████▍           
    
    opened by syundo0730 0
  • ImportError: cannot import name 'escape' from 'jinja2'

The Python installation method results in this error from larynx-server on Ubuntu 22.04.1:

    (larynx_venv) [email protected]:~/Downloads/larynx-master$ larynx-server 
    Traceback (most recent call last):
      File "/home/user/Downloads/larynx-master/larynx_venv/bin/larynx-server", line 5, in <module>
        from larynx.server.__main__ import main
      File "/home/user/Downloads/larynx-master/larynx_venv/lib/python3.10/site-packages/larynx/server.py", line 24, in <module>
        import quart_cors
      File "/home/user/Downloads/larynx-master/larynx_venv/lib/python3.10/site-packages/quart_cors/__init__.py", line 5, in <module>
        from quart import abort, Blueprint, current_app, make_response, Quart, request, Response, websocket
      File "/home/user/Downloads/larynx-master/larynx_venv/lib/python3.10/site-packages/quart/__init__.py", line 3, in <module>
        from jinja2 import escape, Markup
    ImportError: cannot import name 'escape' from 'jinja2' (/home/user/Downloads/larynx-master/larynx_venv/lib/python3.10/site-packages/jinja2/__init__.py)
    
    
    opened by faveoled 1
  • Browser request for favicon.ico returns HTTP 500 error and error on console

    It would be nice to trap the favicon.ico HTTP request and return a simple icon so that the browser does not get a 500 error and the client does not log the error message.

    ERROR:larynx:404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
    Traceback (most recent call last):
      File "/home/larynx/app/.venv/lib/python3.9/site-packages/quart/app.py", line 1490, in full_dispatch_request
        result = await self.dispatch_request(request_context)
      File "/home/larynx/app/.venv/lib/python3.9/site-packages/quart/app.py", line 1530, in dispatch_request
        self.raise_routing_exception(request_)
      File "/home/larynx/app/.venv/lib/python3.9/site-packages/quart/app.py", line 1038, in raise_routing_exception
        raise request.routing_exception
      File "/home/larynx/app/.venv/lib/python3.9/site-packages/quart/ctx.py", line 64, in match_request
        ) = self.url_adapter.match(  # type: ignore
      File "/home/larynx/app/.venv/lib/python3.9/site-packages/werkzeug/routing.py", line 2041, in match
        raise NotFound()
    werkzeug.exceptions.NotFound: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
    
    opened by mrdvt92 0