Larynx

End-to-end text to speech system using gruut and onnx. There are 40 voices available across 8 languages.

$ docker run -it -p 5002:5002 rhasspy/larynx:en-us

Larynx screenshot

Larynx's goals are:

  • "Good enough" synthesis to avoid using a cloud service
  • Faster than realtime performance on a Raspberry Pi 4 (with low quality vocoder)
  • Broad language support (8 languages)
  • Voices trained purely from public datasets

Samples

Listen to voice samples from all of the pre-trained models.


Docker Installation

Pre-built Docker images for each language are available for the following platforms:

  • linux/amd64 - desktop/laptop/server
  • linux/arm64 - Raspberry Pi 64-bit
  • linux/arm/v7 - Raspberry Pi 32-bit

Run the Larynx web server with:

$ docker run -it -p 5002:5002 rhasspy/larynx:<LANG>

where <LANG> is one of:

  • de-de - German
  • en-us - U.S. English
  • es-es - Spanish
  • fr-fr - French
  • it-it - Italian
  • nl - Dutch
  • ru-ru - Russian
  • sv-se - Swedish

Visit http://localhost:5002 for the test page. See http://localhost:5002/openapi/ for HTTP endpoint documentation.

A larger Docker image with all languages is also available as rhasspy/larynx.
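
Once a container is running, you can also call the HTTP API from a script. The example below is only a sketch: it assumes a /api/tts endpoint that accepts text and voice query parameters and returns a WAV file; confirm the exact endpoint and parameter names at http://localhost:5002/openapi/ before relying on them.

$ curl -G 'http://localhost:5002/api/tts' \
    --data-urlencode 'voice=en-us/harvard-glow_tts' \
    --data-urlencode 'text=Welcome to the world of speech synthesis.' \
    -o welcome.wav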

Debian Installation

Pre-built Debian packages are available for download.

There are three different kinds of packages, so you can install exactly what you want and no more:

  • larynx-tts_<VERSION>_<ARCH>.deb
    • Base Larynx code and dependencies (always required)
    • ARCH is one of amd64 (most desktops, laptops), armhf (32-bit Raspberry Pi), arm64 (64-bit Raspberry Pi)
  • larynx-tts-lang-<LANGUAGE>_<VERSION>_all.deb
    • Language-specific data files (at least one required)
    • See above for a list of languages
  • larynx-tts-voice-<VOICE>_<VERSION>_all.deb
    • Voice-specific model files (at least one required)
    • See samples to decide which voice(s) to choose

As an example, let's say you want to use the "harvard-glow_tts" voice for English on an amd64 laptop for Larynx version 0.4.0. You would need to download these files:

  1. larynx-tts_0.4.0_amd64.deb
  2. larynx-tts-lang-en-us_0.4.0_all.deb
  3. larynx-tts-voice-en-us-harvard-glow-tts_0.4.0_all.deb

Once downloaded, you can install the packages all at once with:

sudo apt install \
  ./larynx-tts_0.4.0_amd64.deb \
  ./larynx-tts-lang-en-us_0.4.0_all.deb \
  ./larynx-tts-voice-en-us-harvard-glow-tts_0.4.0_all.deb

From there, you may run the larynx command or larynx-server to start the web server.
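
As a quick check after installation, you can synthesize a sentence from the command line. This is a minimal sketch using the flags documented in the command-line example further below; it writes s01.wav into the wavs directory:

$ echo 's01|The birch canoe slid on the smooth planks.' | \
    larynx --csv --voice harvard-glow_tts --output-dir wavs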

Python Installation

$ pip install larynx

For Raspberry Pi (ARM), you will first need to manually install phonetisaurus.

For 32-bit ARM systems, a pre-built onnxruntime wheel is available (official 64-bit wheels are available on PyPI).

Language Download

Larynx uses gruut to transform text into phonemes. You must install the appropriate gruut language before using Larynx. U.S. English is included with gruut, but for other languages:

$ python3 -m gruut <LANGUAGE> download
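
For example, to add German (de-de) support before using one of the German voices:

$ python3 -m gruut de-de download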

Voice/Vocoder Download

Voices and vocoders are available to download from the release page. They can be extracted anywhere; the directory simply needs to be referenced on the command line (e.g., --voices-dir /path/to/voices).
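
As a sketch, a voice archive can be fetched and unpacked like this. The release tag and archive name below are only an example (they match assets from the v1.0 release); pick the voice you actually want from the release page and verify with larynx --list that it is found:

$ mkdir -p /path/to/voices
$ wget https://github.com/rhasspy/larynx/releases/download/v1.0/en-us_kathleen-glow_tts.tar.gz
$ tar -xzf en-us_kathleen-glow_tts.tar.gz -C /path/to/voices
$ larynx --list --voices-dir /path/to/voices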


Web Server

You can run a local web server with:

$ python3 -m larynx.server --voices-dir /path/to/voices

Visit http://localhost:5002 to view the site and try out voices. See http://localhost:5002/openapi/ for documentation on the available HTTP endpoints.

The following defaults can be set on the command line; they are used when the corresponding values are not provided in an API call:

  • --quality - vocoder quality (high/medium/low, default: high)
  • --noise-scale - voice volatility (0-1, default: 0.333)
  • --length-scale - voice speed (<1 is faster, default: 1.0)

You may also set --voices-dir to change where your voices/vocoders are stored. The directory structure should be <language>/<voice> (e.g., en-us/harvard-glow_tts).

See --help for more options.
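
For example, to start the server with a faster vocoder and slightly quicker speech as the defaults:

$ python3 -m larynx.server \
    --voices-dir /path/to/voices \
    --quality low \
    --length-scale 0.9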

MaryTTS Compatible API

To use Larynx as a drop-in replacement for a MaryTTS server (e.g., for use with Home Assistant), run:

$ docker run -it -p 59125:5002 rhasspy/larynx:<LANG>

The /process HTTP endpoint should now work for voices formatted as <language>/<voice>, such as en-us/harvard-glow_tts.

You can specify the vocoder by adding ;<vocoder> to the MaryTTS voice.

For example: en-us/harvard-glow_tts;hifi_gan:vctk_small will use the lowest quality (but fastest) vocoder. This is usually necessary to get decent performance on a Raspberry Pi.

Available vocoders are:

  • hifi_gan:universal_large (best quality, slowest, default)
  • hifi_gan:vctk_medium (medium quality)
  • hifi_gan:vctk_small (lowest quality, fastest)
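
As a sketch of a client request, the standard MaryTTS /process query parameters can be used to fetch a WAV file. INPUT_TEXT and VOICE are assumed here to be sufficient; other MaryTTS parameters may simply be ignored by Larynx:

$ curl -G 'http://localhost:59125/process' \
    --data-urlencode 'INPUT_TEXT=The juice of lemons makes fine punch.' \
    --data-urlencode 'VOICE=en-us/harvard-glow_tts;hifi_gan:vctk_small' \
    -o punch.wav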

Command-Line Example

The command below synthesizes multiple sentences and saves them to a directory. The --csv command-line flag indicates that each sentence is of the form id|text where id will be the name of the WAV file.

$ cat << EOF |
s01|The birch canoe slid on the smooth planks.
s02|Glue the sheet to the dark blue background.
s03|It's easy to tell the depth of a well.
s04|These days a chicken leg is a rare dish.
s05|Rice is often served in round bowls.
s06|The juice of lemons makes fine punch.
s07|The box was thrown beside the parked truck.
s08|The hogs were fed chopped corn and garbage.
s09|Four hours of steady work faced us.
s10|Large size in stockings is hard to sell.
EOF
  larynx \
    --debug \
    --csv \
    --voice harvard-glow_tts \
    --quality high \
    --output-dir wavs \
    --denoiser-strength 0.001

You can use the --interactive flag instead of --output-dir to type sentences and have the audio played immediately using the play command from sox.

GlowTTS Settings

The GlowTTS voices support two additional parameters:

  • --noise-scale - determines the speaker volatility during synthesis (0-1, default: 0.333)
  • --length-scale - makes the voice speak slower (> 1) or faster (< 1)

Vocoder Settings

  • --denoiser-strength - runs the denoiser if > 0; a small value like 0.005 is recommended.
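
Combining these settings, the example below (using the same stdin/--csv input format as the command-line example above) synthesizes a slightly slower, more stable voice with light denoising:

$ echo 's01|Rice is often served in round bowls.' | \
    larynx \
      --csv \
      --voice harvard-glow_tts \
      --noise-scale 0.2 \
      --length-scale 1.1 \
      --denoiser-strength 0.005 \
      --output-dir wavs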

List Voices and Vocoders

$ larynx --list

Text to Speech Models

Vocoders

  • Hi-Fi GAN
    • Universal large
    • VCTK "medium"
    • VCTK "small"
  • WaveGlow
    • 256 channel trained on LJ Speech

Comments
  • Dot (.) stops synthesis

    I am new to Larynx, so maybe my question can be answered easily and quickly, but I couldn't find anything to fix it.

    Whenever a dot character is encountered, synthesis ends. I don't even need multiple sentences, but if it encounters something like X (feat. Y) it just says X feat. I am using Larynx over opentts in Home Assistant, but this can easily be replicated in the GUI as well. So how exactly can I fix this? And maybe for later, how exactly can I synthesize multiple sentences? Thank you very much in advance, the voices are superb!

    bug 
    opened by chainria 7
  • MaryTTS API interface is not 100% compatible

    Hi Michael,

    congratulations for your Larynx v1.0 release :partying_face: . Great work, as usual :slightly_smiling_face:.

    I've been trying to use Larynx with the new SEPIA v0.24.0 client since it has an option now to use MaryTTS compatible TTS systems directly, but encountered some issues:

    • The /voices endpoint is not delivering information in the same format. The MaryTTS API response is: [voice] [language] [gender] [tech=hmm] but Larynx is giving [language]/[voice]. Since I'm automatically parsing the string, it currently fails to get the right language.
    • The /voices endpoint will show all voices including the ones that haven't been downloaded yet.
    • The Larynx quality parameter is not accessible.

    The last point is not really a MaryTTS compatibility issue, but it would be great to get each voice as 'low, medium, high' variation from the 'voices' endpoint, so the user could actually choose them from the list.

    I believe the Larynx MaryTTS endpoints are mostly for Home-Assistant support and I'm not sure how HA is parsing the voices list (maybe it doesn't parse it at all or just uses the whole string), but it would be great to get the original format from the /voices endpoint. Would you be willing to make these changes? :innocent: :grin:

    opened by fquirin 6
  • Required versions for python and pip

    I have a working setup on a recent linux box (with python 3.8). But now I have to use an older computer (python 3.5, pip 8.1.1) and I run into trouble:

     Using cached https://files.pythonhosted.org/packages/f8/4d/a2.../larynx-0.3.1.tar.gz
     Complete output from command python setup.py egg_info:
     Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-sbulg1mn/larynx/setup.py", line 13
        long_description: str = ""
                        ^
     SyntaxError: invalid syntax
    

    What are the minimum versions required by larynx at the moment?

    opened by svenha 6
  • SSL error when downloading new tts

    Steps to reproduce:

    1. Run larynx-server on NixOS with Docker
    2. Attempt to download a tts

    Full error output:

    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/app/.venv/lib/python3.7/site-packages/quart/app.py", line 1827, in full_dispatch_request
        result = await self.dispatch_request(request_context)
      File "/app/.venv/lib/python3.7/site-packages/quart/app.py", line 1875, in dispatch_request
        return await handler(**request_.view_args)
      File "/app/larynx/server.py", line 667, in api_download
        tts_model_dir = download_voice(voice_name, voices_dirs[0], url)
      File "/app/larynx/utils.py", line 78, in download_voice
        response = urllib.request.urlopen(link)
      File "/usr/lib/python3.7/urllib/request.py", line 222, in urlopen
        return opener.open(url, data, timeout)
      File "/usr/lib/python3.7/urllib/request.py", line 531, in open
        response = meth(req, response)
      File "/usr/lib/python3.7/urllib/request.py", line 641, in http_response
        'http', request, response, code, msg, hdrs)
      File "/usr/lib/python3.7/urllib/request.py", line 563, in error
        result = self._call_chain(*args)
      File "/usr/lib/python3.7/urllib/request.py", line 503, in _call_chain
        result = func(*args)
      File "/usr/lib/python3.7/urllib/request.py", line 755, in http_error_302
        return self.parent.open(new, timeout=req.timeout)
      File "/usr/lib/python3.7/urllib/request.py", line 525, in open
        response = self._open(req, data)
      File "/usr/lib/python3.7/urllib/request.py", line 543, in _open
        '_open', req)
      File "/usr/lib/python3.7/urllib/request.py", line 503, in _call_chain
        result = func(*args)
      File "/usr/lib/python3.7/urllib/request.py", line 1367, in https_open
        context=self._context, check_hostname=self._check_hostname)
      File "/usr/lib/python3.7/urllib/request.py", line 1326, in do_open
        raise URLError(err)
    urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1056)>
    
    bug 
    opened by 18fadly-anthony 5
  • German voice

    Did you think about using pavoque-data for a German voice? https://github.com/marytts/pavoque-data/releases/tag/v0.2 It has better quality compared with the thorsten audio dataset. It has a restriction on commercial use, but as I see you already have a voice with that restriction.

    enhancement 
    opened by alt131 5
  • How to change port number when running from docker

    I'm using the docker command mentioned in https://github.com/rhasspy/larynx#docker-installation

    However, when I flip the port number in the docker run command, it doesn't work the way I want it to. May I know the right way to host larynx on a different port, if there is one?

    opened by thevickypedia 4
  • MaryTTS emulation and Home Assistant

    I'm having trouble setting up the MaryTTS component in Home Assistant to work with Larynx. In particular, there are several parameters that can be defined in yaml. The docs give this example:

    tts:
      - platform: marytts
        host: "localhost"
        port: 59125
        codec: "WAVE_FILE"
        voice: "cmu-slt-hsmm"
        language: "en_US"
        effect:
          Volume: "amount:2.0;"
    

    Larynx is up and running and I can generate speech via localhost:59125. I'd like to use a specific voice and quality setting with Home Assistant's TTS. I tried setting the following:

    ...
        voice: "harvard-glow_tts"
        language: "en_us"
    ...
    

    But Home Assistant's log shows an error saying that "en_us" is not a valid language ("en_US" is, though).

    What are the correct parameters necessary to use a specific voice? And would it be possible to use an effect key to set the voice quality (high, medium, low)?

    enhancement 
    opened by hawkeye217 4
  • New languages need a link

    Thanks @synesthesiam for this excellent tool.

    I followed the 'Python installation' method (on Ubuntu 20.10) and added the language de-de via python3 -m gruut de-de download. Before I could use the new language, I had to add a link in ~/.local/lib/python3.8/site-packages/gruut/data/ to ~/.config/gruut/de-de; otherwise the new language was not found.

    bug 
    opened by svenha 3
  • How to send text to larynx SERVER using BASH script?

    Impressive program!

    Using a bash script, how do I send text to the larynx SERVER?

    When using the larynx CLI, there is a 5-15 second startup delay (which I'm trying to eliminate). So I'd like to start a server, then send the text to the server to avoid this startup delay.

    Unfortunately, I've failed to find documentation or an example on how to connect to the server using a bash script. I've experimented with the "--daemon" option, and studied the larynx-server example, but failed to find a solution. (Please forgive my inexperience with TTS.)

    I'd appreciate an example (or pointer to documentation) showing how to send text to the larynx server using a bash script. Thank you.

    opened by gvimlag 2
  • Version/tag mismatch when downloading voices for 1.0.0 release

    Looks like the Github version tag is 1.0 but the code is looking for 1.0.0. The assets exist on Github with 1.0 in the path, but I'm getting this error when trying to download voices from the web interface:

    larynx.utils.VoiceDownloadError: Failed to download voice en-us_kathleen-glow_tts from http://github.com/rhasspy/larynx/releases/download/v1.0.0/en-us_kathleen-glow_tts.tar.gz: HTTP Error 404: Not Found
    
    opened by hawkeye217 2
  • OpenAPI page broken

    I am getting HTTP 500 returned when I go to http://localhost:5002/openapi/ - The browser page says "Fetch error undefined /openapi/swagger.json"

    On the command line, I tried find /usr/local/python3/ -name '*swagger*' and only got results for the swagger_ui package in site-packages.

    bug 
    opened by polski-g 2
  • Improve performance with caching

    I hope to gain some understanding about how feasible and useful it is to cache certain (intermediate) outputs. If large parts of phrases are re-used often, couldn't they be cached (perhaps on multiple levels) to improve response time? And if so, the cache could be pre-populated with expected outputs by speaking them all once. For example a program that reads the time of day could have a cache for 'The time is' as well as numbers up to 59. The expected reduction in response time would depend on which parts of the process actually take the most time, which I'm not sure about.

    opened by JeroenvdV 0
  • Dates like "1700s" and "1980s" are replaced with the current date

    I was really confused for a moment the first time I encountered this. :-)

    To reproduce:

    1. Run the Larynx server via the latest Docker container.
    2. Open the web interface. Paste a phrase like "It was popular in the distant past of the 1700s." and press "Speak".

    https://user-images.githubusercontent.com/3179832/199944142-5600025a-cc4a-4821-9be1-9bed10e5ecf9.mp4

    opened by dbohdan 0
  • voices-dir option of larynx.server doesn't work

    I'm using larynx==1.1.0

    Even when I specify --voices-dir with a directory which contains correct voice models, it does not seem to be used.

    python3 -m larynx.server --voices-dir /var/lib/larynx-model/voices
    
    DEBUG:larynx.utils:Downloading voice/vocoder for en-us_cmu_eey-glow_tts to /root/.local/share/larynx/voices from http://github.com/rhasspy/larynx/releases/download/v1.0/en-us_cmu_eey-glow_tts.tar.gz
    en-us_cmu_eey-glow_tts:  18%|████████████████████▍           
    
    opened by syundo0730 0
  • ImportError: cannot import name 'escape' from 'jinja2'

    The Python installation method results in this error from larynx-server on Ubuntu 22.04.1:

    (larynx_venv) [email protected]:~/Downloads/larynx-master$ larynx-server 
    Traceback (most recent call last):
      File "/home/user/Downloads/larynx-master/larynx_venv/bin/larynx-server", line 5, in <module>
        from larynx.server.__main__ import main
      File "/home/user/Downloads/larynx-master/larynx_venv/lib/python3.10/site-packages/larynx/server.py", line 24, in <module>
        import quart_cors
      File "/home/user/Downloads/larynx-master/larynx_venv/lib/python3.10/site-packages/quart_cors/__init__.py", line 5, in <module>
        from quart import abort, Blueprint, current_app, make_response, Quart, request, Response, websocket
      File "/home/user/Downloads/larynx-master/larynx_venv/lib/python3.10/site-packages/quart/__init__.py", line 3, in <module>
        from jinja2 import escape, Markup
    ImportError: cannot import name 'escape' from 'jinja2' (/home/user/Downloads/larynx-master/larynx_venv/lib/python3.10/site-packages/jinja2/__init__.py)
    
    
    opened by faveoled 1
  • Browser request for favicon.ico returns HTTP 500 error and error on console

    It would be nice to trap the favicon.ico HTTP request and return a simple icon so that the browser does not get a 500 error and the client does not log the error message.

    ERROR:larynx:404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
    Traceback (most recent call last):
      File "/home/larynx/app/.venv/lib/python3.9/site-packages/quart/app.py", line 1490, in full_dispatch_request
        result = await self.dispatch_request(request_context)
      File "/home/larynx/app/.venv/lib/python3.9/site-packages/quart/app.py", line 1530, in dispatch_request
        self.raise_routing_exception(request_)
      File "/home/larynx/app/.venv/lib/python3.9/site-packages/quart/app.py", line 1038, in raise_routing_exception
        raise request.routing_exception
      File "/home/larynx/app/.venv/lib/python3.9/site-packages/quart/ctx.py", line 64, in match_request
        ) = self.url_adapter.match(  # type: ignore
      File "/home/larynx/app/.venv/lib/python3.9/site-packages/werkzeug/routing.py", line 2041, in match
        raise NotFound()
    werkzeug.exceptions.NotFound: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
    
    opened by mrdvt92 0