Overview

Lingvo

What is it?

Lingvo is a framework for building neural networks in TensorFlow, particularly sequence models.

A list of publications using Lingvo can be found here.

Releases

PyPI Version  Commit
0.10.0        075fd1d88fa6f92681f58a2383264337d0e737ee
0.9.1         c1124c5aa7af13d2dd2b6d43293c8ca6d022b008
0.9.0         f826e99803d1b51dccbbbed1ef857ba48a2bbefe
Older releases

PyPI Version  Commit
0.8.2         93e123c6788e934e6b7b1fd85770371becf1e92e
0.7.2         b05642fe386ee79e0d88aa083565c9a93428519e

Details for older releases are unavailable.

Major breaking changes

NOTE: this is not a comprehensive list. Lingvo releases do not offer any guarantees regarding backwards compatibility.

HEAD

Nothing here.

0.10.0

  • General
    • The theta_fn arg to CreateVariable() has been removed.

0.9.1

  • General
    • Python 3.9 is now supported.
    • ops.beam_search_step now takes and returns an additional arg beam_done.
    • The namedtuple beam_search_helper.BeamSearchDecodeOutput no longer has the field done_hyps.
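The beam_done plumbing can be pictured with a minimal pure-Python stand-in. This is purely illustrative: the names below are hypothetical, and the real ops.beam_search_step operates on tensors inside a TensorFlow graph.

```python
# Minimal stand-in for how beam_done is threaded through decoding steps.
# Each step takes the carried-in beam_done and returns an updated one;
# decoding stops early once every beam has finished.

def run_decode(step_fn, num_beams, max_steps):
    beam_done = [False] * num_beams  # carried in and out of every step
    for step in range(max_steps):
        beam_done = step_fn(step, beam_done)
        if all(beam_done):  # all beams finished; stop decoding
            break
    return beam_done

# A toy step_fn that marks beam i as done at step i.
def toy_step(step, beam_done):
    return [done or (i <= step) for i, done in enumerate(beam_done)]
```

With toy_step, run_decode(toy_step, 3, 10) finishes all three beams after three steps.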

0.9.0

  • General
    • TensorFlow 2.5 is now the required version.
    • Python 3.5 support has been removed.
    • py_utils.AddGlobalVN and py_utils.AddPerStepVN have been combined into py_utils.AddVN.
    • BaseSchedule().Value() no longer takes a step arg.
    • Classes deriving from BaseSchedule should implement Value(), not FProp().
    • theta.global_step has been removed in favor of py_utils.GetGlobalStep().
    • py_utils.GenerateStepSeedPair() no longer takes a global_step arg.
    • PostTrainingStepUpdate() no longer takes a global_step arg.
    • The fatal_errors argument to custom input ops now takes error message substrings rather than integer error codes.
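The last change can be sketched in plain Python. The helper below is hypothetical (the real matching happens inside the custom input ops); it only illustrates the new substring semantics.

```python
# Illustrative sketch of the new fatal_errors semantics: an input error is
# treated as fatal if its message contains any configured substring.
# (Hypothetical helper; not part of the Lingvo API.)

def is_fatal(error_message, fatal_error_substrings):
    return any(sub in error_message for sub in fatal_error_substrings)

# Before 0.9.0, fatal_errors held integer TF error codes; now it holds
# message substrings, for example:
fatal_errors = ['Data loss', 'truncated record']
```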

Older releases

0.8.2

  • General
    • NestedMap Flatten/Pack/Transform/Filter etc. now expand descendant dicts as well.
    • Subclasses of BaseLayer extending from abc.ABCMeta should now extend base_layer.ABCLayerMeta instead.
    • Trying to call self.CreateChild outside of __init__ now raises an error.
    • base_layer.initializer has been removed. Subclasses no longer need to decorate their __init__ function.
    • Trying to call self.CreateVariable outside of __init__ or _CreateLayerVariables now raises an error.
    • It is no longer possible to access self.vars or self.theta inside of __init__. Refactor by moving the variable creation and access to _CreateLayerVariables. The variable scope is set automatically according to the layer name in _CreateLayerVariables.
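The new variable-creation lifecycle can be pictured with a toy analogue. This is not Lingvo's implementation; it is a self-contained sketch of the contract: CreateVariable is only legal inside _CreateLayerVariables, names are scoped by the layer name, and vars is only meaningful after variable creation runs.

```python
# Toy analogue of the 0.8.2 layer lifecycle (hypothetical classes, not
# Lingvo's). Variables may only be created inside _CreateLayerVariables,
# and variable names are scoped by the layer name automatically.

class ToyLayer:
    def __init__(self, name):
        self.name = name
        self._vars = {}
        self._in_variable_phase = False

    def CreateVariable(self, var_name, value):
        if not self._in_variable_phase:
            raise RuntimeError(
                'CreateVariable must be called from _CreateLayerVariables.')
        # Scope the variable name by the layer name.
        self._vars[f'{self.name}/{var_name}'] = value

    def _CreateLayerVariables(self):
        pass  # subclasses create their variables here

    def InstantiateVariables(self):
        self._in_variable_phase = True
        self._CreateLayerVariables()
        self._in_variable_phase = False

    @property
    def vars(self):
        return dict(self._vars)


class ToyDense(ToyLayer):
    def _CreateLayerVariables(self):
        self.CreateVariable('w', [[1.0]])
```

Refactoring means moving any CreateVariable calls out of __init__ and into _CreateLayerVariables; calling it anywhere else raises an error, mirroring the behavior described above.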

Details for older releases are unavailable.

Quick start

Installation

There are two ways to set up Lingvo: installing a fixed version through pip, or cloning the repository and building it with bazel. Docker configurations are provided for each case.

If you would just like to use the framework as-is, installing it through pip is easiest. This makes it possible to develop and train custom models using a frozen version of the Lingvo framework. However, it is difficult to modify the framework code or implement new custom ops.

If you would like to develop the framework further and potentially contribute pull requests, you should avoid using pip and clone the repository instead.

pip:

The Lingvo pip package can be installed with pip3 install lingvo.

See the codelab for how to get started with the pip package.

From sources:

The prerequisites are:

  • a TensorFlow 2.6 installation,
  • a C++ compiler (only g++ 7.3 is officially supported), and
  • the bazel build system.

Refer to docker/dev.dockerfile for a set of working requirements.

git clone the repository, then use bazel to build and run targets directly. The python -m module commands in the codelab need to be mapped onto bazel run commands.

docker:

Docker configurations are available for both situations. Instructions can be found in the comments on the top of each file.

How to install docker.

Running the MNIST image model

Preparing the input data

pip:

mkdir -p /tmp/mnist
python3 -m lingvo.tools.keras2ckpt --dataset=mnist

bazel:

mkdir -p /tmp/mnist
bazel run -c opt //lingvo/tools:keras2ckpt -- --dataset=mnist

The following files will be created in /tmp/mnist:

  • mnist.data-00000-of-00001: 53MB.
  • mnist.index: 241 bytes.

Running the model

pip:

cd /tmp/mnist
curl -O https://raw.githubusercontent.com/tensorflow/lingvo/master/lingvo/tasks/image/params/mnist.py
python3 -m lingvo.trainer --run_locally=cpu --mode=sync --model=mnist.LeNet5 --logdir=/tmp/mnist/log

bazel:

(cpu) bazel build -c opt //lingvo:trainer
(gpu) bazel build -c opt --config=cuda //lingvo:trainer
bazel-bin/lingvo/trainer --run_locally=cpu --mode=sync --model=image.mnist.LeNet5 --logdir=/tmp/mnist/log --logtostderr

After about 20 seconds, the loss should drop below 0.3 and a checkpoint will be saved, as shown below. Kill the trainer with Ctrl+C.

trainer.py:518] step:   205, steps/sec: 11.64 ... loss:0.25747201 ...
checkpointer.py:115] Save checkpoint
checkpointer.py:117] Save checkpoint done: /tmp/mnist/log/train/ckpt-00000205

Some artifacts will be produced in /tmp/mnist/log/control:

  • params.txt: hyper-parameters.
  • model_analysis.txt: model sizes for each layer.
  • train.pbtxt: the training tf.GraphDef.
  • events.*: a TensorBoard events file.

As well as in /tmp/mnist/log/train:

  • checkpoint: a text file containing information about the checkpoint files.
  • ckpt-*: the checkpoint files.

Now, let's evaluate the model on the "Test" dataset. In the normal training setup the trainer and evaler should be run at the same time as two separate processes.

pip:

python3 -m lingvo.trainer --job=evaler_test --run_locally=cpu --mode=sync --model=mnist.LeNet5 --logdir=/tmp/mnist/log

bazel:

bazel-bin/lingvo/trainer --job=evaler_test --run_locally=cpu --mode=sync --model=image.mnist.LeNet5 --logdir=/tmp/mnist/log --logtostderr

Kill the job with Ctrl+C when it starts waiting for a new checkpoint.

base_runner.py:177] No new check point is found: /tmp/mnist/log/train/ckpt-00000205

The evaluation accuracy can be found slightly earlier in the logs.

base_runner.py:111] eval_test: step:   205, acc5: 0.99775392, accuracy: 0.94150388, ..., loss: 0.20770954, ...

Running the machine translation model

To run a more elaborate model, you'll need a cluster with GPUs. Please refer to third_party/py/lingvo/tasks/mt/README.md for more information.

Running the GShard transformer based giant language model

To train a GShard language model with one trillion parameters on GCP with CloudTPU v3-512, using 512-way model parallelism, please refer to third_party/py/lingvo/tasks/lm/README.md for more information.

Running the 3d object detection model

To run the StarNet model using CloudTPUs on GCP, please refer to third_party/py/lingvo/tasks/car/README.md.

Models

Automatic Speech Recognition

Car

Image

Language Modelling

Machine Translation

References

Please cite this paper when referencing Lingvo.

@misc{shen2019lingvo,
    title={Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling},
    author={Jonathan Shen and Patrick Nguyen and Yonghui Wu and Zhifeng Chen and others},
    year={2019},
    eprint={1902.08295},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

License

Apache License 2.0
