Unsupervised Phone and Word Segmentation using Vector-Quantized Neural Networks

Unsupervised phone and word segmentation using dynamic programming on self-supervised VQ features.

License: MIT

Overview

This repository performs unsupervised phone and word segmentation on speech data. The experiments are described in:

  • H. Kamper, "Word segmentation on discovered phone units with dynamic programming and self-supervised scoring," arXiv preprint arXiv:2202.11929, 2022. [arXiv]
  • H. Kamper and B. van Niekerk, "Towards unsupervised phone and word segmentation using self-supervised vector-quantized neural networks," in Proc. Interspeech, 2021. [arXiv]

Please cite these papers if you use the code.

Dependencies

Dependencies can be installed in a conda environment:

conda env create -f environment.yml
conda activate dpdp

This does not include wordseg, which should be installed in its own environment according to its documentation.

Install the DPDP AE-RNN package:

git clone https://github.com/kamperh/dpdp_aernn.git ../dpdp_aernn

Minimal usage example: DPDP AE-RNN with DPDP CPC+K-means on Buckeye

In the sections that follow I give more complete details. This section briefly outlines the sequence of steps that should reproduce the DPDP system results on Buckeye reported in the paper. To apply the approach to other datasets you will need to work through the subsequent sections carefully, but I hope this section helps you get going.

  1. Obtain the ground truth alignments for Buckeye provided in buckeye.zip as part of this release. Extract it into data/. There should now be a data/buckeye/ directory with the alignments.

  2. Extract CPC+K-means features for Buckeye. Do this by following the steps in the CPC-big subsection below.

  3. Perform acoustic unit discovery using DPDP CPC+K-means:

    ./vq_phoneseg.py --downsample_factor 1 --dur_weight 2 \
        --input_format=txt --algorithm=dp_penalized cpc_big buckeye val
    
  4. Perform word segmentation on the discovered units using the DPDP AE-RNN:

    ./vq_wordseg.py --algorithm=dpdp_aernn \
        cpc_big buckeye val phoneseg_dp_penalized
    
  5. Evaluate the segmentation:

    ./eval_segmentation.py cpc_big buckeye val \
        wordseg_dpdp_aernn_dp_penalized
    

The result should correspond approximately to the following on the Buckeye validation data:

---------------------------------------------------------------------------
Word boundaries:
Precision: 35.80%
Recall: 36.30%
F-score: 36.05%
OS: 1.40%
R-value: 45.13%
---------------------------------------------------------------------------
Word token boundaries:
Precision: 23.93%
Recall: 24.23%
F-score: 24.08%
OS: 1.24%
---------------------------------------------------------------------------
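
For reference, the F-score, over-segmentation (OS) and R-value above follow from the boundary precision and recall using the standard definitions from the segmentation literature. The small helper below is an illustrative sketch (not part of this repository) that reproduces the word boundary numbers:

import math

def boundary_metrics(precision, recall):
    # Precision and recall are fractions (not percentages)
    f_score = 2 * precision * recall / (precision + recall)
    os = recall / precision - 1                       # over-segmentation
    r1 = math.sqrt((1 - recall)**2 + os**2)
    r2 = (-os + recall - 1) / math.sqrt(2)
    r_value = 1 - (abs(r1) + abs(r2)) / 2
    return f_score, os, r_value

# Word boundary scores above: F-score 36.05%, OS 1.40%, R-value 45.13%
print(boundary_metrics(0.3580, 0.3630))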

Example encodings: CPC-big features on Buckeye

Install the ZeroSpeech 2021 baseline system from my fork by following the steps in the installation section of the readme. Make sure that vqwordseg/ (this repository) and zerospeech2021_baseline/ are in the same directory.

From the vqwordseg/ directory, move to the ZeroSpeech 2021 directory:

cd ../zerospeech2021_baseline/

Extract individual Buckeye wav files:

./get_buckeye_wavs.py ../datasets/buckeye/

The argument should point to your local copy of Buckeye.

Encode the Buckeye data:

conda activate zerospeech2021_baseline
./encode.py wav/buckeye/val/ exp/buckeye/val/
./encode.py wav/buckeye/test/ exp/buckeye/test/

Move back and deactivate the environment:

cd ../vqwordseg/
conda deactivate

Dataset format and directory structure

This code should be usable with any dataset, provided that alignments and VQ encodings are available.

For evaluation you need the ground truth phone and (optionally) word boundaries. These should be stored in the directories data/<dataset>/phone_intervals/ and data/<dataset>/word_intervals/ using the following filename format:

<speaker>_<utterance_id>_<start_frame>-<end_frame>.txt

E.g., data/buckeye/phone_intervals/s01_01a_003222-003256.txt could consist of:

0 5 hh
5 10 iy
10 15 jh
15 19 ih
19 27 s
27 34 s
34 46 iy
46 54 m
54 65 z
65 69 l
69 78 ay
78 88 k
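
To make the format concrete, here is a small sketch (not part of the repository) of how such an interval file can be read. The start and end values are frame indices, so converting them to seconds depends on the frame rate of your features:

def read_intervals(fn):
    """Read "<start> <end> <label>" lines from an alignment file."""
    intervals = []
    with open(fn) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            start, end, label = parts
            intervals.append((int(start), int(end), label))
    return intervals

intervals = read_intervals(
    "data/buckeye/phone_intervals/s01_01a_003222-003256.txt")
phone_boundaries = [end for _, end, _ in intervals]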

The duration-penalized dynamic programming (DPDP) algorithms operate on the output of vector quantized (VQ) models. The (pre-)quantized representations and code indices should be provided in the exp/ directory. These are used as input to the VQ-segmentation algorithms; the segmented output is also produced in exp/.

As an example, the directory exp/vqcpc/buckeye/ should contain a file embedding.npy, which is the codebook matrix for a VQ-CPC model trained on Buckeye. This matrix will have the shape [n_codes, code_dim]. The directory exp/vqcpc/buckeye/val/ needs to contain at least the following subdirectories for the encoded validation set:

  • prequant/
  • indices/

The prequant/ directory contains the encodings from the VQ model before quantization. These encodings are given as text files with an embedding per line, e.g. the first three lines of prequant/s01_01a_003222-003256.txt could be:

 0.1601707935333252 -0.0403369292616844  0.4687763750553131 ...
 0.4489639401435852  1.3353070020675659  1.0353083610534668 ...
-1.0552909374237061  0.6382007002830505  4.5256714820861816 ...

The indices/ directory contains the code indices to which the pre-quantized embeddings are actually mapped, i.e. which of the codes in embedding.npy is closest (under some metric) to each pre-quantized embedding. The code indices are again given as text files, with each index on a new line, e.g. the first three lines of indices/s01_01a_003222-003256.txt could be:

423
381
119
...
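
The relationship between embedding.npy, prequant/ and indices/ can be checked with a short sketch like the following (Euclidean distance is an assumption here; the metric depends on the VQ model that produced the codes):

import numpy as np

codebook = np.load("exp/vqcpc/buckeye/embedding.npy")     # [n_codes, code_dim]
z = np.loadtxt("exp/vqcpc/buckeye/val/prequant/s01_01a_003222-003256.txt")

# Nearest code per pre-quantized frame
dists = np.linalg.norm(z[:, None, :] - codebook[None, :, :], axis=-1)
indices = dists.argmin(axis=1)

# Should match the corresponding file in indices/
print(indices[:3])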

Any VQ model can be used. In the preceding section I gave an example of using CPC-big with K-means; in the section below I give an example of how VQ-VAE and VQ-CPC can be used to obtain codes for the Buckeye dataset. DPDP segmentation is described in the subsequent section.

Example encodings: VQ-VAE and VQ-CPC on Buckeye

You can obtain the VQ input representations from any model, as long as they are stored in the file format indicated above. As an example, here I describe how I did it for the Buckeye data.

First the following repositories need to be installed with their dependencies:

  • VectorQuantizedCPC
  • VectorQuantizedVAE

If the dependencies are satisfied, these packages can be installed locally by running ./install_local.sh.

Change directory to ../VectorQuantizedCPC and then perform the following steps there. Pre-process audio and extract log-Mel spectrograms:

./preprocess.py in_dir=../datasets/buckeye/ dataset=buckeye

Encode the data and write it to the vqwordseg/exp/ directory. This should be performed for all splits (train, val and test):

./encode.py checkpoint=checkpoints/cpc/english2019/model.ckpt-22000.pt \
    split=val \
    save_indices=True \
    save_auxiliary=True \
    save_embedding=../vqwordseg/exp/vqcpc/buckeye/embedding.npy \
    out_dir=../vqwordseg/exp/vqcpc/buckeye/val/ \
    dataset=buckeye

Change directory to ../VectorQuantizedVAE and then run the following there. The audio can be pre-processed again (as above), or alternatively you can simply link to the audio from VectorQuantizedCPC/:

ln -s ../VectorQuantizedCPC/datasets/ .

Encode the data and write it to the vqwordseg/exp/ directory. This should be performed for all splits (train, val and test):

# Buckeye
./encode.py checkpoint=checkpoints/2019english/model.ckpt-500000.pt \
    split=train \
    save_indices=True \
    save_auxiliary=True \
    save_embedding=../vqwordseg/exp/vqvae/buckeye/embedding.npy \
    out_dir=../vqwordseg/exp/vqvae/buckeye/train/ \
    dataset=buckeye

You can delete all the created auxiliary_embedding1/ and codes/ directories since these are not used for segmentation.

Phone segmentation
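
The commands below use duration-penalized dynamic programming over the VQ model outputs, where the --dur_weight flag controls how strongly short segments are penalized. As a rough illustration (this sketch is not the repository's implementation; it assumes squared Euclidean segment costs and a constant cost per segment, whereas --dur_weight_func selects richer duration penalties), duration-penalized segmentation can be pictured as follows: a larger duration weight gives fewer, longer units.

import numpy as np

def dpdp_segment(z, codebook, dur_weight=2.0):
    """Toy duration-penalized DP segmentation: returns segment end frames.

    z: pre-quantized features [n_frames, dim]; codebook: [n_codes, dim].
    Each segment pays the summed squared distance of its frames to its best
    single code, plus a constant dur_weight per segment.
    """
    n_frames = z.shape[0]
    dists = ((z[:, None, :] - codebook[None, :, :])**2).sum(-1)  # [n_frames, n_codes]
    cum = np.vstack([np.zeros(dists.shape[1]), np.cumsum(dists, axis=0)])

    alpha = np.full(n_frames + 1, np.inf)  # best cost of segmenting frames < t
    alpha[0] = 0.0
    back = np.zeros(n_frames + 1, dtype=int)
    for t in range(1, n_frames + 1):
        for s in range(t):
            cost = alpha[s] + (cum[t] - cum[s]).min() + dur_weight
            if cost < alpha[t]:
                alpha[t], back[t] = cost, s

    ends, t = [], n_frames
    while t > 0:
        ends.append(t)
        t = back[t]
    return sorted(ends)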

DP penalized segmentation:

# Buckeye (GMM)
./vq_phoneseg.py --downsample_factor 1 --input_format=npy \
    --algorithm=dp_penalized --dur_weight 0.001 \
    gmm buckeye val --output_tag phoneseg_merge

# Buckeye (VQ-CPC)
./vq_phoneseg.py --input_format=txt --algorithm=dp_penalized \
    vqcpc buckeye val

# Buckeye (VQ-VAE)
./vq_phoneseg.py vqvae buckeye val

# Buckeye (CPC-big)
./vq_phoneseg.py --downsample_factor 1 --dur_weight 2 --input_format=txt \
    --algorithm=dp_penalized cpc_big buckeye val

# Buckeye (CPC-big) HSMM
./vq_phoneseg.py --algorithm dp_penalized_hsmm --downsample_factor 1 \
    --dur_weight 1.0 --model_eos --dur_weight_func neg_log_gamma \
    --output_tag=phoneseg_hsmm_tune cpc_big buckeye val

# Buckeye Felix split (CPC-big) HSMM
./vq_phoneseg.py --algorithm dp_penalized_hsmm --downsample_factor 1 \
    --dur_weight 1.0 --model_eos --dur_weight_func neg_log_gamma \
    --output_tag=phoneseg_hsmm_tune cpc_big buckeye_felix test

# Xitsonga (CPC-big)
./vq_phoneseg.py --downsample_factor 1 --dur_weight 2 --input_format=txt \
    --algorithm=dp_penalized cpc_big xitsonga train

# Buckeye (XLSR)
./vq_phoneseg.py --downsample_factor 2 --dur_weight 2500 \
    --input_format=npy --algorithm=dp_penalized xlsr buckeye val

# Buckeye (ResDAVEnet-VQ)
./vq_phoneseg.py --downsample_factor 2 --dur_weight 3 --input_format=txt \
    --algorithm=dp_penalized resdavenet_vq buckeye val

# Buckeye (ResDAVEnet-VQ3)
./vq_phoneseg.py --downsample_factor 4 --dur_weight 0.001 \
    --input_format=txt --algorithm=dp_penalized resdavenet_vq_quant3 \
    buckeye val --output_tag=phoneseg_merge

# Buckeye Felix split (VQ-VAE)
./vq_phoneseg.py --output_tag=phoneseg_dp_penalized \
    vqvae buckeye_felix test

# Buckeye Felix split (CPC-big)
./vq_phoneseg.py  --downsample_factor 1 --dur_weight 2 \
    --output_tag=phoneseg_dp_penalized_tune cpc_big buckeye_felix val

# Buckeye Felix split (VQ-VAE) with Poisson duration prior
./vq_phoneseg.py --output_tag=phoneseg_dp_penalized_poisson \
    --dur_weight_func neg_log_poisson --dur_weight 2 \
    vqvae buckeye_felix val

# Buckeye (VQ-VAE) with Gamma duration prior
./vq_phoneseg.py --output_tag=phoneseg_dp_penalized_gamma \
    --dur_weight_func neg_log_gamma --dur_weight 15 vqvae buckeye val

# ZeroSpeech'17 English (CPC-big)
./vq_phoneseg.py --downsample_factor 1 --dur_weight 2 --input_format=txt \
    --algorithm=dp_penalized cpc_big zs2017_en train

# ZeroSpeech'17 French (CPC-big)
./vq_phoneseg.py --downsample_factor 1 --dur_weight 2 --input_format=txt \
    --algorithm=dp_penalized cpc_big zs2017_fr train

# ZeroSpeech'17 Mandarin (CPC-big)
./vq_phoneseg.py --downsample_factor 1 --dur_weight 2 --input_format=txt \
    --algorithm=dp_penalized cpc_big zs2017_zh train

# ZeroSpeech'17 French (XLSR)
./vq_phoneseg.py --downsample_factor 2 --dur_weight 1500 \
    --input_format=npy --algorithm=dp_penalized xlsr zs2017_fr train

# ZeroSpeech'17 Mandarin (XLSR)
./vq_phoneseg.py --downsample_factor 2 --dur_weight 2500 \
    --input_format=npy --algorithm=dp_penalized xlsr zs2017_zh train

# ZeroSpeech'17 Lang2 (CPC-big)
./vq_phoneseg.py --downsample_factor 1 --dur_weight 2 --input_format=txt \
    --algorithm=dp_penalized cpc_big zs2017_lang2 train

DP penalized N-seg. segmentation:

# Buckeye Felix split (VQ-VAE)
./vq_phoneseg.py --algorithm=dp_penalized_n_seg \
    --n_frames_per_segment=3 --n_min_segments=3 vqvae buckeye_felix test

Evaluate segmentation:

# Buckeye (VQ-VAE)
./eval_segmentation.py vqvae buckeye val phoneseg_dp_penalized_n_seg

# Buckeye (CPC-big)
./eval_segmentation.py cpc_big buckeye val phoneseg_dp_penalized

Word segmentation

Word segmentation is performed on the segmented phone sequences.

Adaptor grammar word segmentation:

conda activate wordseg
# Buckeye (VQ-VAE)
./vq_wordseg.py --algorithm=ag vqvae buckeye val phoneseg_dp_penalized

# Buckeye (CPC-big)
./vq_wordseg.py --algorithm=ag cpc_big buckeye val phoneseg_dp_penalized

DPDP AE-RNN word segmentation:

# Buckeye (GMM)
./vq_wordseg.py --dur_weight=6 --algorithm=dpdp_aernn \
    gmm buckeye val phoneseg_dp_penalized

# Buckeye (CPC-big)
./vq_wordseg.py --algorithm=dpdp_aernn \
    cpc_big buckeye val phoneseg_dp_penalized

Evaluate the segmentation:

# Buckeye (VQ-VAE)
./eval_segmentation.py vqvae buckeye val wordseg_ag_dp_penalized

# Buckeye (CPC-big)
./eval_segmentation.py cpc_big buckeye val wordseg_ag_dp_penalized

Evaluate the segmentation with the ZeroSpeech tools:

./intervals_to_zs.py cpc_big zs2017_zh train wordseg_dpdp_aernn_dp_penalized
cd ../zerospeech2017_eval/
ln -s \
    /media/kamperh/endgame/projects/stellenbosch/vqseg/vqwordseg/exp/cpc_big/zs2017_zh/train/wordseg_dpdp_aernn_dp_penalized/clusters.txt \
    2017/track2/mandarin.txt
conda activate zerospeech2020_updated
zerospeech2020-evaluate 2017-track2 . -l mandarin -o mandarin.json

Analysis

Print the word clusters:

./clusters_print.py cpc_big buckeye val wordseg_ag_dp_penalized

Listen to segmented codes:

./cluster_wav.py vqvae buckeye val phoneseg_dp_penalized 343
./cluster_wav.py vqvae buckeye val wordseg_tp_dp_penalized 486_
./cluster_wav.py cpc_big buckeye val phoneseg_dp_penalized 50

This requires sox, and you will need to change the path at the beginning of cluster_wav.py. For ZeroSpeech'17 data, use cluster_wav_zs2017.py instead.

Synthesize an utterance:

./indices_to_txt.py vqvae buckeye val phoneseg_dp_penalized \
    s18_03a_025476-025541
cd ../VectorQuantizedVAE
./synthesize_codes.py checkpoints/2019english/model.ckpt-500000.pt \
    ../vqwordseg/s18_03a_025476-025541.txt
cd -

Complete example on ZeroSpeech data

An example of phone and word segmentation on the surprise language.

Encode data:

cd ../zerospeech2021_baseline
conda activate pytorch
./get_wavs.py path_to_data/datasets/zerospeech2020/2020/2017/ \
    zs2017_lang1 train

conda activate zerospeech2021_baseline
./encode.py wav/zs2017_lang1/train/ exp/zs2017_lang1/train/

Phone segmentation:

cd ../vqwordseg
conda activate pytorch
# Create links in exp/cpc_big/
./vq_phoneseg.py --downsample_factor 1 --dur_weight 2 --input_format=txt \
    --algorithm=dp_penalized cpc_big zs2017_lang1 train
./cluster_wav_zs2017.py cpc_big zs2017_lang1 train phoneseg_dp_penalized 3

Word segmentation:

./vq_wordseg.py --algorithm=dpdp_aernn cpc_big zs2017_lang1 train \
    phoneseg_dp_penalized
./cluster_wav_zs2017.py cpc_big zs2017_lang1 train \
    wordseg_dpdp_aernn_dp_penalized 33_10_11_14_1_34_

Convert to ZeroSpeech format:

./intervals_to_zs.py cpc_big zs2017_lang1 train \
    wordseg_dpdp_aernn_dp_penalized

About the Buckeye data splits

The particular split of Buckeye that I use in this repository is a legacy split with a somewhat complicated history. But in short, the test set is exactly the same one used in the ZeroSpeech 2015 challenge. The remaining speakers were then used for a validation set and an additional held-out test set. This additional test set has the same number of speakers as the validation set, but most papers just report results on the ZeroSpeech 2015 test set.

The result is the following split of Buckeye, according to speaker:

  • Train (English1 in my thesis, devpart1 in other repos): s02, s03, s04, s05, s06, s08, s10, s11, s12, s13, s16, s38.
  • Validation (devpart2 in other repos): s17, s18, s19, s22, s34, s37, s39, s40.
  • Test (English2 in my thesis, ZS in other repos): s01, s20, s23, s24, s25, s26, s27, s29, s30, s31, s32, s33.
  • Additional test: s07, s09, s14, s15, s21, s28, s35, s36.

I first used this split in (Kamper et al., 2017) and since then in a number of follow-up papers. Others have also used it, e.g. (Drexler and Glass, 2017), (Bhati et al., 2021) and (Peng and Harwath, 2022) (https://arxiv.org/abs/2203.15081).

Sets used in this repo. I only make use of the validation and test sets above, although features are also extracted for the training set. See the experimental setup section of the paper.

The Kreuk split. Note that Kreuk et al. (2020) use a different split, which is also used by others. So in the section of the paper where I compare to their approach, I use their split:

  • Train: All Buckeye speakers not below.
  • Validation: s25, s36, s39, s40.
  • Test: s03, s07, s31, s34.

This split is not included in this repository, since it made things too cluttered. Note that in the paper I again don't use the Kreuk training set: I only report results on the test data when comparing to their models.
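
For convenience, here are the same splits written out as Python dictionaries (taken directly from the lists above; the Kreuk training set is every speaker not in its validation or test sets):

BUCKEYE_SPLIT = {
    "train": ["s02", "s03", "s04", "s05", "s06", "s08",
              "s10", "s11", "s12", "s13", "s16", "s38"],
    "val": ["s17", "s18", "s19", "s22", "s34", "s37", "s39", "s40"],
    "test": ["s01", "s20", "s23", "s24", "s25", "s26",
             "s27", "s29", "s30", "s31", "s32", "s33"],
    "test_extra": ["s07", "s09", "s14", "s15", "s21", "s28", "s35", "s36"],
}

KREUK_SPLIT = {
    "val": ["s25", "s36", "s39", "s40"],
    "test": ["s03", "s07", "s31", "s34"],
}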

Reducing a codebook using clustering

If a codebook is very large, the number of codes can be reduced by clustering. The reduced codebook should be saved in a new model directory, and links to the original pre-quantized features should be created.

As an example, in cluster_codebook.ipynb, the ResDAVEnet-VQ codebook is loaded and reduced to 50 codes. The original codebook had 1024 codes, but only 498 of these were actually used; these are clustered down to 50. The resulting codebook is saved to exp/resdavenet_vq_clust50/buckeye/embedding.npy. The pre-quantized features are linked to the original version in exp/resdavenet_vq/. The indices from the original model shouldn't be linked, since these don't match the new codebook (but an indices file isn't necessary for running many of the phone segmentation algorithms).
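
The notebook is the reference, but the gist is something like the following sketch (assumptions: scikit-learn K-means, and that the used codes are found by scanning the original model's indices/ files):

import glob
import numpy as np
from sklearn.cluster import KMeans

codebook = np.load("exp/resdavenet_vq/buckeye/embedding.npy")    # [1024, dim]
indices = [np.loadtxt(fn, dtype=int).reshape(-1)
           for fn in glob.glob("exp/resdavenet_vq/buckeye/val/indices/*.txt")]
used = np.unique(np.concatenate(indices))                        # e.g. 498 codes

kmeans = KMeans(n_clusters=50, random_state=0).fit(codebook[used])
np.save("exp/resdavenet_vq_clust50/buckeye/embedding.npy",
        kmeans.cluster_centers_)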

Old work-flow (deprecated)

  1. Extract CPC+K-means features in ../zerospeech2021_baseline/.
  2. Perform phone segmentation here using vq_phoneseg.py.
  3. Move to ../seg_aernn/notebooks/ and perform word segmentation.
  4. Move back here and evaluate the segmentation using eval_segmentation.py.
  5. For ZeroSpeech systems, the evaluation is done in ../zerospeech2017_eval/.

Disclaimer

The code provided here is not pretty. But research should be reproducible. I provide no guarantees with the code, but please let me know if you have any problems, find bugs or have general comments.
