Korean Sentence Embedding Repository

Overview

Korean-Sentence-Embedding

🍭 Korean sentence embedding repository. You can download the pre-trained models and run inference right away, and the repository also provides an environment for training your own models.

Baseline Models

Baseline models used for Korean sentence embedding: KLUE PLMs

| Model | Embedding size | Hidden size | # Layers | # Heads |
|---|---|---|---|---|
| KLUE-BERT-base | 768 | 768 | 12 | 12 |
| KLUE-RoBERTa-base | 768 | 768 | 12 | 12 |

NOTE: All the pretrained models are available on the Hugging Face Model Hub. See https://huggingface.co/klue.
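
The KLUE baselines load like any other Hub model. Below is a minimal sanity-check sketch, assuming transformers and torch are installed; the mean pooling shown is only an illustrative choice, not necessarily the pooling used by this repository's released checkpoints.

import torch
from transformers import AutoModel, AutoTokenizer

# Load the KLUE-BERT baseline listed in the table above.
tokenizer = AutoTokenizer.from_pretrained('klue/bert-base')
model = AutoModel.from_pretrained('klue/bert-base')

# '한 남자가 음식을 먹는다.' = "A man is eating food."
inputs = tokenizer(['한 남자가 음식을 먹는다.'], return_tensors='pt', padding=True)
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state                 # [1, seq_len, 768]
mask = inputs['attention_mask'].unsqueeze(-1).float()           # [1, seq_len, 1]
sentence_embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean-pooled [1, 768]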

How to start

  • Get the datasets used for training and testing.
bash get_model_dataset.sh
  • If you just want to run inference, download the pre-trained checkpoints and then run one of the downstream tasks.
bash get_model_checkpoint.sh
cd KoSBERT/
python SemanticSearch.py

Available Models

  1. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks [SBERT]-[EMNLP 2019]
  2. SimCSE: Simple Contrastive Learning of Sentence Embeddings [SimCSE]-[EMNLP 2021]

KoSentenceBERT

  • 🤗 Model Training
  • Dataset
    • Train: snli_1.0_train.ko.tsv (first phase: training on NLI), then sts-train.tsv (second phase: continued training on STS); see the sketch after this list
    • Valid: sts-dev.tsv
    • Test: sts-test.tsv
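
A minimal sketch of the two-phase recipe above, written with the sentence-transformers training API and the KLUE-BERT baseline; the toy in-memory examples stand in for the TSV files listed above, and this is not the repository's exact training script.

from sentence_transformers import SentenceTransformer, models, losses, InputExample
from torch.utils.data import DataLoader

# Build a bi-encoder on top of the KLUE-BERT baseline (mean pooling).
word_emb = models.Transformer('klue/bert-base', max_seq_length=128)
pooling = models.Pooling(word_emb.get_word_embedding_dimension())
model = SentenceTransformer(modules=[word_emb, pooling])

# Phase 1: NLI pairs with 3-way labels (entailment / neutral / contradiction).
nli_examples = [InputExample(texts=['premise ...', 'hypothesis ...'], label=0)]  # toy example
nli_loader = DataLoader(nli_examples, shuffle=True, batch_size=64)
nli_loss = losses.SoftmaxLoss(model,
                              sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
                              num_labels=3)
model.fit(train_objectives=[(nli_loader, nli_loss)], epochs=1)

# Phase 2: continue training on STS pairs with similarity labels scaled to [0, 1].
sts_examples = [InputExample(texts=['sentence A', 'sentence B'], label=0.8)]  # toy example
sts_loader = DataLoader(sts_examples, shuffle=True, batch_size=64)
sts_loss = losses.CosineSimilarityLoss(model)
model.fit(train_objectives=[(sts_loader, sts_loss)], epochs=4)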

KoSimCSE

  • 🤗 Model Training
  • Dataset
    • Train: snli_1.0_train.ko.tsv + multinli.train.ko.tsv (see the loss sketch after this list)
    • Valid: sts-dev.tsv
    • Test: sts-test.tsv
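
For reference, here is a minimal sketch of the supervised SimCSE objective behind KoSimCSE, where the NLI premise is the anchor, the entailment hypothesis is the positive, and the contradiction hypothesis is the hard negative. The function name, shapes, and temperature value are illustrative assumptions, not the repository's code.

import torch
import torch.nn.functional as F

def supervised_simcse_loss(anchor, positive, negative, temperature=0.05):
    # anchor / positive / negative: [batch, dim] sentence embeddings.
    pos_sim = F.cosine_similarity(anchor.unsqueeze(1), positive.unsqueeze(0), dim=-1)  # [batch, batch]
    neg_sim = F.cosine_similarity(anchor.unsqueeze(1), negative.unsqueeze(0), dim=-1)  # [batch, batch]
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature                        # [batch, 2*batch]
    labels = torch.arange(anchor.size(0), device=anchor.device)  # the i-th positive is the match
    return F.cross_entropy(logits, labels)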

Performance

  • Semantic Textual Similarity test-set results (a sketch of how these correlations are computed follows the table)
| Model | Cosine Pearson | Cosine Spearman | Euclidean Pearson | Euclidean Spearman | Manhattan Pearson | Manhattan Spearman | Dot Pearson | Dot Spearman |
|---|---|---|---|---|---|---|---|---|
| KoSBERT†-SKT | 78.81 | 78.47 | 77.68 | 77.78 | 77.71 | 77.83 | 75.75 | 75.22 |
| KoSBERT-base | 82.13 | 82.25 | 80.67 | 80.75 | 80.69 | 80.78 | 77.96 | 77.90 |
| KoSRoBERTa-base | 80.70 | 81.03 | 80.97 | 81.06 | 80.84 | 80.97 | 79.20 | 78.93 |
| KoSimCSE-BERT†-SKT | 82.12 | 82.56 | 81.84 | 81.63 | 81.99 | 81.74 | 79.55 | 79.19 |
| KoSimCSE-BERT-base | 82.73 | 83.51 | 82.32 | 82.78 | 82.43 | 82.88 | 77.86 | 76.70 |
| KoSimCSE-RoBERTa-base | 83.64 | 84.05 | 83.32 | 83.84 | 83.33 | 83.79 | 80.92 | 79.84 |
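
A minimal sketch of how numbers like the cosine Pearson/Spearman columns are typically obtained: embed both sides of each STS test pair, score them with the similarity function, and correlate the scores against the gold labels. The use of scipy here is an assumption; the repository may rely on sentence-transformers' built-in evaluator instead.

import numpy as np
from scipy.stats import pearsonr, spearmanr

def cosine_sts_correlations(emb_a, emb_b, gold_scores):
    # emb_a, emb_b: [n, dim] arrays of sentence embeddings; gold_scores: [n] human ratings.
    cos = np.sum(emb_a * emb_b, axis=1) / (
        np.linalg.norm(emb_a, axis=1) * np.linalg.norm(emb_b, axis=1))
    return 100 * pearsonr(cos, gold_scores)[0], 100 * spearmanr(cos, gold_scores)[0]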

Downstream Tasks

  • KoSBERT: Semantic Search, Clustering
python SemanticSearch.py
python Clustering.py
  • KoSimCSE: Semantic Search
python SemanticSearch.py

Semantic Search (KoSBERT)

from sentence_transformers import SentenceTransformer, util
import numpy as np

model_path = '../Checkpoint/KoSBERT/kosbert-klue-bert-base'

embedder = SentenceTransformer(model_path)

# Corpus with example sentences
corpus = ['한 남자가 음식을 먹는다.',              # A man is eating food.
          '한 남자가 빵 한 조각을 먹는다.',        # A man is eating a piece of bread.
          '그 여자가 아이를 돌본다.',              # The woman is caring for a child.
          '한 남자가 말을 탄다.',                  # A man is riding a horse.
          '한 여자가 바이올린을 연주한다.',        # A woman is playing the violin.
          '두 남자가 수레를 숲 속으로 밀었다.',    # Two men pushed a cart into the woods.
          '한 남자가 담으로 싸인 땅에서 백마를 타고 있다.',  # A man is riding a white horse on an enclosed ground.
          '원숭이 한 마리가 드럼을 연주한다.',     # A monkey is playing the drums.
          '치타 한 마리가 먹이 뒤에서 달리고 있다.']  # A cheetah is running behind its prey.

corpus_embeddings = embedder.encode(corpus, convert_to_tensor=True)

# Query sentences:
queries = ['한 남자가 파스타를 먹는다.',                        # A man is eating pasta.
           '고릴라 의상을 입은 누군가가 드럼을 연주하고 있다.',  # Someone in a gorilla costume is playing drums.
           '치타가 들판을 가로 질러 먹이를 쫓는다.']             # A cheetah chases prey across a field.

# Find the closest 5 sentences of the corpus for each query sentence based on cosine similarity
top_k = 5
for query in queries:
    query_embedding = embedder.encode(query, convert_to_tensor=True)
    cos_scores = util.pytorch_cos_sim(query_embedding, corpus_embeddings)[0]
    cos_scores = cos_scores.cpu()

    # We use np.argpartition to only partially sort for the top_k results
    top_results = np.argpartition(-cos_scores, range(top_k))[0:top_k]

    print("\n\n======================\n\n")
    print("Query:", query)
    print("\nTop 5 most similar sentences in corpus:")

    for idx in top_results[0:top_k]:
        print(corpus[idx].strip(), "(Score: %.4f)" % (cos_scores[idx]))
  • Results are as follows:

Query: 한 남자가 파스타를 먹는다.

Top 5 most similar sentences in corpus:
한 남자가 음식을 먹는다. (Score: 0.6141)
한 남자가 빵 한 조각을 먹는다. (Score: 0.5952)
한 남자가 말을 탄다. (Score: 0.1231)
한 남자가 담으로 싸인 땅에서 백마를 타고 있다. (Score: 0.0752)
두 남자가 수레를 숲 속으로 밀었다. (Score: 0.0486)


======================


Query: 고릴라 의상을 입은 누군가가 드럼을 연주하고 있다.

Top 5 most similar sentences in corpus:
원숭이 한 마리가 드럼을 연주한다. (Score: 0.6656)
치타 한 마리가 먹이 뒤에서 달리고 있다. (Score: 0.2988)
한 여자가 바이올린을 연주한다. (Score: 0.1566)
한 남자가 말을 탄다. (Score: 0.1112)
한 남자가 담으로 싸인 땅에서 백마를 타고 있다. (Score: 0.0262)


======================


Query: 치타가 들판을 가로 질러 먹이를 쫓는다.

Top 5 most similar sentences in corpus:
치타 한 마리가 먹이 뒤에서 달리고 있다. (Score: 0.7570)
두 남자가 수레를 숲 속으로 밀었다. (Score: 0.3658)
원숭이 한 마리가 드럼을 연주한다. (Score: 0.3583)
한 남자가 말을 탄다. (Score: 0.0505)
그 여자가 아이를 돌본다. (Score: -0.0087)

Clustering (KoSBERT)

from sentence_transformers import SentenceTransformer

model_path = '../Checkpoint/KoSBERT/kosbert-klue-bert-base'

embedder = SentenceTransformer(model_path)

# Corpus with example sentences
corpus = ['한 남자가 음식을 먹는다.',
          '한 남자가 빵 한 조각을 먹는다.',
          '그 여자가 아이를 돌본다.',
          '한 남자가 말을 탄다.',
          '한 여자가 바이올린을 연주한다.',
          '두 남자가 수레를 숲 속으로 밀었다.',
          '한 남자가 담으로 싸인 땅에서 백마를 타고 있다.',
          '원숭이 한 마리가 드럼을 연주한다.',
          '치타 한 마리가 먹이 뒤에서 달리고 있다.',
          '한 남자가 파스타를 먹는다.',
          '고릴라 의상을 입은 누군가가 드럼을 연주하고 있다.',
          '치타가 들판을 가로 질러 먹이를 쫓는다.']

corpus_embeddings = embedder.encode(corpus)

# Then, we perform k-means clustering using sklearn:
from sklearn.cluster import KMeans

num_clusters = 5
clustering_model = KMeans(n_clusters=num_clusters)
clustering_model.fit(corpus_embeddings)
cluster_assignment = clustering_model.labels_

clustered_sentences = [[] for i in range(num_clusters)]
for sentence_id, cluster_id in enumerate(cluster_assignment):
    clustered_sentences[cluster_id].append(corpus[sentence_id])

for i, cluster in enumerate(clustered_sentences):
    print("Cluster ", i+1)
    print(cluster)
    print("")
  • Results are as follows:
Cluster  1
['한 남자가 음식을 먹는다.', '한 남자가 빵 한 조각을 먹는다.', '한 남자가 파스타를 먹는다.']

Cluster  2
['원숭이 한 마리가 드럼을 연주한다.', '고릴라 의상을 입은 누군가가 드럼을 연주하고 있다.']

Cluster  3
['한 남자가 말을 탄다.', '두 남자가 수레를 숲 속으로 밀었다.', '한 남자가 담으로 싸인 땅에서 백마를 타고 있다.']

Cluster  4
['치타 한 마리가 먹이 뒤에서 달리고 있다.', '치타가 들판을 가로 질러 먹이를 쫓는다.']

Cluster  5
['그 여자가 아이를 돌본다.', '한 여자가 바이올린을 연주한다.']

References

@misc{park2021klue,
    title={KLUE: Korean Language Understanding Evaluation},
    author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jung-Woo Ha and Kyunghyun Cho},
    year={2021},
    eprint={2105.09680},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
@inproceedings{gao2021simcse,
   title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
   author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
   booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
   year={2021}
}
@article{ham2020kornli,
  title={KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding},
  author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon},
  journal={arXiv preprint arXiv:2004.03289},
  year={2020}
}
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "http://arxiv.org/abs/1908.10084",
}