Overview

tfds-korean

A collection of Korean text datasets, ready to use with TensorFlow Datasets.

Dataset Catalog | PyPI


Usage

Installation

pip install tfds-korean

Loading a dataset

import tensorflow_datasets as tfds
import tfds_korean.nsmc # register nsmc dataset

ds = tfds.load('nsmc')

train_ds = ds['train'].batch(32)
test_ds = ds['test'].batch(128)

# define model
# ....
# ....

model.fit(train_ds)
model.evaluate(test_ds)
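
The model placeholder above can be filled in many ways. Below is a minimal end-to-end sketch using a small Keras classifier; the feature keys 'document' and 'label' are assumptions, so check the Dataset Catalog page for the exact names:

import tensorflow as tf
import tensorflow_datasets as tfds
import tfds_korean.nsmc  # register nsmc dataset

ds = tfds.load('nsmc')
train_ds = ds['train'].batch(32)
test_ds = ds['test'].batch(128)

# Feature keys 'document' and 'label' are assumptions; see the
# Dataset Catalog page for the exact names.
def to_xy(example):
    return example['document'], example['label']

# A small bag-of-words classifier over the raw text.
vectorize = tf.keras.layers.TextVectorization(max_tokens=20000, output_sequence_length=64)
vectorize.adapt(train_ds.map(lambda example: example['document']))

model = tf.keras.Sequential([
    vectorize,
    tf.keras.layers.Embedding(20000, 64),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

model.fit(train_ds.map(to_xy))
model.evaluate(test_ds.map(to_xy))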

See the Dataset Catalog page for the list of available datasets and the details of each one.

Examples

Licenses

The license for this repository and the licenses for the individual datasets apply separately. Please check each dataset's license and website before using it, and note that this library does not host or distribute any datasets.

Comments
  • [Dataset Request] sae4k

    Dataset Information

    • Dataset Name:
    • Preferred code name (e.g. korean_chatbot_qa_data): sae4k
    • Dataset description:
    • Homepage: https://github.com/warnikchow/sae4k
    • Citation:

    Additional Context

    dataset request 
    opened by jeongukjae 2
  • [Dataset Request] namuwiki corpus

    Dataset Information

    • Dataset Name: namuwiki corpus
    • Preferred code name (e.g. korean_chatbot_qa_data):
    • Dataset description:
    • Homepage: https://github.com/jeongukjae/namuwiki-corpus
    • Citation:
    • License:

    Additional Context

    A Namuwiki corpus already segmented at the sentence level.

    dataset request 
    opened by jeongukjae 1
  • [Dataset Request] korean wikipedia corpus

    Dataset Information

    • Dataset Name: Korean Wikipedia corpus
    • Preferred code name (e.g. korean_chatbot_qa_data): korean_wikipedia_corpus
    • Dataset description:
    • Homepage: https://github.com/jeongukjae/korean-wikipedia-corpus
    • Citation:
    • License:

    Additional Context

    kowikitext is good enough, but it's inconvenient to use at the sentence level, so I generated a corpus from the Korean Wikipedia dump that is already split into sentences (segmented with kss).

    FeaturesDict({
        'content': Sequence(Text(shape=(), dtype=tf.string)),
        'title': Text(shape=(), dtype=tf.string),
    })
    

    If 'content' is shaped like this, so that it holds a tensor of TensorSpec(shape=[None], dtype=tf.string), it should be convenient for distillation or sentence-level unsupervised learning.
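
    With that feature spec, turning the document stream into a stream of sentences is a one-liner. A minimal sketch (the dataset name matches the 0.3.0 release notes; the 'train' split name is an assumption):

    import tensorflow as tf
    import tensorflow_datasets as tfds
    import tfds_korean.korean_wikipedia_corpus  # register the dataset

    # Each example's 'content' is a [None]-shaped string tensor (one sentence
    # per element), so flat_map flattens documents into sentences.
    ds = tfds.load('korean_wikipedia_corpus', split='train')
    sentences = ds.flat_map(
        lambda x: tf.data.Dataset.from_tensor_slices(x['content']))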

    dataset request before-release 
    opened by jeongukjae 1
  • [Dataset Request] KLUE

    Dataset Information

    • Dataset Name: KLUE
    • Preferred code name (e.g. korean_chatbot_qa_data): klue_dp, klue_mrc, ...
    • Dataset description:
    • Homepage:
    • Citation:
    • License:

    Additional Context

    https://github.com/KLUE-benchmark/KLUE
    https://arxiv.org/pdf/2105.09680v1.pdf

    • [x] dp @jeongukjae
    • [x] mrc @harrydrippin
    • [x] ner @jeongukjae
    • [x] nli @jeongukjae
    • [x] re @jeongukjae
    • [x] sts @jeongukjae
    • [x] wos @jeongukjae
    • [x] ynat @jeongukjae
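
    Once these tasks are registered, each one should load like any other dataset in this collection. A minimal sketch, assuming the module names follow the tfds_korean.nsmc pattern:

    import tensorflow_datasets as tfds
    import tfds_korean.klue_nli  # assumed module name, one module per KLUE task

    nli = tfds.load('klue_nli')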
    dataset request before-release 
    opened by jeongukjae 1
  • [Dataset Request] namuwikitext

    Dataset Information

    • Dataset Name: Wikitext format dataset of Namuwiki
    • Preferred code name (e.g. korean_chatbot_qa_data): namuwikitext
    • Dataset description: A wikitext-format text file built from the Namuwiki dump data. For training and evaluation, it is split per wiki page into train (99%), dev (0.5%), and test (0.5%).
    • Homepage: https://github.com/lovit/namuwikitext
    • Citation:

    Additional Context

    https://github.com/lovit/namuwikitext/issues/10

    I filed the issue above because the counts don't match the dataset sizes stated in the README, but there has been no response. For now it's probably best to add it as it appears in Korpora and fix it later.

    dataset request 
    opened by jeongukjae 1
  • [Dataset Request] KorQuAD

    Dataset Information

    • Dataset Name: KorQuAD 1.0
    • Preferred code name (e.g. korean_chatbot_qa_data): korquad_10
    • Dataset description: KorQuAD 1.0 is a dataset built for Korean machine reading comprehension. The answer to every question is a span within a paragraph of the corresponding Wikipedia article. It is constructed in the same way as the Stanford Question Answering Dataset (SQuAD) v1.0.
    • Homepage: https://korquad.github.io/KorQuad%201.0/
    • Citation:

    Dataset Information

    • Dataset Name: KorQuAD 2.0
    • Preferred code name (e.g. korean_chatbot_qa_data): korquad_20
    • Dataset description: KorQuAD 2.0 is a Korean machine reading comprehension dataset of 100,000+ question-answer pairs in total, including the 20,000+ pairs from KorQuAD 1.0. Unlike KorQuAD 1.0, answers must be found in the entire Wikipedia article rather than in one or two paragraphs. Some documents are very long, so search time must be considered, and since tables and lists are included, understanding document structure via HTML tags is also required. This dataset should enable machine reading comprehension over documents of diverse forms and lengths.
    • Homepage: https://korquad.github.io
    • Citation:

    Additional Context

    It should be fine to add only KorQuAD 1.0 for now and add 2.0 later.

    dataset request before-release 
    opened by jeongukjae 1
  • [Dataset Request] Korea Maritime & Ocean University NER dataset

    Dataset Information

    • Dataset Name: Korea Maritime & Ocean University NLP Lab NER dataset
    • Preferred code name (e.g. korean_chatbot_qa_data): kmounlp_ner
    • Dataset description: A technical report standardizing Korean named-entity definitions and labeling, together with a named-entity morpheme corpus built on that standard
    • Homepage: https://github.com/kmounlp/NER
    • Citation:

    Additional Context

    Report: https://github.com/kmounlp/NER/blob/master/NER%20Guideline%20(ver%201.0).pdf

    dataset request 
    opened by jeongukjae 1
  • Add CONTRIBUTING.md

    • [ ] An explanation of which languages the project uses. Write usage and dataset descriptions in English where possible, but wouldn't it be better to communicate in Korean for issues/PRs?
    • [ ] How to add a dataset
    • [ ] A brief description of issues/PRs/Discussions
    • [ ] A note for anyone who would like to co-maintain the project
    • [ ] An explanation of dataset licensing concerns
    documentation before-release 
    opened by jeongukjae 1
  • Document the current wikitext issues in the catalog

    https://github.com/jeongukjae/tfds-korean/issues/12#issuecomment-826358469

    For the reason above, it seems worth at least noting something like "filter before use" or "there are empty examples in the middle".
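
    A minimal sketch of that kind of filtering, assuming a 'text' feature key and a 'train' split (check the catalog page for the exact names):

    import tensorflow as tf
    import tensorflow_datasets as tfds
    import tfds_korean.kowikitext  # register kowikitext

    # Drop the empty examples mentioned above before using the data.
    ds = tfds.load('kowikitext', split='train')
    ds = ds.filter(lambda x: tf.strings.length(x['text']) > 0)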

    documentation 
    opened by jeongukjae 0
  • [Dataset Request] sci-news-sum-kr-50

    Dataset Information

    • Dataset Name:
    • Preferred code name (e.g. korean_chatbot_qa_data): sci_news_sum_kr_50
    • Dataset description:
    • Homepage: https://github.com/theeluwin/sci-news-sum-kr-50
    • Citation:

    Additional Context

    dataset request 
    opened by jeongukjae 0
  • [Dataset Request] kowikitext

    Dataset Information

    • Dataset Name: Korean wikitext
    • Preferred code name (e.g. korean_chatbot_qa_data): kowikitext
    • Dataset description: Wikitext-format Korean corpus
    • Homepage: https://github.com/lovit/kowikitext
    • Citation:

    Additional Context

    This one appears to have the same problem as #12, but for now it follows the Korpora approach. Here too, splitting on headings leaves rows like = 분류~~~ =, so exact document-level reconstruction is impossible.

    dataset request 
    opened by jeongukjae 0
  • [Dataset Request] korean_unsmile_dataset

    Dataset Information

    • Dataset Name:
    • Preferred code name (e.g. korean_chatbot_qa_data):
    • Dataset description:
    • Homepage: https://github.com/smilegate-ai/korean_unsmile_dataset
    • Citation:
    • License:

    Additional Context

    dataset request 
    opened by jeongukjae 0
  • Allow the dataset catalog builder to skip specific datasets

    Right now the catalog can only be built when every dataset exists locally, which is too heavy a burden: even the current develop branch requires keeping roughly 30GB locally.

    As long as dataset versions don't change, the catalog only needs rebuilding when the build_catalog.py script itself changes, so let's make it possible to build just a specific dataset page plus the index page. Of course, keep full-catalog builds for all datasets possible as well.
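
    A hypothetical sketch of that interface; the helper functions here are illustrative stand-ins, not the actual build_catalog.py code:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--datasets", nargs="*", default=None,
        help="dataset names to rebuild; omit to rebuild the full catalog")
    args = parser.parse_args()

    all_datasets = discover_datasets()  # hypothetical: list every registered dataset
    targets = all_datasets if args.datasets is None else [
        d for d in all_datasets if d.name in args.datasets]

    for dataset in targets:
        build_dataset_page(dataset)  # hypothetical per-dataset page builder
    build_index_page(all_datasets)  # rebuild the index page either way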

    documentation 
    opened by jeongukjae 0
  • [Dataset Request] Korean Single Speaker Speech Dataset

    Dataset Information

    • Dataset Name: Korean Single Speaker Speech Dataset
    • Preferred code name (e.g. korean_chatbot_qa_data):
    • Dataset description:
    • Homepage: https://www.kaggle.com/bryanpark/korean-single-speaker-speech-dataset
    • Citation:
    • License:

    Additional Context

    dataset request 
    opened by jeongukjae 0
  • [Dataset Request] Sejong Corpus

    Dataset Information

    • Dataset Name:
    • Preferred code name (e.g. korean_chatbot_qa_data): sejong_corpus
    • Dataset description:
    • Homepage: https://ithub.korean.go.kr/user/total/database/corpusManager.do
    • Citation:
    • License:

    Additional Context

    Sejong Corpus: https://ithub.korean.go.kr/user/total/database/corpusManager.do
    Sejong Corpus (parallel): https://ithub.korean.go.kr/user/total/database/etcManager.do

    Even though the license makes commercial use difficult, I think it's a good corpus to work with, so it seems worth adding for now.

    dataset request 
    opened by jeongukjae 0
  • [Dataset Request] kcbert

    Dataset Information

    • Dataset Name:
    • Preferred code name (e.g. korean_chatbot_qa_data): kcbert
    • Dataset description:
    • Homepage: https://github.com/Beomi/KcBERT
    • Citation:

    Additional Context

    Adding this would be extremely useful!!

    dataset request 
    opened by jeongukjae 4
  • [Dataset Request] KAIST Corpus

    Dataset Information

    • Dataset Name: kaist corpus
    • Preferred code name (e.g. korean_chatbot_qa_data): kaist_corpus
    • Dataset description:
    • Homepage: http://semanticweb.kaist.ac.kr/home/index.php/KAIST_Corpus
    • Citation:

    Additional Context

    wontfix dataset request 
    opened by jeongukjae 1
Releases (0.4.0)
  • 0.4.0 (Sep 19, 2021)

    • Update KLUE dataset to 1.1.0 https://github.com/jeongukjae/tfds-korean/commit/e954ec4550ec5db015d3f93750e6763aca5a9b48
    • Reorder ClassLabel names of NLI datasets. https://github.com/jeongukjae/tfds-korean/commit/be3e8cba7b9d537969b9c08738dd6df36b0145bc
    Source code(tar.gz)
    Source code(zip)
  • 0.3.0 (Jun 16, 2021)

    • add korean_wikipedia_corpus (https://jeongukjae.github.io/tfds-korean/datasets/korean_wikipedia_corpus.html)
    • add namuwiki_corpus (https://jeongukjae.github.io/tfds-korean/datasets/namuwiki_corpus.html)
    Source code(tar.gz)
    Source code(zip)
  • 0.2.0 (Jun 6, 2021)

    • add KLUE benchmark datasets
    • update dataset catalog (https://github.com/jeongukjae/tfds-korean/commit/eb1c72d0a716aba7326276e77e8e6f94976bb579, https://github.com/jeongukjae/tfds-korean/commit/614616b82d0bbdaecbc4ec50e0cfc67b78b646c2)
    • fix klue_ner supervised key bug (https://github.com/jeongukjae/tfds-korean/commit/10f765f01b9f3952e298395779dcf8efeefde93a)
    Source code(tar.gz)
    Source code(zip)
  • 0.1.3 (May 29, 2021)

  • 0.1.2 (May 25, 2021)

  • 0.1.1 (Apr 30, 2021)

  • 0.1.0 (Apr 29, 2021)

    • Add kowikitext and namuwikitext datasets
    • Add missing licenses and bibtex.
    • Add license section in catalog page.
    • Add example links in catalog page.
    Source code(tar.gz)
    Source code(zip)
Owner
Jeong Ukjae
Machine Learning Engineer