Overview

tfds-korean

A collection of Korean text datasets, ready to use with TensorFlow Datasets.

Dataset Catalog | PyPI

Usage

Installation

pip install tfds-korean

Loading dataset

import tensorflow_datasets as tfds
import tfds_korean.nsmc # register nsmc dataset

ds = tfds.load('nsmc')

train_ds = ds['train'].batch(32)
test_ds = ds['test'].batch(128)

# define model
# ....
# ....

model.fit(train_ds)
model.evaluate(test_ds)
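
For a runnable end-to-end version of the snippet above, the sketch below fills in the "# define model" placeholder with a tiny Keras text classifier. It is a minimal sketch, not the project's official example: the feature keys ('document', 'label') are assumptions about what nsmc yields, so check the Dataset Catalog entry for the actual names.

import tensorflow as tf
import tensorflow_datasets as tfds
import tfds_korean.nsmc  # register nsmc dataset

ds = tfds.load('nsmc')

# Assumed feature keys; verify against the nsmc catalog entry.
def to_pair(example):
    return example['document'], example['label']

train_ds = ds['train'].map(to_pair).batch(32)
test_ds = ds['test'].map(to_pair).batch(128)

# Tiny bag-of-embeddings classifier, enough to smoke-test the pipeline.
vectorize = tf.keras.layers.TextVectorization(max_tokens=20000, output_sequence_length=64)
vectorize.adapt(train_ds.map(lambda text, label: text))

model = tf.keras.Sequential([
    vectorize,
    tf.keras.layers.Embedding(20000, 64),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

model.fit(train_ds)
model.evaluate(test_ds)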

See the Dataset Catalog page for the full list of datasets and the details of each one.

Examples

Licenses

The license for this repository and the licenses for the individual datasets apply separately. Please check each dataset's license and website before using it, and note that this library does not host or distribute any of the datasets.

Comments
  • [Dataset Request] sae4k

    Dataset Information

    • Dataset Name:
• Preferred code name (e.g. korean_chatbot_qa_data): sae4k
    • Dataset description:
    • Homepage: https://github.com/warnikchow/sae4k
    • Citation:

    Additional Context

    dataset request 
    opened by jeongukjae 2
  • [Dataset Request] namuwiki corpus

    Dataset Information

    • Dataset Name: namuwiki corpus
• Preferred code name (e.g. korean_chatbot_qa_data):
    • Dataset description:
    • Homepage: https://github.com/jeongukjae/namuwiki-corpus
    • Citation:
    • License:

    Additional Context

A Namuwiki corpus pre-segmented into sentences.

    dataset request 
    opened by jeongukjae 1
  • [Dataset Request] korean wikipedia corpus

    Dataset Information

• Dataset Name: Korean Wikipedia corpus
• Preferred code name (e.g. korean_chatbot_qa_data): korean_wikipedia_corpus
    • Dataset description:
    • Homepage: https://github.com/jeongukjae/korean-wikipedia-corpus
    • Citation:
    • License:

    Additional Context

kowikitext is good enough, but it is inconvenient to use at the sentence level, so I generated a corpus from a Korean Wikipedia dump that is already split into sentences (segmented with kss).

    FeaturesDict({
        'content': Sequence(Text(shape=(), dtype=tf.string)),
        'title': Text(shape=(), dtype=tf.string),
    })
    

If content is made to hold a tensor with TensorSpec(shape=[None], dtype=tf.string) like this, it should be convenient for distillation or sentence-level unsupervised learning.
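
As a rough illustration of that convenience (a sketch only: the registration module name and split are assumptions based on this proposal and the nsmc pattern), the per-article sentence lists could be flattened into one sentence-level stream like so:

import tensorflow as tf
import tensorflow_datasets as tfds
import tfds_korean.korean_wikipedia_corpus  # module name assumed

ds = tfds.load('korean_wikipedia_corpus', split='train')

# 'content' holds a variable number of sentences per article, so flat_map
# turns the article-level dataset into a sentence-level one.
sentences = ds.flat_map(
    lambda article: tf.data.Dataset.from_tensor_slices(article['content']))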

    dataset request before-release 
    opened by jeongukjae 1
  • [Dataset Request] KLUE

    Dataset Information

    • Dataset Name: KLUE
• Preferred code name (e.g. korean_chatbot_qa_data): klue_dp, klue_mrc, ...
    • Dataset description:
    • Homepage:
    • Citation:
    • License:

    Additional Context

    https://github.com/KLUE-benchmark/KLUE https://arxiv.org/pdf/2105.09680v1.pdf

    • [x] dp @jeongukjae
    • [x] mrc @harrydrippin
    • [x] ner @jeongukjae
    • [x] nli @jeongukjae
    • [x] re @jeongukjae
    • [x] sts @jeongukjae
    • [x] wos @jeongukjae
    • [x] ynat @jeongukjae
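
A minimal loading sketch for one of these tasks (assuming each task is registered as its own module following the nsmc pattern; the module and split names are guesses, so consult the catalog):

import tensorflow_datasets as tfds
import tfds_korean.klue_nli  # module name assumed

nli = tfds.load('klue_nli')
train_ds = nli['train'].batch(32)  # split names may differ; check the catalog entry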
    dataset request before-release 
    opened by jeongukjae 1
  • [Dataset Request] namuwikitext

    Dataset Information

    • Dataset Name: Wikitext format dataset of Namuwiki
• Preferred code name (e.g. korean_chatbot_qa_data): namuwikitext
• Dataset description: A wikitext-format text file built from the Namuwiki dump data. For training and evaluation, it is split by wiki page into train (99%), dev (0.5%), and test (0.5%).
    • Homepage: https://github.com/lovit/namuwikitext
    • Citation:

    Additional Context

    https://github.com/lovit/namuwikitext/issues/10

I opened the issue above because the numbers don't match the dataset counts in the README, but there has been no response. It's probably best to add it as it appears in Korpora for now and revise it later.

    dataset request 
    opened by jeongukjae 1
  • [Dataset Request] KorQuAD

    Dataset Information

    • Dataset Name: KorQuAD 1.0
• Preferred code name (e.g. korean_chatbot_qa_data): korquad_10
• Dataset description: KorQuAD 1.0 is a dataset created for Korean machine reading comprehension. The answer to every question is a sub-span of a paragraph of the corresponding Wikipedia article. It is constructed in the same way as the Stanford Question Answering Dataset (SQuAD) v1.0.
    • Homepage: https://korquad.github.io/KorQuad%201.0/
    • Citation:

    Dataset Information

    • Dataset Name: KorQuAD 2.0
• Preferred code name (e.g. korean_chatbot_qa_data): korquad_20
• Dataset description: KorQuAD 2.0 is a Korean machine reading comprehension dataset consisting of 100,000+ question-answer pairs in total, including the 20,000+ pairs from KorQuAD 1.0. Unlike KorQuAD 1.0, answers must be found across the entire Wikipedia article rather than within one or two paragraphs. Because some documents are very long, search time needs to be taken into account, and since the documents also contain tables and lists, understanding document structure through HTML tags is required. This dataset should make machine reading comprehension possible over documents of diverse formats and lengths.
    • Homepage: https://korquad.github.io
    • Citation:

    Additional Context

It should be fine to add only KorQuAD 1.0 for now and add 2.0 later.

    dataset request before-release 
    opened by jeongukjae 1
• [Dataset Request] Korea Maritime and Ocean University NER dataset

    Dataset Information

• Dataset Name: Korea Maritime and Ocean University NLP Lab NER dataset
• Preferred code name (e.g. korean_chatbot_qa_data): kmounlp_ner
• Dataset description: A technical report standardizing Korean named-entity definitions and labels, and a named-entity morpheme corpus built on top of it.
    • Homepage: https://github.com/kmounlp/NER
    • Citation:

    Additional Context

Report: https://github.com/kmounlp/NER/blob/master/NER%20Guideline%20(ver%201.0).pdf

    dataset request 
    opened by jeongukjae 1
  • Add CONTRIBUTING.md

• [ ] An explanation of the languages used in the project. Usage and dataset descriptions should be written in English where possible, but wouldn't it be better to keep issue/PR communication in Korean?
• [ ] How to add a dataset
• [ ] A brief explanation of issues/PRs/Discussions
• [ ] A note for anyone who wants to co-maintain the project
• [ ] An explanation of dataset license issues
    documentation before-release 
    opened by jeongukjae 1
• Note the current wikitext issues in the dataset catalog

    https://github.com/jeongukjae/tfds-korean/issues/12#issuecomment-826358469

For the reasons above, it seems worth at least noting something like "filter before use" or "there are empty examples in the middle".
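
A hedged sketch of that workaround (the feature key 'text' and the split are assumptions; check the kowikitext/namuwikitext catalog entries for the real keys):

import tensorflow as tf
import tensorflow_datasets as tfds
import tfds_korean.kowikitext  # module name assumed

ds = tfds.load('kowikitext', split='train')

# Drop the empty examples mentioned above before training on the corpus.
non_empty = ds.filter(lambda ex: tf.strings.length(ex['text']) > 0)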

    documentation 
    opened by jeongukjae 0
  • [Dataset Request] sci-news-sum-kr-50

    Dataset Information

    • Dataset Name:
• Preferred code name (e.g. korean_chatbot_qa_data): sci_news_sum_kr_50
    • Dataset description:
    • Homepage: https://github.com/theeluwin/sci-news-sum-kr-50
    • Citation:

    Additional Context

    dataset request 
    opened by jeongukjae 0
  • [Dataset Request] kowikitext

    Dataset Information

• Dataset Name: Korean wikitext
• Preferred code name (e.g. korean_chatbot_qa_data): kowikitext
    • Dataset description: Wikitext format Korean corpus
    • Homepage: https://github.com/lovit/kowikitext
    • Citation:

    Additional Context

This dataset appears to have the same problem as #12, but for now it follows the Korpora approach. Here too, splitting on headings leaves rows like = 분류~~~ =, so the corpus cannot be reconstructed exactly at the document level.

    dataset request 
    opened by jeongukjae 0
  • [Dataset Request] korean_unsmile_dataset

    Dataset Information

    • Dataset Name:
• Preferred code name (e.g. korean_chatbot_qa_data):
    • Dataset description:
    • Homepage: https://github.com/smilegate-ai/korean_unsmile_dataset
    • Citation:
    • License:

    Additional Context

    dataset request 
    opened by jeongukjae 0
• Let the dataset catalog builder skip specific datasets

Currently, every dataset has to exist locally before the catalog can be built, which is too much of a burden. Even on the current develop branch alone, that means keeping roughly 30GB locally.

If the dataset versions don't change, the catalog only needs rebuilding when the build_catalog.py script changes, so let's make it possible to build just a specific dataset page plus the index page, while still keeping full-catalog builds available.

    documentation 
    opened by jeongukjae 0
  • [Dataset Request] Korean Single Speaker Speech Dataset

    Dataset Information

    • Dataset Name: Korean Single Speaker Speech Dataset
• Preferred code name (e.g. korean_chatbot_qa_data):
    • Dataset description:
    • Homepage: https://www.kaggle.com/bryanpark/korean-single-speaker-speech-dataset
    • Citation:
    • License:

    Additional Context

    dataset request 
    opened by jeongukjae 0
• [Dataset Request] Sejong Corpus

    Dataset Information

    • Dataset Name:
• Preferred code name (e.g. korean_chatbot_qa_data): sejong_corpus
    • Dataset description:
    • Homepage: https://ithub.korean.go.kr/user/total/database/corpusManager.do
    • Citation:
    • License:

    Additional Context

Sejong corpus: https://ithub.korean.go.kr/user/total/database/corpusManager.do
Sejong corpus (parallel): https://ithub.korean.go.kr/user/total/database/etcManager.do

Even though the license makes commercial use difficult, I think it is a corpus well worth using, so it seems good to add it for now.

    dataset request 
    opened by jeongukjae 0
  • [Dataset Request] kcbert

    Dataset Information

    • Dataset Name:
• Preferred code name (e.g. korean_chatbot_qa_data): kcbert
    • Dataset description:
    • Homepage: https://github.com/Beomi/KcBERT
    • Citation:

    Additional Context

If we add this, it should be extremely useful!!

    dataset request 
    opened by jeongukjae 4
  • [Dataset Request] KAIST Corpus

    Dataset Information

    • Dataset Name: kaist corpus
• Preferred code name (e.g. korean_chatbot_qa_data): kaist_corpus
    • Dataset description:
    • Homepage: http://semanticweb.kaist.ac.kr/home/index.php/KAIST_Corpus
    • Citation:

    Additional Context

    wontfix dataset request 
    opened by jeongukjae 1
Releases(0.4.0)
  • 0.4.0(Sep 19, 2021)

    • Update KLUE dataset to 1.1.0 https://github.com/jeongukjae/tfds-korean/commit/e954ec4550ec5db015d3f93750e6763aca5a9b48
    • Reorder ClassLabel names of NLI datasets. https://github.com/jeongukjae/tfds-korean/commit/be3e8cba7b9d537969b9c08738dd6df36b0145bc
  • 0.3.0(Jun 16, 2021)

    • add korean_wikipedia_corpus (https://jeongukjae.github.io/tfds-korean/datasets/korean_wikipedia_corpus.html)
    • add namuwiki_corpus (https://jeongukjae.github.io/tfds-korean/datasets/namuwiki_corpus.html)
  • 0.2.0(Jun 6, 2021)

    • add KLUE benchmark datasets
    • update dataset catalog (https://github.com/jeongukjae/tfds-korean/commit/eb1c72d0a716aba7326276e77e8e6f94976bb579, https://github.com/jeongukjae/tfds-korean/commit/614616b82d0bbdaecbc4ec50e0cfc67b78b646c2)
    • fix klue_ner supervised key bug (https://github.com/jeongukjae/tfds-korean/commit/10f765f01b9f3952e298395779dcf8efeefde93a)
  • 0.1.3(May 29, 2021)

  • 0.1.2(May 25, 2021)

  • 0.1.1(Apr 30, 2021)

  • 0.1.0(Apr 29, 2021)

    • Add kowikitext and namuwikitext dataset
    • Add missing licenses and bibtex.
    • Add license section in catalog page.
    • Add example links in catalog page.
Owner
Jeong Ukjae
Machine Learning Engineer