Dataset:
wmt15
Warning: There are issues with the Common Crawl corpus data (training-parallel-commoncrawl.tgz); we have contacted the WMT organizers.
Translation dataset based on the data from statmt.org.

Versions exist for different years, using a combination of data sources. The base `wmt` allows you to create a custom dataset by choosing your own data/language pair. This can be done as follows:
```python
import datasets
from datasets import inspect_dataset, load_dataset_builder

inspect_dataset("wmt15", "path/to/scripts")
builder = load_dataset_builder(
    "path/to/scripts/wmt_utils.py",
    language_pair=("fr", "de"),
    subsets={
        datasets.Split.TRAIN: ["commoncrawl_frde"],
        datasets.Split.VALIDATION: ["euelections_dev2019"],
    },
)

# Standard version
builder.download_and_prepare()
ds = builder.as_dataset()

# Streamable version
ds = builder.as_streaming_dataset()
```
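For the pre-built configurations listed below (such as cs-en), a plain `load_dataset` call is enough; a minimal sketch, assuming the standard Hugging Face `datasets` API:

```python
from datasets import load_dataset

# Download and prepare the standard cs-en configuration of wmt15.
ds = load_dataset("wmt15", "cs-en")

# The result is a DatasetDict with "train", "validation" and "test" splits.
print(ds)
```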
An example from the "validation" split is shown below. The data fields are the same among all splits.
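Each record exposes a single `translation` field mapping language codes to the parallel sentences; a minimal sketch of inspecting one validation record, assuming the `cs-en` configuration and the standard `datasets` API:

```python
from datasets import load_dataset

ds = load_dataset("wmt15", "cs-en")

# Each record holds a single "translation" dict mapping language codes
# to the parallel sentences.
example = ds["validation"][0]
print(example["translation"]["cs"])
print(example["translation"]["en"])
```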
| name | train | validation | test |
|---|---|---|---|
| cs-en | 959768 | 3003 | 2656 |
```
@InProceedings{bojar-EtAl:2015:WMT,
  author    = {Bojar, Ond{\v{r}}ej and Chatterjee, Rajen and Federmann, Christian and Haddow, Barry and Huck, Matthias and Hokamp, Chris and Koehn, Philipp and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Scarton, Carolina and Specia, Lucia and Turchi, Marco},
  title     = {Findings of the 2015 Workshop on Statistical Machine Translation},
  booktitle = {Proceedings of the Tenth Workshop on Statistical Machine Translation},
  month     = {September},
  year      = {2015},
  address   = {Lisbon, Portugal},
  publisher = {Association for Computational Linguistics},
  pages     = {1--46},
  url       = {http://aclweb.org/anthology/W15-3001}
}
```
Thanks to @thomwolf and @patrickvonplaten for adding this dataset.