Dataset: wmt16
Warning: There are issues with the Common Crawl corpus data (training-parallel-commoncrawl.tgz). We have contacted the WMT organizers.
Translation dataset based on the data from statmt.org.
Versions for different years use a combination of data sources. The base `wmt` allows you to create a custom dataset by choosing your own data/language pair. This can be done as follows:
```python
import datasets
from datasets import inspect_dataset, load_dataset_builder

# Copy the wmt16 loading scripts locally so they can be customized
inspect_dataset("wmt16", "path/to/scripts")

# Build a custom language pair from the available subsets
builder = load_dataset_builder(
    "path/to/scripts/wmt_utils.py",
    language_pair=("fr", "de"),
    subsets={
        datasets.Split.TRAIN: ["commoncrawl_frde"],
        datasets.Split.VALIDATION: ["euelections_dev2019"],
    },
)

# Standard version
builder.download_and_prepare()
ds = builder.as_dataset()

# Streamable version
ds = builder.as_streaming_dataset()
```
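If you only need one of the predefined wmt16 configurations, a minimal sketch using the standard `load_dataset` entry point looks like this (the config name "cs-en" is an example taken from the splits table below):

```python
from datasets import load_dataset

# Load the predefined cs-en configuration of wmt16;
# returns a DatasetDict with train/validation/test splits.
ds = load_dataset("wmt16", "cs-en")
print(ds)
```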
An example from the 'validation' split looks as follows.
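The original example is not reproduced here; as an illustrative sketch, a cs-en instance has the following shape (the sentence strings are placeholders, not actual corpus content):

```python
{
    "translation": {
        "cs": "...",  # Czech sentence (placeholder)
        "en": "..."   # English sentence (placeholder)
    }
}
```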
The data fields are the same among all splits.
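A quick way to check this, assuming the cs-en configuration, is to compare the `features` of each split; this sketch expects a single `translation` field keyed by the language codes:

```python
from datasets import load_dataset

ds = load_dataset("wmt16", "cs-en")
# Each split should expose the same schema: one 'translation' field
# mapping the language codes ('cs', 'en') to sentence strings.
for split_name, split in ds.items():
    print(split_name, split.features)
```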
| name  | train  | validation | test |
|-------|--------|------------|------|
| cs-en | 997240 | 2656       | 2999 |
```
@InProceedings{bojar-EtAl:2016:WMT1,
  author    = {Bojar, Ond\v{r}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Koehn, Philipp and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Neveol, Aurelie and Neves, Mariana and Popel, Martin and Post, Matt and Rubino, Raphael and Scarton, Carolina and Specia, Lucia and Turchi, Marco and Verspoor, Karin and Zampieri, Marcos},
  title     = {Findings of the 2016 Conference on Machine Translation},
  booktitle = {Proceedings of the First Conference on Machine Translation},
  month     = {August},
  year      = {2016},
  address   = {Berlin, Germany},
  publisher = {Association for Computational Linguistics},
  pages     = {131--198},
  url       = {http://www.aclweb.org/anthology/W/W16/W16-2301}
}
```
Thanks to @thomwolf and @patrickvonplaten for adding this dataset.