Dataset:

masakhane/mafand


Dataset Card for MAFAND

Dataset Summary

MAFAND-MT is the largest machine translation benchmark for African languages in the news domain, covering 21 languages.

Supported Tasks and Leaderboards

Machine Translation

Languages

The languages covered are listed below; a short sketch for listing the matching dataset configurations follows the list:

  • Amharic
  • Bambara
  • Ghomálá'
  • Ewe
  • Fon
  • Hausa
  • Igbo
  • Kinyarwanda
  • Luganda
  • Luo
  • Mossi
  • Nigerian Pidgin
  • Chichewa
  • Shona
  • Swahili
  • Setswana
  • Twi
  • Wolof
  • Xhosa
  • Yoruba
  • Zulu
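
Each of these languages is paired with either English or French in the release. As a minimal sketch (assuming the standard Hugging Face Hub layout of masakhane/mafand), the available language-pair configurations can be enumerated with the datasets library:

>>> from datasets import get_dataset_config_names
>>> # Each configuration name is a language pair such as 'en-yor' or 'fr-bam'.
>>> configs = get_dataset_config_names('masakhane/mafand')
>>> print(configs)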

Dataset Structure

Data Instances

>>> from datasets import load_dataset
>>> data = load_dataset('masakhane/mafand', 'en-yor')

{"translation": {"src": "President Buhari will determine when to lift lockdown – Minister", "tgt": "Ààrẹ Buhari ló lè yóhùn padà lórí ètò kónílégbélé – Mínísítà"}}


{"translation": {"en": "President Buhari will determine when to lift lockdown – Minister", "yo": "Ààrẹ Buhari ló lè yóhùn padà lórí ètò kónílégbélé – Mínísítà"}}

Data Fields

  • "translation": name of the task
  • "src": the source language, e.g. en
  • "tgt": the target language, e.g. yo

Data Splits

Train/Dev/Test splits per language:

Language   Train    Dev   Test
amh            -    899   1037
bam         3302   1484   1600
bbj         2232   1133   1430
ewe         2026   1414   1563
fon         2637   1227   1579
hau         5865   1300   1500
ibo         6998   1500   1500
kin            -    460   1006
lug         4075   1500   1500
luo         4262   1500   1500
mos         2287   1478   1574
nya            -    483   1004
pcm         4790   1484   1574
sna            -    556   1005
swa        30782   1791   1835
tsn         2100   1340   1835
twi         3337   1284   1500
wol         3360   1506   1500
xho            -    486   1002
yor         6644   1544   1558
zul         3500   1239    998
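
Note that several language pairs (amh, kin, nya, sna, xho) ship without a training split. A small sketch for checking which splits a given configuration provides and how large they are (the en-yor configuration is used purely as an example; no particular split names are assumed):

>>> from datasets import load_dataset
>>> data = load_dataset('masakhane/mafand', 'en-yor')
>>> # Iterate over whatever splits this configuration exposes and report
>>> # the number of sentence pairs in each.
>>> for name, split in data.items():
...     print(name, len(split))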

Dataset Creation

Curation Rationale

MAFAND was created from the news domain, translated from English or French into the African languages.

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

Members of Masakhane

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

CC-BY-NC-4.0

Citation Information

@inproceedings{adelani-etal-2022-thousand,
    title = "A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation",
    author = "Adelani, David  and
      Alabi, Jesujoba  and
      Fan, Angela  and
      Kreutzer, Julia  and
      Shen, Xiaoyu  and
      Reid, Machel  and
      Ruiter, Dana  and
      Klakow, Dietrich  and
      Nabende, Peter  and
      Chang, Ernie  and
      Gwadabe, Tajuddeen  and
      Sackey, Freshia  and
      Dossou, Bonaventure F. P.  and
      Emezue, Chris  and
      Leong, Colin  and
      Beukman, Michael  and
      Muhammad, Shamsuddeen  and
      Jarso, Guyo  and
      Yousuf, Oreen  and
      Niyongabo Rubungo, Andre  and
      Hacheme, Gilles  and
      Wairagala, Eric Peter  and
      Nasir, Muhammad Umair  and
      Ajibade, Benjamin  and
      Ajayi, Tunde  and
      Gitau, Yvonne  and
      Abbott, Jade  and
      Ahmed, Mohamed  and
      Ochieng, Millicent  and
      Aremu, Anuoluwapo  and
      Ogayo, Perez  and
      Mukiibi, Jonathan  and
      Ouoba Kabore, Fatoumata  and
      Kalipe, Godson  and
      Mbaye, Derguene  and
      Tapo, Allahsera Auguste  and
      Memdjokam Koagne, Victoire  and
      Munkoh-Buabeng, Edwin  and
      Wagner, Valencia  and
      Abdulmumin, Idris  and
      Awokoya, Ayodele  and
      Buzaaba, Happy  and
      Sibanda, Blessing  and
      Bukula, Andiswa  and
      Manthalu, Sam",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.223",
    doi = "10.18653/v1/2022.naacl-main.223",
    pages = "3053--3070",
    abstract = "Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.",
}