
MARBERT is one of the three models described in our ACL 2021 paper. MARBERT is a large-scale pre-trained masked language model focused on both Dialectal Arabic (DA) and MSA; Arabic has multiple varieties. To train MARBERT, we randomly sampled 1B Arabic tweets from a large in-house dataset of about 6B tweets. We only included tweets with at least 3 Arabic words, based on character string matching, regardless of whether the tweet contains non-Arabic strings. That is, as long as a tweet meets the 3-Arabic-word criterion, we do not remove its non-Arabic strings. The dataset makes up 128GB of text (15.6B tokens). We use the same network architecture as ARBERT (BERT-base), but without the next sentence prediction (NSP) objective, since tweets are short. See our documentation for how to modify the BERT code to remove NSP. For more information about MARBERT, please visit our GitHub repository.
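The two technical points above (the "at least 3 Arabic words" tweet filter and masked-language-model usage) can be illustrated with a short sketch. The regex-based word check is our own illustration of the character-matching criterion, not the released preprocessing code, and the Hugging Face model ID `UBC-NLP/MARBERT` is assumed to correspond to the published checkpoint.

```python
# Minimal sketch, under the assumptions stated above:
# (1) an illustrative version of the "at least 3 Arabic words" tweet filter,
# (2) loading MARBERT as a masked LM via the Hugging Face fill-mask pipeline.
import re

from transformers import pipeline

# Match runs of characters in the main Arabic Unicode block (U+0600-U+06FF).
ARABIC_WORD = re.compile(r"[\u0600-\u06FF]+")


def has_three_arabic_words(tweet: str) -> bool:
    """Keep a tweet if it contains >= 3 Arabic words; non-Arabic strings stay in place."""
    return len(ARABIC_WORD.findall(tweet)) >= 3


tweets = [
    "اللغة العربية جميلة جدا",      # 4 Arabic words -> kept
    "hello world http://t.co/xyz",  # 0 Arabic words -> dropped
    "صباح الخير everyone",          # 2 Arabic words -> dropped
]
kept = [t for t in tweets if has_three_arabic_words(t)]

# Model ID assumed; adjust if the released checkpoint is named differently.
fill_mask = pipeline("fill-mask", model="UBC-NLP/MARBERT")
print(fill_mask("اللغة [MASK] جميلة"))  # top predictions for the masked token
```

The filter intentionally keeps any non-Arabic content in a qualifying tweet, mirroring the description above; only tweets with fewer than three Arabic words are dropped.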

BibTeX

If you use our models (ARBERT, MARBERT, or MARBERTv2) in a scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):

@inproceedings{abdul-mageed-etal-2021-arbert,
    title = "{ARBERT} {\&} {MARBERT}: Deep Bidirectional Transformers for {A}rabic",
    author = "Abdul-Mageed, Muhammad  and
      Elmadany, AbdelRahim  and
      Nagoudi, El Moatez Billah",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.551",
    doi = "10.18653/v1/2021.acl-long.551",
    pages = "7088--7105",
    abstract = "Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model acquires the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large ( 3.4x larger size). Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository.",
}

Acknowledgements

We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, the Canada Foundation for Innovation, Compute Canada, and UBC ARC-Sockeye. We also thank the Google TensorFlow Research Cloud (TFRC) program for providing us with free TPU access.