Model:
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26
The CAMeLBERT-Mix DID Madar Corpus26 model is a dialect identification (DID) model built by fine-tuning the CAMeLBERT-Mix model. For the fine-tuning, we used the MADAR Corpus 26 dataset, which includes 26 labels. Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models". Our fine-tuning code can be found here.
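For orientation, the following is a minimal fine-tuning sketch using the transformers Trainer. It is not the authors' actual fine-tuning code (see the paper and the linked repository for that); the toy examples, the two-label set, and the hyperparameters are placeholder assumptions.

import torch
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Toy examples standing in for MADAR Corpus 26 (assumed format: one sentence
# per example, labeled with a city code; the real dataset has 26 labels).
texts = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
labels = ['CAI', 'DOH']
label2id = {label: i for i, label in enumerate(sorted(set(labels)))}

tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix')
model = AutoModelForSequenceClassification.from_pretrained(
    'CAMeL-Lab/bert-base-arabic-camelbert-mix', num_labels=len(label2id))

class DIDDataset(torch.utils.data.Dataset):
    """Wraps tokenized sentences and integer label ids for the Trainer."""
    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True, padding=True)
        self.label_ids = [label2id[label] for label in labels]
    def __len__(self):
        return len(self.label_ids)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item['labels'] = torch.tensor(self.label_ids[idx])
        return item

# Placeholder hyperparameters; the values the authors actually used are reported in the paper.
args = TrainingArguments(output_dir='did-madar-corpus26',
                         num_train_epochs=3,
                         learning_rate=3e-5)
Trainer(model=model, args=args, train_dataset=DIDDataset(texts, labels)).train()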
You can use the CAMeLBERT-Mix DID Madar Corpus26 model as part of the transformers pipeline. This model will also be available in CAMeL Tools soon.
To use the model with the transformers pipeline:
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'CAI', 'score': 0.8751305937767029},
 {'label': 'DOH', 'score': 0.9867215156555176}]
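The predicted labels are MADAR city codes: in the example above, CAI (Cairo) is returned for the Egyptian Arabic sentence and DOH (Doha) for the Gulf Arabic one.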
Note: to download our models, you need transformers>=3.5.0. Otherwise, you can download the models manually.
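If you are pinned to an older transformers, one workaround is to fetch the model files yourself and load them from a local directory. The sketch below assumes the repository has already been cloned; the local path is a placeholder.

# Assumed manual download, e.g.:
#   git lfs install
#   git clone https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

local_dir = './bert-base-arabic-camelbert-mix-did-madar-corpus26'  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForSequenceClassification.from_pretrained(local_dir)
did = pipeline('text-classification', model=model, tokenizer=tokenizer)
print(did(['عامل ايه ؟']))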
@inproceedings{inoue-etal-2021-interplay,
    title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
    author = "Inoue, Go  and
      Alhafni, Bashar  and
      Baimukan, Nurpeiis  and
      Bouamor, Houda  and
      Habash, Nizar",
    booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
    month = apr,
    year = "2021",
    address = "Kyiv, Ukraine (Online)",
    publisher = "Association for Computational Linguistics",
    abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}