Model:
CAMeL-Lab/bert-base-arabic-camelbert-msa
CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants. We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three. We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth). The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."
This model card describes CAMeLBERT-MSA (`bert-base-arabic-camelbert-msa`), a model pre-trained on the entire MSA dataset.
| | Model | Variant | Size | #Words |
|---|---|---|---|---|
| | `bert-base-arabic-camelbert-mix` | CA, DA, MSA | 167GB | 17.3B |
| | `bert-base-arabic-camelbert-ca` | CA | 6GB | 847M |
| | `bert-base-arabic-camelbert-da` | DA | 54GB | 5.8B |
| ✔ | `bert-base-arabic-camelbert-msa` | MSA | 107GB | 12.6B |
| | `bert-base-arabic-camelbert-msa-half` | MSA | 53GB | 6.3B |
| | `bert-base-arabic-camelbert-msa-quarter` | MSA | 27GB | 3.1B |
| | `bert-base-arabic-camelbert-msa-eighth` | MSA | 14GB | 1.6B |
| | `bert-base-arabic-camelbert-msa-sixteenth` | MSA | 6GB | 746M |
You can use the released model for either masked language modeling or next sentence prediction. However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. We release our fine-tuning code here; a minimal fine-tuning sketch is also shown after the feature-extraction examples below.
How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]',
  'score': 0.08507660031318665,
  'token': 2854,
  'token_str': 'العمل'},
 {'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
  'score': 0.058905381709337234,
  'token': 3696,
  'token_str': 'الحياة'},
 {'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]',
  'score': 0.04660581797361374,
  'token': 6232,
  'token_str': 'النجاح'},
 {'sequence': '[CLS] الهدف من الحياة هو الربح. [SEP]',
  'score': 0.04156001657247543,
  'token': 12413,
  'token_str': 'الربح'},
 {'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]',
  'score': 0.03534102067351341,
  'token': 3088,
  'token_str': 'الحب'}]
```
Note: to download our models, you need `transformers>=3.5.0`. Otherwise, you can download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
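Since the model is mostly intended to be fine-tuned, here is a minimal sketch of one possible fine-tuning workflow using the Hugging Face `Trainer` for sentence classification (e.g. sentiment analysis). This is not the released fine-tuning code: the toy in-memory dataset, label scheme, and training settings are placeholders chosen for illustration only.

```python
# A minimal fine-tuning sketch, assuming a sentence-classification task
# (e.g. sentiment analysis). The toy dataset, labels, and settings below
# are placeholders, not the released CAMeL Tools fine-tuning setup.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = 'CAMeL-Lab/bert-base-arabic-camelbert-msa'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Hypothetical toy training data: Arabic sentences with sentiment labels
# (0 = negative, 1 = neutral, 2 = positive).
train_texts = ["أحببت هذا الفيلم كثيرا", "كانت الخدمة سيئة جدا"]
train_labels = [2, 0]
encodings = tokenizer(train_texts, truncation=True, padding=True)

class ToyDataset(torch.utils.data.Dataset):
    """Wraps tokenizer output and labels as a torch Dataset for Trainer."""
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item['labels'] = torch.tensor(self.labels[idx])
        return item
    def __len__(self):
        return len(self.labels)

args = TrainingArguments(output_dir='./camelbert-msa-finetuned',
                         num_train_epochs=3, per_device_train_batch_size=8)
trainer = Trainer(model=model, args=args,
                  train_dataset=ToyDataset(encodings, train_labels))
trainer.train()
```

For the actual hyperparameters and task-specific setups used in the paper, refer to the released fine-tuning code linked above.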
We use the original implementation released by Google for pre-training. We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
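As a quick check (a small sketch, not part of the original card), the architecture settings can be read directly from the published model configuration; they should come back as the standard BERT-base values (12 layers, 768 hidden units, 12 attention heads):

```python
# Inspect the published configuration of the released checkpoint.
from transformers import AutoConfig

config = AutoConfig.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
print(config.num_hidden_layers, config.hidden_size, config.num_attention_heads)
```

Fine-tuning results across the pre-training variants and data sizes are reported in the tables below.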
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
|---|---|---|---|---|---|---|---|---|---|---|
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
|---|---|---|---|---|---|---|---|---|---|
| Variant-wise-average [1] | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
[1]: Variant-wise-average refers to the average over a group of tasks in the same language variant.
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
```bibtex
@inproceedings{inoue-etal-2021-interplay,
    title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
    author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar",
    booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
    month = apr,
    year = "2021",
    address = "Kyiv, Ukraine (Online)",
    publisher = "Association for Computational Linguistics",
    abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```