Pretrained BERT Mini language model for Arabic
If you use this model in your work, please cite this paper:
```bibtex
@inproceedings{safaya-etal-2020-kuisail,
  title     = "{KUISAIL} at {S}em{E}val-2020 Task 12: {BERT}-{CNN} for Offensive Speech Identification in Social Media",
  author    = "Safaya, Ali and Abdullatif, Moutasem and Yuret, Deniz",
  booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
  month     = dec,
  year      = "2020",
  address   = "Barcelona (online)",
  publisher = "International Committee for Computational Linguistics",
  url       = "https://www.aclweb.org/anthology/2020.semeval-1.271",
  pages     = "2054--2059",
}
```
The arabic-bert-mini model was pretrained on ~8.2 billion words of Arabic text drawn from publicly available corpora and other Arabic resources, which sum up to ~95GB of text.
Notes on training data:
- Since Arabic characters do not have upper or lower case, there is no cased and uncased version of the model.
- The corpus and vocabulary are not restricted to Modern Standard Arabic; they contain some dialectal Arabic as well.
You can use this model by installing PyTorch or TensorFlow together with the Hugging Face transformers library, and then loading it directly like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-mini-arabic")
model = AutoModelForMaskedLM.from_pretrained("asafaya/bert-mini-arabic")
```
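As a quick sanity check, you can also query the model through the fill-mask pipeline. This is only an illustrative sketch: the example sentence below is our own (it means "The [MASK] language is beautiful.") and is not part of the model's documentation.

```python
from transformers import pipeline

# Load the masked-language-modeling pipeline with this model.
fill_mask = pipeline("fill-mask", model="asafaya/bert-mini-arabic")

# Ask the model to fill in the masked Arabic word (illustrative example sentence).
predictions = fill_mask("اللغة [MASK] جميلة.")
for p in predictions:
    print(p["token_str"], p["score"])
```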
For further details on the model's performance, or for any other queries, please refer to Arabic-BERT.
Thanks to Google for providing free TPUs for the training process, and to Hugging Face for hosting this model on their servers.