Pretrained Arabic BERT Mini Language Model
If you use this model in your work, please cite this paper:
```bibtex
@inproceedings{safaya-etal-2020-kuisail,
    title = "{KUISAIL} at {S}em{E}val-2020 Task 12: {BERT}-{CNN} for Offensive Speech Identification in Social Media",
    author = "Safaya, Ali and Abdullatif, Moutasem and Yuret, Deniz",
    booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
    month = dec,
    year = "2020",
    address = "Barcelona (online)",
    publisher = "International Committee for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.semeval-1.271",
    pages = "2054--2059",
}
```
The arabic-bert-mini model was pretrained on roughly 8.2 billion words:

- the Arabic version of OSCAR (filtered from Common Crawl)
- a recent dump of Arabic Wikipedia

and other Arabic resources, which together sum to about 95GB of text.
Notes on the training data:
You can use this model by installing torch or tensorflow together with the Huggingface transformers library. You can then initialize it directly like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the tokenizer and masked-language-model head for the checkpoint.
tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-mini-arabic")
model = AutoModelForMaskedLM.from_pretrained("asafaya/bert-mini-arabic")
```
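Once loaded, the checkpoint can also be exercised through the fill-mask pipeline. The following is a minimal sketch, not taken from the model card: it assumes the same model ID, and the Arabic example sentence is purely illustrative.

```python
from transformers import pipeline

# Build a fill-mask pipeline around the pretrained checkpoint.
fill_mask = pipeline("fill-mask", model="asafaya/bert-mini-arabic")

# BERT-style models predict the [MASK] token. The sentence below is an
# illustrative example meaning "the capital of Egypt is [MASK]."
for prediction in fill_mask("عاصمة مصر هي [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```

Each prediction is a dict containing the proposed token (`token_str`), its probability (`score`), and the completed sequence, so printing the top candidates is a quick sanity check that the weights loaded correctly.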
For further details on the model's performance, or for any other queries, please refer to Arabic-BERT.
Thanks to Google for providing free TPUs for the training process, and to Huggingface for hosting this model on their servers 😃