Model:

racai/distilbert-base-romanian-uncased


Romanian DistilBERT

This repository contains the uncased Romanian DistilBERT (referred to as Distil-RoBERT-base in the paper). The teacher model used for distillation is: readerbench/RoBERT-base

The model was introduced in this paper. The adjacent code can be found here.

Usage

from transformers import AutoTokenizer, AutoModel

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained("racai/distilbert-base-romanian-uncased")
model = AutoModel.from_pretrained("racai/distilbert-base-romanian-uncased")

# tokenize a test sentence
input_ids = tokenizer.encode("aceasta este o propoziție de test.", add_special_tokens=True, return_tensors="pt")

# run the tokens through the model
outputs = model(input_ids)

print(outputs)
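
The raw output is a BaseModelOutput whose last_hidden_state holds one vector per token. Below is a minimal sketch of turning that into a single sentence embedding via mean pooling over the non-padding tokens (the pooling strategy is an illustrative choice, not something prescribed by this model card):

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("racai/distilbert-base-romanian-uncased")
model = AutoModel.from_pretrained("racai/distilbert-base-romanian-uncased")

# tokenize with an attention mask so padding can be ignored during pooling
encoded = tokenizer("aceasta este o propoziție de test.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoded)

# average the last hidden state over the tokens marked by the attention mask
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)

print(sentence_embedding.shape)  # torch.Size([1, 768]) for this base-sized model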

Model Size

The model is 35% smaller than its teacher, RoBERT-base.

Model                             Size (MB)   Params (Millions)
RoBERT-base                       441         114
distilbert-base-romanian-uncased  282         72
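
The parameter counts above can be checked directly from the checkpoints; a quick sketch (the teacher, readerbench/RoBERT-base, can be inspected the same way):

from transformers import AutoModel

# count the student's parameters; swap in "readerbench/RoBERT-base" to check the teacher
student = AutoModel.from_pretrained("racai/distilbert-base-romanian-uncased")
num_params = sum(p.numel() for p in student.parameters())
print(f"{num_params / 1e6:.1f}M parameters")  # should land near the 72M reported above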

Evaluation

We evaluated the model on 5 Romanian tasks, in comparison with RoBERT-base:

  • UPOS: Universal Part-of-Speech (F1-macro)
  • XPOS: Extended Part-of-Speech (F1-macro)
  • NER: Named Entity Recognition (F1-macro)
  • SAPN: Sentiment Analysis - Positive vs. Negative (Accuracy)
  • SAR: Sentiment Analysis - Rating (F1-macro)
  • DI: Dialect Identification (F1-macro)
  • STS: Semantic Textual Similarity (Pearson correlation)

Model                             UPOS    XPOS    NER     SAPN    SAR     DI      STS
RoBERT-base                       98.02   97.15   85.14   98.30   79.40   96.07   81.18
distilbert-base-romanian-uncased  97.12   95.79   83.11   98.01   79.58   96.11   79.80
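
The actual evaluation code is linked above; purely as an illustration of how the model plugs into one of these tasks, here is a hedged sketch of fine-tuning it for SAPN-style binary sentiment classification (the example sentences and labels are made up and not taken from the evaluation datasets):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("racai/distilbert-base-romanian-uncased")
# two labels for positive vs. negative sentiment; token-level tasks such as
# UPOS/XPOS/NER would use AutoModelForTokenClassification instead
model = AutoModelForSequenceClassification.from_pretrained(
    "racai/distilbert-base-romanian-uncased", num_labels=2
)

# toy examples only; the paper's evaluation uses the full task datasets
texts = ["produsul este excelent", "serviciul a fost foarte slab"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()   # one illustrative training step
optimizer.step()
print(float(outputs.loss))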

BibTeX entry and citation info

@article{avram2021distilling,
  title={Distilling the Knowledge of Romanian BERTs Using Multiple Teachers},
  author={Andrei-Marius Avram and Darius Catrina and Dumitru-Clementin Cercel and Mihai Dascălu and Traian Rebedea and Vasile Păiş and Dan Tufiş},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.12650}
}