English

Tommaso Caselli, Valerio Basile, Jelena Mitrović, Michael Granitzer

Model description

HateBERT is a pre-trained BERT model for English, obtained by further training the English BERT base-uncased model on over 1 million posts from banned Reddit communities. The model was developed jointly by the University of Groningen, the University of Turin, and the University of Passau.

For more details, see the paper published at WOAH 2021. The code and the fine-tuned models are available on OSF.
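
As a quick illustration, the sketch below loads the model as a masked language model with the Hugging Face transformers library. The model identifier `GroNLP/hateBERT` and the fill-mask usage are assumptions for illustration, not part of this card; check them against the official release before relying on them.

```python
# Minimal sketch: loading HateBERT with Hugging Face transformers.
# NOTE: the model id "GroNLP/hateBERT" is an assumption; verify it
# against the official release referenced above (OSF / WOAH 2021 paper).
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("GroNLP/hateBERT")
model = AutoModelForMaskedLM.from_pretrained("GroNLP/hateBERT")

# Fill-mask probe: since HateBERT was further pre-trained on RAL-E
# (banned Reddit communities), its mask predictions are expected to
# differ from those of the general bert-base-uncased model.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("This community is full of [MASK]."))
```

For downstream offensive or abusive language detection, one would typically load the checkpoint with a sequence classification head (e.g. `AutoModelForSequenceClassification`) and fine-tune it on a labeled dataset, as done in the paper.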

BibTeX entry and citation info

@inproceedings{caselli-etal-2021-hatebert,
    title = "{H}ate{BERT}: Retraining {BERT} for Abusive Language Detection in {E}nglish",
    author = "Caselli, Tommaso  and
      Basile, Valerio  and
      Mitrovi{\'c}, Jelena  and
      Granitzer, Michael",
    booktitle = "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.woah-1.3",
    doi = "10.18653/v1/2021.woah-1.3",
    pages = "17--25",
    abstract = "We introduce HateBERT, a re-trained BERT model for abusive language detection in English. The model was trained on RAL-E, a large-scale dataset of Reddit comments in English from communities banned for being offensive, abusive, or hateful that we have curated and made available to the public. We present the results of a detailed comparison between a general pre-trained language model and the retrained version on three English datasets for offensive, abusive language and hate speech detection tasks. In all datasets, HateBERT outperforms the corresponding general BERT model. We also discuss a battery of experiments comparing the portability of the fine-tuned models across the datasets, suggesting that portability is affected by compatibility of the annotated phenomena.",
}