Model:

pysentimiento/robertuito-irony

Irony Detection in Spanish

robertuito-irony

Repository: https://github.com/pysentimiento/pysentimiento/

An irony detection model trained on the IRosVA 2019 dataset. The base model is RoBERTuito, a RoBERTa model trained on Spanish tweets.

The positive class marks irony; the negative class marks non-irony.
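The card itself does not include a usage snippet; the sketch below shows one plausible way to query the model, assuming the `create_analyzer`/`predict` API documented in the pysentimiento repository. The example tweet and the exact label strings in the output are illustrative assumptions.

```python
# A minimal sketch, assuming the pysentimiento API documented in its
# repository; not an official snippet from this model card.
from pysentimiento import create_analyzer

# Build an irony analyzer for Spanish; this loads
# pysentimiento/robertuito-irony from the Hugging Face hub.
analyzer = create_analyzer(task="irony", lang="es")

result = analyzer.predict("Qué lindo día para quedarse adentro trabajando")
print(result.output)  # predicted class (positive = irony, negative = non-irony)
print(result.probas)  # probability assigned to each class
```

Loading the checkpoint directly with Hugging Face transformers should also work, but RoBERTuito expects its input normalized (user handles, hashtags, and emoji preprocessed); pysentimiento's `preprocess_tweet` helper performs this normalization, and `create_analyzer` applies it internally.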

Results

Results for the four tasks evaluated in pysentimiento, expressed as macro F1 scores.

| model         | emotion       | hate_speech   | irony         | sentiment     |
|---------------|---------------|---------------|---------------|---------------|
| robertuito    | 0.560 ± 0.010 | 0.759 ± 0.007 | 0.739 ± 0.005 | 0.705 ± 0.003 |
| roberta       | 0.527 ± 0.015 | 0.741 ± 0.012 | 0.721 ± 0.008 | 0.670 ± 0.006 |
| bertin        | 0.524 ± 0.007 | 0.738 ± 0.007 | 0.713 ± 0.012 | 0.666 ± 0.005 |
| beto_uncased  | 0.532 ± 0.012 | 0.727 ± 0.016 | 0.701 ± 0.007 | 0.651 ± 0.006 |
| beto_cased    | 0.516 ± 0.012 | 0.724 ± 0.012 | 0.705 ± 0.009 | 0.662 ± 0.005 |
| mbert_uncased | 0.493 ± 0.010 | 0.718 ± 0.011 | 0.681 ± 0.010 | 0.617 ± 0.003 |
| biGRU         | 0.264 ± 0.007 | 0.592 ± 0.018 | 0.631 ± 0.011 | 0.585 ± 0.011 |

Note that for hate speech, these are the results of Subtask B (HS + TR + AG detection) at SemEval 2019.
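For reference, macro F1 is the unweighted mean of the per-class F1 scores, so the (typically minority) ironic class counts as much as the non-ironic class. A toy illustration with scikit-learn, using invented labels rather than the benchmark data:

```python
from sklearn.metrics import f1_score

# Toy labels for illustration only; 1 = ironic, 0 = non-ironic.
y_true = [1, 0, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0]

# Macro F1: compute F1 for each class separately, then take the
# unweighted mean, so both classes contribute equally.
print(f1_score(y_true, y_pred, average="macro"))  # -> ~0.733
```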

Citation

If you use this model in your research, please cite the pysentimiento and RoBERTuito papers, as well as the IRosVA task overview:

@misc{perez2021pysentimiento,
      title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
      author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
      year={2021},
      eprint={2106.09462},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@inproceedings{perez-etal-2022-robertuito,
    title = "{R}o{BERT}uito: a pre-trained language model for social media text in {S}panish",
    author = "P{\'e}rez, Juan Manuel  and
      Furman, Dami{\'a}n Ariel  and
      Alonso Alemany, Laura  and
      Luque, Franco M.",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.785",
    pages = "7235--7243",
    abstract = "Since BERT appeared, Transformer language models and transfer learning have become state-of-the-art for natural language processing tasks. Recently, some works geared towards pre-training specially-crafted models for particular domains, such as scientific papers, medical documents, user-generated texts, among others. These domain-specific models have been shown to improve performance significantly in most tasks; however, for languages other than English, such models are not widely available. In this work, we present RoBERTuito, a pre-trained language model for user-generated text in Spanish, trained on over 500 million tweets. Experiments on a benchmark of tasks involving user-generated text showed that RoBERTuito outperformed other pre-trained language models in Spanish. In addition to this, our model has some cross-lingual abilities, achieving top results for English-Spanish tasks of the Linguistic Code-Switching Evaluation benchmark (LinCE) and also competitive performance against monolingual models in English Twitter tasks. To facilitate further research, we make RoBERTuito publicly available at the HuggingFace model hub together with the dataset used to pre-train it.",
}

@inproceedings{ortega2019overview,
  title={Overview of the task on irony detection in Spanish variants},
  author={Ortega-Bueno, Reynier and Rangel, Francisco and Hern{\'a}ndez Far{\'\i}as, D and Rosso, Paolo and Montes-y-G{\'o}mez, Manuel and Medina Pagola, Jos{\'e} E},
  booktitle={Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019), co-located with the 34th Conference of the Spanish Society for Natural Language Processing (SEPLN 2019). CEUR-WS.org},
  volume={2421},
  pages={229--256},
  year={2019}
}