
tner/roberta-large-conll2003

This model was fine-tuned on the tner/conll2003 dataset via T-NER's hyper-parameter search. It achieves the following results on the test set:

  • F1 (micro): 0.924769027716674
  • Precision (micro): 0.9191883855168795
  • Recall (micro): 0.9304178470254958
  • F1 (macro): 0.9110950780089749
  • Precision (macro): 0.9030546238754271
  • Recall (macro): 0.9197126371122274

The per-entity F1 scores on the test set are as follows:

  • location: 0.9390573401380967
  • organization: 0.9107142857142857
  • other: 0.8247422680412372
  • person: 0.9698664181422801

The confidence intervals of the F1 scores are as follows:

  • F1 (micro):
    • 90%: [0.9185189408755685, 0.9309806929048586]
    • 95%: [0.9174010190551032, 0.9318590917100465]
  • F1 (macro):
    • 90%: [0.9185189408755685, 0.9309806929048586]
    • 95%: [0.9174010190551032, 0.9318590917100465]

The full evaluation can be found in the metric file of NER and the metric file of entity span.

Usage

This model can be used through the tner library. Install the library via pip:

pip install tner

Then activate the model as follows:

from tner import TransformersNER
# Load the fine-tuned checkpoint from the Hugging Face Hub
model = TransformersNER("tner/roberta-large-conll2003")
# Run NER prediction on a list of input sentences
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])

The model can also be used via the transformers library, but this is not recommended since the CRF layer is not supported at the moment.
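
For reference, a minimal sketch of loading the checkpoint with the standard transformers token-classification pipeline is shown below; note that this path bypasses the CRF layer, so predictions may differ from the tner output.

from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Load the tokenizer and token-classification head directly from the Hub
tokenizer = AutoTokenizer.from_pretrained("tner/roberta-large-conll2003")
model = AutoModelForTokenClassification.from_pretrained("tner/roberta-large-conll2003")

# "simple" aggregation groups sub-word pieces into entity spans
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
ner("Jacob Collier is a Grammy awarded English artist from London")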

Training hyperparameters

The following hyperparameters were used during fine-tuning:

  • dataset: ['tner/conll2003']
  • dataset_split: train
  • dataset_name: None
  • local_dataset: None
  • model: roberta-large
  • crf: True
  • max_length: 128
  • epoch: 17
  • batch_size: 64
  • lr: 1e-05
  • random_seed: 42
  • gradient_accumulation_steps: 1
  • weight_decay: None
  • lr_warmup_step_ratio: 0.1
  • max_grad_norm: 10.0

The full configuration can be found in the fine-tuning parameter file.
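
As an illustration only, the hyperparameters listed above could be assembled into a training call along the following lines. This is a sketch: the Trainer class and its argument names follow the T-NER repository's documented interface but are assumptions here, not the exact command used to produce this checkpoint.

# Hypothetical sketch of reproducing the configuration with the tner Trainer
# (class and argument names are assumptions based on the T-NER repository).
from tner import Trainer

trainer = Trainer(
    checkpoint_dir="./ckpt_conll2003",  # output directory (assumed argument)
    dataset=["tner/conll2003"],
    model="roberta-large",
    crf=True,
    max_length=128,
    epoch=17,
    batch_size=64,
    lr=1e-05,
    random_seed=42,
    gradient_accumulation_steps=1,
    weight_decay=None,
    lr_warmup_step_ratio=0.1,
    max_grad_norm=10.0,
)
trainer.train()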

Reference

If you use any resource from T-NER, please consider citing our paper.

@inproceedings{ushio-camacho-collados-2021-ner,
    title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
    author = "Ushio, Asahi  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.7",
    doi = "10.18653/v1/2021.eacl-demos.7",
    pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}