Model:
optimum/bert-base-NER
bert-base-NER is a fine-tuned BERT model that is ready to use for Named Entity Recognition and achieves state-of-the-art performance for the NER task. It has been trained to recognize four types of entities: location (LOC), organization (ORG), person (PER), and miscellaneous (MISC).
Specifically, this model is a bert-base-cased model that was fine-tuned on the English version of the standard CoNLL-2003 Named Entity Recognition dataset.
If you'd like to use a larger BERT-large model fine-tuned on the same dataset, a bert-large-NER version is also available.
You can use this model with the Transformers NER pipeline:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# Load the fine-tuned model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")

# Build a token-classification pipeline
nlp = pipeline("ner", model=model, tokenizer=tokenizer)

example = "My name is Wolfgang and I live in Berlin"
ner_results = nlp(example)
# Prints one dict per tagged token, with the entity label, word, and score
print(ner_results)
```

# Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. It may not generalize well to all use cases in other domains. Furthermore, the model occasionally tags subword tokens as entities, so post-processing of the results may be needed to handle those cases.
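One common mitigation for the subword issue is the pipeline's `aggregation_strategy` argument, which merges word pieces back into whole-word entity groups. A minimal sketch, assuming a Transformers version recent enough to support this option:

```python
from transformers import pipeline

# aggregation_strategy="simple" merges subword pieces into whole-word
# entity groups, so partial tokens are not reported as separate entities
nlp = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

print(nlp("My name is Wolfgang and I live in Berlin"))
# Expected shape: [{'entity_group': 'PER', 'word': 'Wolfgang', ...},
#                  {'entity_group': 'LOC', 'word': 'Berlin', ...}]
```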
The model was fine-tuned on the English version of the standard CoNLL-2003 Named Entity Recognition dataset.
The training dataset distinguishes between the beginning and the continuation of an entity, so that if there are back-to-back entities of the same type, the model can mark where the second entity begins. As in the dataset, each token is classified as one of the following classes (a small grouping sketch follows the table below):
Abbreviation | Description |
---|---|
O | Outside of a named entity |
B-MIS | Beginning of a miscellaneous entity right after another miscellaneous entity |
I-MIS | Miscellaneous entity |
B-PER | Beginning of a person’s name right after another person’s name |
I-PER | Person’s name |
B-ORG | Beginning of an organization right after another organization |
I-ORG | Organization |
B-LOC | Beginning of a location right after another location |
I-LOC | Location |
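To make the B-/I- scheme concrete, here is a minimal sketch (a hypothetical helper, not part of the model card) that groups a sequence of (token, tag) pairs into entity spans, starting a new span whenever a B- tag or a change of entity type appears:

```python
def group_entities(tokens, tags):
    """Group (token, IOB tag) pairs into (entity_type, text) spans."""
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag == "O":
            if current:
                spans.append(current)
            current = None
            continue
        prefix, etype = tag.split("-")
        if current and prefix == "I" and current[0] == etype:
            # Continuation of the current entity
            current = (etype, current[1] + " " + token)
        else:
            # A B- tag, or a change of type, starts a new span
            if current:
                spans.append(current)
            current = (etype, token)
    if current:
        spans.append(current)
    return spans

# "Paris" and "Berlin" are back-to-back LOC entities, so the second is B-LOC
print(group_entities(
    ["Paris", "Berlin", "is", "Angela", "Merkel"],
    ["I-LOC", "B-LOC", "O", "I-PER", "I-PER"],
))
# [('LOC', 'Paris'), ('LOC', 'Berlin'), ('PER', 'Angela Merkel')]
```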
The dataset was derived from the Reuters corpus, which consists of Reuters news stories. You can read more about how the dataset was created in the CoNLL-2003 paper.
# Number of training examples per entity type

Dataset | LOC | MISC | ORG | PER |
---|---|---|---|---|
Train | 7140 | 3438 | 6321 | 6600 |
Dev | 1837 | 922 | 1341 | 1842 |
Test | 1668 | 702 | 1661 | 1617 |
# Number of articles, sentences, and tokens per dataset split

Dataset | Articles | Sentences | Tokens |
---|---|---|---|
Train | 946 | 14,987 | 203,621 |
Dev | 216 | 3,466 | 51,362 |
Test | 231 | 3,684 | 46,435 |
The model was trained on a single NVIDIA V100 GPU with the recommended hyperparameters from the original BERT paper, which trained and evaluated the model on the CoNLL-2003 NER task.
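The exact training script is not included in this card. As a rough sketch, a comparable Trainer configuration with values drawn from the ranges recommended in the BERT paper (the specific numbers below are assumptions, not the reported settings) might look like:

```python
from transformers import TrainingArguments

# Hyperparameter ranges suggested by the original BERT paper; the exact
# values used to train bert-base-NER are not documented in this card.
training_args = TrainingArguments(
    output_dir="bert-base-NER",
    learning_rate=3e-5,              # paper suggests {5e-5, 3e-5, 2e-5}
    per_device_train_batch_size=32,  # paper suggests {16, 32}
    num_train_epochs=3,              # paper suggests {2, 3, 4}
)
```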
# Eval results

metric | dev | test |
---|---|---|
f1 | 95.1 | 91.3 |
precision | 95.0 | 90.7 |
recall | 95.3 | 91.9 |
The test metrics are slightly lower than the official Google BERT results, which encoded document context and experimented with a CRF. More information on replicating the original results here.
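These are entity-level scores. As a sketch of how such scores are computed (assuming the seqeval package, the standard scorer for CoNLL-style tag sequences):

```python
from seqeval.metrics import f1_score, precision_score, recall_score

# Gold and predicted tag sequences, one list per sentence
y_true = [["O", "I-PER", "I-PER", "O", "I-LOC"]]
y_pred = [["O", "I-PER", "I-PER", "O", "O"]]

print(precision_score(y_true, y_pred))  # 1.0: the predicted PER span is correct
print(recall_score(y_true, y_pred))     # 0.5: the LOC span was missed
print(f1_score(y_true, y_pred))         # harmonic mean, about 0.667
```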
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
  author        = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova},
  title         = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding},
  journal       = {CoRR},
  volume        = {abs/1810.04805},
  year          = {2018},
  url           = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint        = {1810.04805},
  timestamp     = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```

```bibtex
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
  title     = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
  author    = "Tjong Kim Sang, Erik F. and De Meulder, Fien",
  booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
  year      = "2003",
  url       = "https://www.aclweb.org/anthology/W03-0419",
  pages     = "142--147",
}
```