Model:
mrm8488/spanbert-finetuned-squadv1
Created by Facebook Research and fine-tuned on SQuAD 1.1 for the Q&A downstream task.
SpanBERT is a pre-training method designed to better represent and predict spans of text.
SpanBERT: Improving Pre-training by Representing and Predicting Spans
SQuAD 1.1 contains 100,000+ question-answer pairs on 500+ articles.
Dataset | Split | # samples |
---|---|---|
SQuAD1.1 | train | 87.7k |
SQuAD1.1 | eval | 10.6k |
The model was trained on a Tesla P100 GPU with 25 GB of RAM. The script for fine-tuning can be found here.
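The linked script is the authoritative reference for how the model was actually fine-tuned. The snippet below is only a minimal sketch of this kind of fine-tuning run, assuming the `datasets`/`transformers` Trainer API, the `SpanBERT/spanbert-base-cased` base checkpoint, and illustrative hyperparameters (batch size, learning rate, epochs) that are not taken from the real training run.

```python
# Hypothetical, simplified fine-tuning sketch (not the linked script).
from datasets import load_dataset
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    default_data_collator,
)

base_checkpoint = "SpanBERT/spanbert-base-cased"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(base_checkpoint)

squad = load_dataset("squad")  # SQuAD v1.1 train/validation splits

def preprocess(examples):
    # Tokenize question/context pairs and convert the character-level answer
    # span into the start/end token positions expected by the QA head.
    tokenized = tokenizer(
        examples["question"],
        examples["context"],
        truncation="only_second",
        max_length=384,
        padding="max_length",
        return_offsets_mapping=True,
    )
    start_positions, end_positions = [], []
    for i, offsets in enumerate(tokenized["offset_mapping"]):
        answer = examples["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        sequence_ids = tokenized.sequence_ids(i)

        # Locate the token range that belongs to the context.
        idx = 0
        while sequence_ids[idx] != 1:
            idx += 1
        context_start = idx
        while idx < len(sequence_ids) and sequence_ids[idx] == 1:
            idx += 1
        context_end = idx - 1

        if offsets[context_start][0] > start_char or offsets[context_end][1] < end_char:
            # The answer was truncated away; point both labels at the [CLS] token.
            start_positions.append(0)
            end_positions.append(0)
        else:
            idx = context_start
            while idx <= context_end and offsets[idx][0] <= start_char:
                idx += 1
            start_positions.append(idx - 1)
            idx = context_end
            while idx >= context_start and offsets[idx][1] >= end_char:
                idx -= 1
            end_positions.append(idx + 1)

    tokenized["start_positions"] = start_positions
    tokenized["end_positions"] = end_positions
    tokenized.pop("offset_mapping")
    return tokenized

train_dataset = squad["train"].map(
    preprocess, batched=True, remove_columns=squad["train"].column_names
)

args = TrainingArguments(
    output_dir="spanbert-finetuned-squadv1",
    per_device_train_batch_size=16,  # illustrative hyperparameters only
    learning_rate=3e-5,
    num_train_epochs=2,
    weight_decay=0.01,
)

Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=default_data_collator,
    tokenizer=tokenizer,
).train()
```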
Metric | # Value |
---|---|
EM | 85.49 |
F1 | 91.98 |
{ "exact": 85.49668874172185, "f1": 91.9845699540379, "total": 10570, "HasAns_exact": 85.49668874172185, "HasAns_f1": 91.9845699540379, "HasAns_total": 10570, "best_exact": 85.49668874172185, "best_exact_thresh": 0.0, "best_f1": 91.9845699540379, "best_f1_thresh": 0.0 }
Model | EM | F1 score |
---|---|---|
SpanBERT (official repo) | - | 92.4* |
spanbert-finetuned-squadv1 (this one) | 85.49 | 91.98 |
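The EM/F1 numbers above follow the standard SQuAD metric. Below is a minimal sketch (an assumed workflow, not the original evaluation script) of scoring a single prediction with the `evaluate` library; the example question, reference answer, and ID are made up for illustration.

```python
# Hypothetical example of computing SQuAD exact match / F1 for one prediction.
import evaluate
from transformers import pipeline

qa = pipeline("question-answering", model="mrm8488/spanbert-finetuned-squadv1")
squad_metric = evaluate.load("squad")

context = "SQuAD 1.1 contains 100,000+ question-answer pairs on 500+ articles."
question = "How many question-answer pairs does SQuAD 1.1 contain?"
prediction = qa(question=question, context=context)

results = squad_metric.compute(
    predictions=[{"id": "0", "prediction_text": prediction["answer"]}],
    references=[{"id": "0", "answers": {"text": ["100,000+"], "answer_start": [19]}}],
)
print(results)  # dict with 'exact_match' and 'f1' keys
```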
Fast usage with pipelines:
```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="mrm8488/spanbert-finetuned-squadv1",
    tokenizer="mrm8488/spanbert-finetuned-squadv1"
)

qa_pipeline({
    'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately",
    'question': "Who has been working hard for hugginface/transformers lately?"
})
```
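The model can also be used without the pipeline helper. The following is a minimal sketch (assuming PyTorch and the `AutoModelForQuestionAnswering`/`AutoTokenizer` classes) that extracts the highest-scoring answer span directly from the logits:

```python
# Hypothetical direct-inference sketch without the pipeline helper.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "mrm8488/spanbert-finetuned-squadv1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Who has been working hard for hugginface/transformers lately?"
context = "Manuel Romero has been working hardly in the repository hugginface/transformers lately"

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the highest-scoring start/end positions and decode the span between them.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits)
answer_ids = inputs["input_ids"][0][start : end + 1]
print(tokenizer.decode(answer_ids))
```

The pipeline performs roughly the same steps plus extra post-processing (long-context handling and answer scoring), so it is the more convenient entry point for most uses.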
Created by Manuel Romero/@mrm8488 | LinkedIn
Made with ♥ in Spain