Model:
valhalla/longformer-base-4096-finetuned-squadv1
This is the LONGFORMER-BASE-4096 model fine-tuned on the SQuAD v1 dataset for the question-answering task.
Longformer was created by Iz Beltagy, Matthew E. Peters, and Arman Cohan of AllenAI. As the paper explains,
Longformer is a BERT-like model for long documents.
The pre-trained model can handle sequences up to 4096 tokens long.
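As a quick sanity check on that limit, the tokenizer exposes it via the standard transformers model_max_length attribute; the sketch below assumes the hosted tokenizer config sets this to 4096 for this checkpoint.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
print(tokenizer.model_max_length)  # expected 4096, assuming the tokenizer config sets it

# longer inputs can simply be truncated to the model's limit at encoding time
enc = tokenizer("a very long document " * 2000, truncation=True, max_length=4096)
print(len(enc["input_ids"]))  # 4096
```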
This model was trained on a Google Colab V100 GPU. You can find the fine-tuning Colab here.
There are a few things to keep in mind when training Longformer for a question-answering task. By default, Longformer uses sliding-window local attention on all tokens, but for QA all question tokens should have global attention; please refer to the paper for details. The LongformerForQuestionAnswering model handles this for you automatically. To allow it to do that, the input sequence must contain three separator tokens, i.e. it should be encoded as <s> question</s></s> context</s> (if you encode the question and context as an input pair, the tokenizer already takes care of this), and input_ids should always be a batch of examples. The sketch below illustrates what this amounts to.
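For illustration, here is a minimal sketch of setting that global attention by hand via the global_attention_mask argument. The mask construction (marking everything up to the first </s>, i.e. the question tokens, as global) is my reading of the behavior described above; when no mask is passed, LongformerForQuestionAnswering derives one itself, so in normal use you would not write this.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")

question = "What has Huggingface done ?"
context = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
# encoding the pair yields the required <s> question</s></s> context</s> layout
encoding = tokenizer(question, context, return_tensors="pt")
input_ids = encoding["input_ids"]  # already a batch of one example

# 1 = global attention, 0 = sliding-window local attention
global_attention_mask = torch.zeros_like(input_ids)
# give every token up to and including the first </s> (the question) global attention
first_sep = torch.where(input_ids[0] == tokenizer.sep_token_id)[0][0].item()
global_attention_mask[0, : first_sep + 1] = 1

outputs = model(input_ids, global_attention_mask=global_attention_mask)
start_logits, end_logits = outputs.start_logits, outputs.end_logits
```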
| Metric | Value |
|---|---|
| Exact Match | 85.1466 |
| F1 | 91.5415 |
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")

text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done ?"
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]

# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]

outputs = model(input_ids, attention_mask=attention_mask)
start_scores, end_scores = outputs.start_logits, outputs.end_logits

all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores) + 1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# output => democratized NLP
```
LongformerForQuestionAnswering is not yet supported in pipeline. I will update this card once support is added.
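For reference, if and when that support is added, usage would presumably reduce to the standard question-answering pipeline call. The snippet below is a speculative sketch of that, not something this card promises works today.

```python
from transformers import pipeline

# speculative sketch: assumes the question-answering pipeline accepts this model
qa = pipeline("question-answering", model="valhalla/longformer-base-4096-finetuned-squadv1")
result = qa(
    question="What has Huggingface done ?",
    context="Huggingface has democratized NLP. Huge thanks to Huggingface for this.",
)
print(result["answer"])  # expected: democratized NLP
```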