Model:

intfloat/simlm-msmarco-reranker

English

SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval

Paper available at https://arxiv.org/pdf/2207.02578

Code available at https://github.com/microsoft/unilm/tree/master/simlm

Paper abstract

In this paper, we propose SimLM (Similarity matching with Language Model pre-training), a simple yet effective pre-training method for dense passage retrieval. It employs a simple bottleneck architecture that compresses passage information into a dense vector through self-supervised pre-training. We use a replaced language modeling objective, inspired by ELECTRA, to improve sample efficiency and to reduce the input distribution mismatch between pre-training and fine-tuning. SimLM only requires access to an unlabeled corpus, and is more broadly applicable when labeled data or queries are unavailable. We conduct experiments on several large-scale passage retrieval datasets, and show substantial improvements over strong baselines under various settings. Remarkably, SimLM even outperforms multi-vector approaches such as ColBERTv2, which incurs significantly more storage cost.
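To make the dense retrieval setting concrete, here is a minimal sketch of how a bottleneck-style retriever scores passages: each text is compressed into a single dense vector, and relevance is an inner product between vectors. The dimension and the random vectors below are placeholders for illustration, not SimLM's actual outputs.

import torch

# Stand-in dense vectors; a real retriever would produce them with the
# pre-trained encoder (e.g. a bottleneck [CLS] representation).
query_vec = torch.randn(768)           # one dense vector for the query
passage_vecs = torch.randn(1000, 768)  # one dense vector per passage

# Relevance is a dot product; the top-k passages become candidates
# that a re-ranker such as this model can then re-score.
scores = passage_vecs @ query_vec
print(torch.topk(scores, k=10).indices)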

Results on the MS-MARCO passage ranking task

Model               dev MRR@10   dev R@50   dev R@1k   TREC DL 2019 nDCG@10   TREC DL 2020 nDCG@10
SimLM (this model)  43.8         89.2       98.6       74.6                   72.7

Usage

Since we train the re-ranker with a listwise loss, the relevance scores are not bounded to any particular numerical range; a higher score simply means the query and passage are more relevant to each other. (A batch re-ranking sketch follows the single-pair example below.)

To get relevance scores from our re-ranker:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, BatchEncoding, PreTrainedTokenizerFast
from transformers.modeling_outputs import SequenceClassifierOutput

def encode(tokenizer: PreTrainedTokenizerFast,
           query: str, passage: str, title: str = '-') -> BatchEncoding:
    # Pair the query with a "title: passage" string, truncating to 192 tokens.
    return tokenizer(query,
                     text_pair='{}: {}'.format(title, passage),
                     max_length=192,
                     padding=True,
                     truncation=True,
                     return_tensors='pt')

# Load the fine-tuned re-ranker and switch it to inference mode.
tokenizer = AutoTokenizer.from_pretrained('intfloat/simlm-msmarco-reranker')
model = AutoModelForSequenceClassification.from_pretrained('intfloat/simlm-msmarco-reranker')
model.eval()

with torch.no_grad():
    # A passage that answers the query: expect a relatively high score.
    batch_dict = encode(tokenizer, 'how long is super bowl game', 'The Super Bowl is typically four hours long. The game itself takes about three and a half hours, with a 30 minute halftime show built in.')
    outputs: SequenceClassifierOutput = model(**batch_dict, return_dict=True)
    print(outputs.logits[0])

    # An off-topic passage (commercial pricing, not game length): expect a lower score.
    batch_dict = encode(tokenizer, 'how long is super bowl game', 'The cost of a Super Bowl commercial runs about $5 million for 30 seconds of airtime. But the benefits that the spot can bring to a brand can help to justify the cost.')
    outputs: SequenceClassifierOutput = model(**batch_dict, return_dict=True)
    print(outputs.logits[0])
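Because the scores are only meaningful relative to one another, a common usage pattern is to score all candidate passages for a query in a single padded batch and sort them. Below is a minimal sketch reusing the tokenizer and model loaded above; rerank is a hypothetical helper written for this card, not part of the released code.

def rerank(query: str, passages: list, title: str = '-') -> list:
    # Tokenize all (query, "title: passage") pairs as one padded batch.
    batch_dict = tokenizer([query] * len(passages),
                           text_pair=['{}: {}'.format(title, p) for p in passages],
                           max_length=192,
                           padding=True,
                           truncation=True,
                           return_tensors='pt')
    with torch.no_grad():
        scores = model(**batch_dict, return_dict=True).logits.squeeze(-1)
    # Higher score = more relevant; sort candidates in descending order.
    order = scores.argsort(descending=True).tolist()
    return [(passages[i], scores[i].item()) for i in order]

ranked = rerank('how long is super bowl game',
                ['The Super Bowl is typically four hours long.',
                 'The cost of a Super Bowl commercial runs about $5 million.'])
for passage, score in ranked:
    print(round(score, 3), passage)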

Citation

@article{Wang2022SimLMPW,
  title={SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval},
  author={Liang Wang and Nan Yang and Xiaolong Huang and Binxing Jiao and Linjun Yang and Daxin Jiang and Rangan Majumder and Furu Wei},
  journal={ArXiv},
  year={2022},
  volume={abs/2207.02578}
}