Model:

intfloat/simlm-base-msmarco-finetuned

Language:

en

Other:

bert

Preprint:

arxiv:2207.02578

License:

mit

SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval

The paper is available at https://arxiv.org/pdf/2207.02578

The code is available at https://github.com/microsoft/unilm/tree/master/simlm

Paper abstract

In this paper, we propose SimLM (Similarity matching with Language Model pre-training), a simple yet effective pre-training method for dense passage retrieval. It employs a simple bottleneck architecture that learns to compress passage information into a dense vector through self-supervised pre-training. We use a replaced language modeling objective, inspired by ELECTRA, to improve sample efficiency and reduce the input distribution mismatch between pre-training and fine-tuning. SimLM only requires access to an unlabeled corpus, and is more broadly applicable when no labeled data or queries are available. We conduct experiments on several large-scale passage retrieval datasets and show substantial improvements under various settings. Remarkably, SimLM even outperforms multi-vector approaches such as ColBERTv2, which incurs more storage cost.
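
The bottleneck idea described above can be illustrated with a short conceptual sketch. This is not the authors' implementation: the module depths, sizes, corruption scheme, and class name are illustrative assumptions.

import torch
import torch.nn as nn

# Conceptual sketch only: a deep encoder compresses a corrupted passage into a
# single [CLS] vector, and a deliberately shallow decoder must lean on that
# bottleneck vector to recover the original tokens (a replaced-LM-style objective).
class BottleneckPretrainSketch(nn.Module):
    def __init__(self, vocab_size: int = 30522, dim: int = 768):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=12, batch_first=True),
            num_layers=12)  # deep: does the heavy lifting
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=12, batch_first=True),
            num_layers=2)   # shallow: forces information through the bottleneck
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, corrupted_ids: torch.Tensor, original_ids: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(corrupted_ids))
        cls_vec = h[:, :1, :]  # the dense bottleneck representation
        # The decoder sees only the bottleneck vector plus corrupted token embeddings.
        dec_in = torch.cat([cls_vec, self.embed(corrupted_ids)[:, 1:, :]], dim=1)
        logits = self.lm_head(self.decoder(dec_in))
        # Predict the original tokens at every position.
        return nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)), original_ids.view(-1))

In the actual method the corrupted input comes from an ELECTRA-inspired setup; for this toy sketch, any corruption of original_ids would do.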

Results on the MS-MARCO passage ranking task

| Model | dev MRR@10 | dev R@50 | dev R@1k | TREC DL 2019 nDCG@10 | TREC DL 2020 nDCG@10 |
|---|---|---|---|---|---|
| RocketQAv2 | 38.8 | 86.2 | 98.1 | - | - |
| coCondenser | 38.2 | 86.5 | 98.4 | 71.7 | 68.4 |
| ColBERTv2 | 39.7 | 86.8 | 98.4 | - | - |
| SimLM (this model) | 41.1 | 87.8 | 98.7 | 71.4 | 69.7 |
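
For reference, the dev MRR@10 column above is the mean reciprocal rank of the first relevant passage within each query's top 10 results. A minimal sketch of the metric definition (the function name and inputs are illustrative):

def mrr_at_10(ranked_ids_per_query, relevant_ids_per_query):
    total = 0.0
    for ranked, relevant in zip(ranked_ids_per_query, relevant_ids_per_query):
        for rank, pid in enumerate(ranked[:10], start=1):
            if pid in relevant:
                total += 1.0 / rank
                break  # only the first relevant hit counts
    return total / len(ranked_ids_per_query)

print(mrr_at_10([[3, 7, 1]], [{7}]))  # first hit at rank 2 -> 0.5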

Usage

Get embeddings from our fine-tuned model:

import torch
from transformers import AutoModel, AutoTokenizer, BatchEncoding, PreTrainedTokenizerFast
from transformers.modeling_outputs import BaseModelOutput

def l2_normalize(x: torch.Tensor):
    return torch.nn.functional.normalize(x, p=2, dim=-1)

# Queries are truncated to at most 32 tokens.
def encode_query(tokenizer: PreTrainedTokenizerFast, query: str) -> BatchEncoding:
    return tokenizer(query,
                     max_length=32,
                     padding=True,
                     truncation=True,
                     return_tensors='pt')

# Passages are encoded as a (title, passage) pair; '-' stands in for a missing title.
def encode_passage(tokenizer: PreTrainedTokenizerFast, passage: str, title: str = '-') -> BatchEncoding:
    return tokenizer(title,
                     text_pair=passage,
                     max_length=144,
                     padding=True,
                     truncation=True,
                     return_tensors='pt')

tokenizer = AutoTokenizer.from_pretrained('intfloat/simlm-base-msmarco-finetuned')
model = AutoModel.from_pretrained('intfloat/simlm-base-msmarco-finetuned')
model.eval()

with torch.no_grad():
    query_batch_dict = encode_query(tokenizer, 'what is qa')
    outputs: BaseModelOutput = model(**query_batch_dict, return_dict=True)
    # The embedding is the L2-normalized [CLS] vector from the last layer.
    query_embedding = l2_normalize(outputs.last_hidden_state[0, 0, :])

    psg1 = 'Quality assurance (QA) is a process-centered approach to ensuring that a company or organization is providing the best possible products or services. It is related to quality control, which focuses on the end result, such as testing a sample of items from a batch after production.'
    psg1_batch_dict = encode_passage(tokenizer, psg1)
    outputs: BaseModelOutput = model(**psg1_batch_dict, return_dict=True)
    psg1_embedding = l2_normalize(outputs.last_hidden_state[0, 0, :])

    psg2 = 'The Super Bowl is typically four hours long. The game itself takes about three and a half hours, with a 30 minute halftime show built in.'
    psg2_batch_dict = encode_passage(tokenizer, psg2)
    outputs: BaseModelOutput = model(**psg2_batch_dict, return_dict=True)
    psg2_embedding = l2_normalize(outputs.last_hidden_state[0, 0, :])

    # Higher cosine similarity means they are more relevant
    print(query_embedding.dot(psg1_embedding), query_embedding.dot(psg2_embedding))
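
For retrieval over many candidates, passages can be encoded in one batched forward pass and ranked by their dot product with the query embedding. A minimal sketch building on the variables above (the passage list and top-k value are illustrative):

# Batch-encode several passages at once and rank them against the query.
passages = [psg1, psg2]
with torch.no_grad():
    batch_dict = tokenizer(['-'] * len(passages),  # placeholder titles, as above
                           text_pair=passages,
                           max_length=144,
                           padding=True,
                           truncation=True,
                           return_tensors='pt')
    outputs = model(**batch_dict, return_dict=True)
    # One L2-normalized [CLS] embedding per passage.
    passage_embeddings = l2_normalize(outputs.last_hidden_state[:, 0, :])

scores = passage_embeddings @ query_embedding  # cosine similarities
print(scores.topk(k=2))  # best-matching passages first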

Citation

@article{Wang2022SimLMPW,
  title={SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval},
  author={Liang Wang and Nan Yang and Xiaolong Huang and Binxing Jiao and Linjun Yang and Daxin Jiang and Rangan Majumder and Furu Wei},
  journal={ArXiv},
  year={2022},
  volume={abs/2207.02578}
}