multi-qa-mpnet-base-cos-v1

This is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and was designed for semantic search. It has been trained on 215M (question, answer) pairs from diverse sources. For an introduction to semantic search, see: SBERT.net - Semantic Search

Usage (Sentence-Transformers)

Using this model becomes easy when you have sentence-transformers installed:

pip install -U sentence-transformers

Then you can use the model like this:

from sentence_transformers import SentenceTransformer, util

query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

#Load the model
model = SentenceTransformer('sentence-transformers/multi-qa-mpnet-base-cos-v1')

#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)

#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()

#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

#Output passages & scores
for doc, score in doc_score_pairs:
    print(score, doc)
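
As an alternative to combining and sorting the (doc, score) pairs by hand, sentence-transformers also provides the util.semantic_search helper, which performs the same ranking. A minimal sketch, reusing the model, query_emb, doc_emb and docs from the snippet above:

# Rank the documents with the built-in helper instead of sorting manually.
# It uses cosine similarity by default, which is equivalent to dot-product here
# because this model produces normalized embeddings.
hits = util.semantic_search(query_emb, doc_emb, top_k=2)[0]

for hit in hits:
    print(hit['score'], docs[hit['corpus_id']])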

Usage (HuggingFace Transformers)

Without sentence-transformers, you can use the model like this: first, you pass your input through the transformer model, then you have to apply the correct pooling operation on top of the contextualized word embeddings.

from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

#Mean Pooling - Take average of all tokens
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output.last_hidden_state #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


#Encode text
def encode(texts):
    # Tokenize sentences
    encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')

    # Compute token embeddings
    with torch.no_grad():
        model_output = model(**encoded_input, return_dict=True)

    # Perform pooling
    embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

    # Normalize embeddings
    embeddings = F.normalize(embeddings, p=2, dim=1)
    
    return embeddings


# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-mpnet-base-cos-v1")
model = AutoModel.from_pretrained("sentence-transformers/multi-qa-mpnet-base-cos-v1")

#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)

#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()

#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

#Output passages & scores
for doc, score in doc_score_pairs:
    print(score, doc)

Technical Details

In the following you find some technical details on how this model must be used:

| Setting | Value |
| --- | --- |
| Dimensions | 768 |
| Produces normalized embeddings | Yes |
| Pooling method | Mean pooling |
| Suitable score functions | dot-product (util.dot_score), cosine-similarity (util.cos_sim), or euclidean distance |

Note: When loaded with sentence-transformers, this model produces normalized embeddings with length 1. In that case, dot-product and cosine-similarity are equivalent. Dot-product is preferred as it is faster. Euclidean distance is proportional to dot-product and can also be used.
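
Because the embeddings are normalized, you can check this equivalence directly. A minimal sketch, reusing query_emb and doc_emb from either usage example above:

import torch
from sentence_transformers import util

# For L2-normalized embeddings, both score functions produce the same values
dot = util.dot_score(query_emb, doc_emb)
cos = util.cos_sim(query_emb, doc_emb)
print(torch.allclose(dot, cos, atol=1e-5))  # expected: True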

Background

The project aims to train sentence embedding models on very large, sentence-level datasets using a self-supervised contrastive learning objective. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
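
To make the objective concrete, the sketch below scores every query in a batch against every answer in the same batch and treats the matching (diagonal) answer as the correct class. This is only an illustration of the in-batch-negatives idea, not the project's actual training code; the function name, batch size, and scale value are made up for the example:

import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb, answer_emb, scale=20.0):
    """query_emb, answer_emb: (batch_size, dim) embeddings of paired texts."""
    # Cosine-similarity score matrix: entry (i, j) scores query i against answer j
    scores = F.cosine_similarity(query_emb.unsqueeze(1), answer_emb.unsqueeze(0), dim=-1) * scale
    # The correct answer for query i is answer i (the diagonal)
    labels = torch.arange(scores.size(0))
    return F.cross_entropy(scores, labels)

# Toy usage with random embeddings standing in for model outputs
loss = in_batch_contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))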

We developed this model during the Community week using JAX/Flax for NLP & CV, organized by Hugging Face, as part of the project Train the Best Sentence Embedding Model Ever with 1B Training Pairs. We benefited from efficient hardware infrastructure to run the project (7 TPU v3-8), as well as intervention from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.

Intended uses

Our model is intended to be used for semantic search: it encodes queries / questions and text paragraphs in a dense vector space. It finds relevant documents for the given passages.

Note that there is a limit of 512 word pieces: text longer than that will be truncated. Further note that the model was only trained on input text up to 250 word pieces; it might not work well for longer text.
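
If you load the model via sentence-transformers, you can inspect this limit and, if desired, lower it through the model's max_seq_length attribute. A minimal sketch:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/multi-qa-mpnet-base-cos-v1')
print(model.max_seq_length)  # maximum number of word pieces; longer inputs are truncated

# Optionally stay closer to the length the model was trained on (up to 250 word pieces)
model.max_seq_length = 250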

Training procedure

The full training script is available in this repository: train_script.py.

Pre-training

We use the pretrained mpnet-base model. Please refer to its model card for more detailed information about the pre-training procedure.

Training

We use the concatenation of multiple datasets to fine-tune our model. In total we have about 215M (question, answer) pairs. We sampled each dataset with a weighted probability; the configuration is detailed in the data_config.json file.
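
As an illustration of such weighted sampling (the real weights live in data_config.json and are not reproduced here), the source of a training batch could be drawn like this; the dataset names and weights below are purely hypothetical:

import random

# Hypothetical per-dataset sampling weights (the real values are in data_config.json)
dataset_weights = {'wikianswers': 0.30, 'stackexchange': 0.25, 'msmarco': 0.20, 'other': 0.25}

def pick_dataset():
    names, weights = zip(*dataset_weights.items())
    return random.choices(names, weights=weights, k=1)[0]

# Each training batch is drawn from one dataset chosen by weight
print(pick_dataset())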

The model was trained with MultipleNegativesRankingLoss, using mean pooling, cosine-similarity as the similarity function, and a scale of 20.
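
A minimal training sketch with the sentence-transformers API is shown below. It assumes microsoft/mpnet-base as the starting checkpoint and uses a tiny list of InputExample pairs in place of the real 215M-pair data; the actual run used the repository's train_script.py:

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, util

model = SentenceTransformer('microsoft/mpnet-base')  # pretrained encoder with mean pooling added

# Tiny stand-in for the (question, answer) training pairs
train_examples = [
    InputExample(texts=['How many people live in London?', 'Around 9 Million people live in London']),
    InputExample(texts=['What is the capital of France?', 'Paris is the capital of France']),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# MultipleNegativesRankingLoss with cosine similarity and a scale of 20, as stated above
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20, similarity_fct=util.cos_sim)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=0)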

| Dataset | Number of training tuples |
| --- | --- |
| WikiAnswers: Duplicate question pairs from WikiAnswers | 77,427,422 |
| PAQ: Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 |
| Stack Exchange: (Title, Body) pairs from all StackExchanges | 25,316,456 |
| Stack Exchange: (Title, Answer) pairs from all StackExchanges | 21,396,559 |
| MS MARCO: Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 |
| GOOAQ: (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 |
| Amazon-QA: (Question, Answer) pairs from Amazon product pages | 2,448,839 |
| Yahoo Answers: (Title, Answer) pairs from Yahoo Answers | 1,198,260 |
| Yahoo Answers: (Question, Answer) pairs from Yahoo Answers | 681,164 |
| Yahoo Answers: (Title, Question) pairs from Yahoo Answers | 659,896 |
| SearchQA: (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 |
| ELI5: (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 |
| Stack Exchange: Duplicate question pairs (titles) | 304,525 |
| Quora Question Triplets: (Question, Duplicate_Question, Hard_Negative) triplets for Quora Question Pairs dataset | 103,663 |
| Natural Questions (NQ): (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 |
| SQuAD2.0: (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 |
| TriviaQA: (Question, Evidence) pairs | 73,346 |
| Total | 214,988,242 |