Model:
shibing624/text2vec-base-multilingual

This is a CoSENT (Cosine Sentence) model: shibing624/text2vec-base-multilingual.
It maps sentences to a 384-dimensional dense vector space and can be used for tasks such as sentence embedding, text matching, or semantic search.
For automated evaluation of this model, see the evaluation benchmark: text2vec
Supported languages: de, en, es, fr, it, nl, pl, pt, ru, zh
Evaluation results (Spearman correlation on each test set, higher is better; QPS = encoding queries per second):

| Arch | BaseModel | Model | ATEC | BQ | LCQMC | PAWSX | STS-B | SOHU-dd | SOHU-dc | Avg | QPS |
|:-----|:----------|:------|:-----|:---|:------|:------|:------|:--------|:--------|:----|:----|
| Word2Vec | word2vec | w2v-light-tencent-chinese | 20.00 | 31.49 | 59.46 | 2.57 | 55.78 | 55.04 | 20.70 | 35.03 | 23769 |
| SBERT | xlm-roberta-base | sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | 18.42 | 38.52 | 63.96 | 10.14 | 78.90 | 63.01 | 52.28 | 46.46 | 3138 |
| Instructor | hfl/chinese-roberta-wwm-ext | moka-ai/m3e-base | 41.27 | 63.81 | 74.87 | 12.20 | 76.96 | 75.83 | 60.55 | 57.93 | 2980 |
| CoSENT | hfl/chinese-macbert-base | shibing624/text2vec-base-chinese | 31.93 | 42.67 | 70.16 | 17.21 | 79.30 | 70.27 | 50.42 | 51.61 | 3008 |
| CoSENT | hfl/chinese-lert-large | shibing624/text2vec-large-chinese | 32.61 | 44.59 | 69.30 | 14.51 | 79.44 | 73.01 | 59.04 | 53.12 | 2092 |
| CoSENT | nghuyong/ernie-3.0-base-zh | shibing624/text2vec-base-chinese-sentence | 43.37 | 61.43 | 73.48 | 38.90 | 78.25 | 70.60 | 53.08 | 59.87 | 3089 |
| CoSENT | nghuyong/ernie-3.0-base-zh | shibing624/text2vec-base-chinese-paraphrase | 44.89 | 63.58 | 74.24 | 40.90 | 78.93 | 76.70 | 63.30 | 63.08 | 3066 |
| CoSENT | sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | shibing624/text2vec-base-multilingual | 32.39 | 50.33 | 65.64 | 32.56 | 74.45 | 68.88 | 51.17 | 53.67 | 4004 |
Notes:
Model training experiment report: experiment report

Using this model requires installing text2vec:

```bash
pip install -U text2vec
```

Then you can use the model like this:
```python
from text2vec import SentenceModel

sentences = ['如何更换花呗绑定银行卡', 'How to replace the Huabei bundled bank card']

model = SentenceModel('shibing624/text2vec-base-multilingual')
embeddings = model.encode(sentences)
print(embeddings)
```
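`model.encode` returns one 384-dimensional vector per input sentence. For a quick sanity check, you can compare the two embeddings with plain NumPy cosine similarity (a minimal sketch that reuses the `embeddings` variable from the snippet above):

```python
import numpy as np

# 'embeddings' comes from model.encode(sentences) above: one 384-dim row per sentence
a, b = embeddings[0], embeddings[1]
cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"Cosine similarity: {cos:.4f}")  # the two sentences are translations, so expect a high score
```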
Without text2vec, you can use the model as follows: first, pass the input text through the transformer model, then apply the correct pooling operation on top of the contextualized word embeddings.

Install transformers:

```bash
pip install transformers
```

Then load the model and run inference:
```python
import torch
from transformers import AutoTokenizer, AutoModel


# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('shibing624/text2vec-base-multilingual')
model = AutoModel.from_pretrained('shibing624/text2vec-base-multilingual')

sentences = ['如何更换花呗绑定银行卡', 'How to replace the Huabei bundled bank card']

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
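To compare the pooled embeddings, a common follow-up step (an optional addition continuing the snippet above, not part of the original recipe) is to L2-normalize them so that a plain dot product equals cosine similarity:

```python
import torch.nn.functional as F

# Continues the snippet above: L2-normalize so the dot product of two
# embeddings equals their cosine similarity
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Cosine similarity:", (sentence_embeddings[0] @ sentence_embeddings[1]).item())
```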
sentence-transformers is a popular library for computing dense vector representations of sentences.

Install sentence-transformers:

```bash
pip install -U sentence-transformers
```

Then load the model and run inference:
```python
from sentence_transformers import SentenceTransformer

m = SentenceTransformer("shibing624/text2vec-base-multilingual")
sentences = ['如何更换花呗绑定银行卡', 'How to replace the Huabei bundled bank card']

sentence_embeddings = m.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```
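Because the model loads directly into sentence-transformers, its utility helpers work as well. Here is a small semantic-search sketch (the corpus and query below are invented for illustration; `util.semantic_search` ranks corpus entries by cosine similarity):

```python
from sentence_transformers import SentenceTransformer, util

m = SentenceTransformer("shibing624/text2vec-base-multilingual")

# Hypothetical corpus for illustration
corpus = ['如何更换花呗绑定银行卡', '今天天气怎么样', 'How to replace the Huabei bundled bank card']
corpus_embeddings = m.encode(corpus, convert_to_tensor=True)

query_embedding = m.encode('花呗如何更改绑定的银行卡', convert_to_tensor=True)

# util.semantic_search returns, per query, a ranked list of {'corpus_id', 'score'} dicts
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.4f}  {corpus[hit['corpus_id']]}")
```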
Full model architecture:

```
CoSENT(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_mean_tokens': True})
)
```
Our model is intended to be used as a sentence and short-paragraph encoder. Given an input text, it outputs a vector that captures its semantic information. The sentence vectors can be used for information retrieval, clustering, or sentence-similarity tasks.
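As one example of the clustering use case, here is a minimal sketch with scikit-learn's KMeans (our illustration; the sentences are invented):

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

m = SentenceTransformer("shibing624/text2vec-base-multilingual")

# Invented example sentences: two about bank cards, two about the weather
sentences = ['如何更换花呗绑定银行卡', 'How to replace the Huabei bundled bank card',
             '今天天气怎么样', 'What is the weather like today?']
embeddings = m.encode(sentences)

# Group the sentences into 2 clusters by embedding similarity
cluster_ids = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
for cid, sent in zip(cluster_ids, sentences):
    print(cid, sent)
```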
By default, input text longer than 256 word pieces is truncated.
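When the model is loaded through sentence-transformers, this limit is exposed as the `max_seq_length` attribute and can be inspected or lowered (a small sketch; raising it beyond what the underlying model supports will not add usable context):

```python
from sentence_transformers import SentenceTransformer

m = SentenceTransformer("shibing624/text2vec-base-multilingual")
print(m.max_seq_length)  # 256 word pieces by default
m.max_seq_length = 128   # truncate earlier, trading context for speed
```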
We used the pretrained sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 model; please refer to that model card for details on the pre-training procedure.
We fine-tuned it with a contrastive objective: we compute the cosine similarity of every possible sentence pair in the batch, then apply a ranking loss that compares the similarities of the true (similar) pairs against the false (dissimilar) pairs.
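For intuition, this ranking objective can be written as loss = log(1 + Σ exp(λ(cos(neg) − cos(pos)))), summed over every combination of a positive and a negative pair in the batch. Below is a minimal PyTorch sketch of such a loss (our own illustration, not the exact text2vec training code; the scale λ = 20 is an assumed, commonly used value):

```python
import torch


def cosent_loss(cos_scores: torch.Tensor, labels: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    """Sketch of a CoSENT-style ranking loss.

    cos_scores: (N,) cosine similarity predicted for each sentence pair
    labels:     (N,) 1 for truly similar pairs, 0 for dissimilar ones
    """
    s = cos_scores * scale
    # diff[i, j] = s_i - s_j: penalized when pair j is labeled more similar
    # than pair i, yet the model scored it lower
    diff = s[:, None] - s[None, :]
    keep = labels[:, None] < labels[None, :]
    diff = diff.masked_fill(~keep, -1e12)  # mask out irrelevant combinations
    # prepend a zero so logsumexp yields log(1 + sum of exp terms)
    diff = torch.cat([diff.new_zeros(1), diff.flatten()])
    return torch.logsumexp(diff, dim=0)


# Toy check: both positives scored above the negative -> loss close to zero
print(cosent_loss(torch.tensor([0.9, 0.8, 0.2]), torch.tensor([1, 1, 0])))
```

The prepended zero supplies the "1 +" term inside the log, so a batch where every positive pair already outscores every negative pair yields a loss near zero.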
This model was trained with text2vec.
If you find this model useful, feel free to cite:
```bibtex
@software{text2vec,
  author = {Ming Xu},
  title = {text2vec: A Tool for Text to Vector},
  year = {2023},
  url = {https://github.com/shibing624/text2vec},
}
```