Model:

flax-sentence-embeddings/st-codesearch-distilroberta-base


This is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

It was trained on the code_search_net dataset and can be used to search program code with natural-language queries.

Usage:

from sentence_transformers import SentenceTransformer, util


# This list defines the different program code snippets
code = ["""def sort_list(x):
   return sorted(x)""",
"""def count_above_threshold(elements, threshold=0):
    counter = 0
    for e in elements:
        if e > threshold:
            counter += 1
    return counter""",
"""def find_min_max(elements):
    min_ele = 99999
    max_ele = -99999
    for e in elements:
        if e < min_ele:
            min_ele = e
        if e > max_ele:
            max_ele = e
    return min_ele, max_ele"""]
    

model = SentenceTransformer("flax-sentence-embeddings/st-codesearch-distilroberta-base")

# Encode our code into the vector space
code_emb = model.encode(code, convert_to_tensor=True)

# Interactive demo: Enter queries, and the method returns the best function from the 
# 3 functions we defined
while True:
    query = input("Query: ")
    query_emb = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, code_emb)[0]
    top_hit = hits[0]

    print("Cossim: {:.2f}".format(top_hit['score']))
    print(code[top_hit['corpus_id']])
    print("\n\n")
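If you prefer a non-interactive run, the same search works with a fixed query string. A minimal sketch reusing the model and code_emb objects from above; the query text is only an example:

query = "count how many elements are above a threshold"
query_emb = model.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, code_emb)[0]

# Print the best-matching function and its score
print("Cossim: {:.2f}".format(hits[0]['score']))
print(code[hits[0]['corpus_id']])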

Usage (Sentence-Transformers)

Using this model becomes easy when you have sentence-transformers installed:

pip install -U sentence-transformers

Then you can use the model like this:

from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('flax-sentence-embeddings/st-codesearch-distilroberta-base')
embeddings = model.encode(sentences)
print(embeddings)
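The returned embeddings contain one 768-dimensional vector per input sentence. Because the model ends with a Normalize() module (see the full architecture below), each vector should have unit length, so dot product and cosine similarity coincide. A small sanity check, assuming numpy is installed:

import numpy as np

print(embeddings.shape)                     # expected: (2, 768)
print(np.linalg.norm(embeddings, axis=1))   # expected: values close to 1.0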

Training

The model was trained from a DistilRoBERTa-base checkpoint for 10k training steps with a batch size of 256 and MultipleNegativesRankingLoss.

This is a preliminary model: it has not been evaluated, and the training setup is not particularly sophisticated.

The model was trained with the following parameters:

DataLoader:

MultiDatasetDataLoader.MultiDatasetDataLoader of length 5371 with parameters:

{'batch_size': 256}

Loss:

sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss with parameters:

{'scale': 20, 'similarity_fct': 'dot_score'}

Parameters of the fit() method:

{
    "callback": null,
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "warmupconstant",
    "steps_per_epoch": 10000,
    "warmup_steps": 500,
    "weight_decay": 0.01
}
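Put together, the configuration above roughly corresponds to a sentence-transformers fit() call like the one below. This is an illustrative reconstruction, not the original training script: the toy DataLoader stands in for the MultiDatasetDataLoader over code_search_net, and the (text, code) pairs are made up:

from sentence_transformers import SentenceTransformer, InputExample, losses, util
from torch.utils.data import DataLoader

# A couple of illustrative (text, code) training pairs; the real data is code_search_net
train_examples = [
    InputExample(texts=["sort a list", "def sort_list(x):\n    return sorted(x)"]),
    InputExample(texts=["find the minimum and maximum", "def find_min_max(elements): ..."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)  # the real run used batch size 256

model = SentenceTransformer("distilroberta-base")  # starting checkpoint; the released model also normalizes outputs
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20, similarity_fct=util.dot_score)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    steps_per_epoch=10000,   # as listed above; reduce for a quick test
    warmup_steps=500,
    scheduler="warmupconstant",
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)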

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
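The same module stack can be assembled explicitly from sentence-transformers building blocks. A sketch matching the listed configuration, starting from the distilroberta-base checkpoint:

from sentence_transformers import SentenceTransformer, models

word_embedding_model = models.Transformer("distilroberta-base", max_seq_length=128)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 768
    pooling_mode_mean_tokens=True,
)
normalize = models.Normalize()

model = SentenceTransformer(modules=[word_embedding_model, pooling_model, normalize])
print(model)  # should mirror the architecture printed above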

Citing & Authors