Model:

shibing624/code-autocomplete-distilgpt2-python

English

GPT2 Code AutoComplete Model

code-autocomplete is a code completion plugin for Python.

code-autocomplete can automatically complete code lines and blocks with GPT2.

Usage

Open source repo: code-autocomplete, which supports the GPT2 model. Usage:

from autocomplete.gpt2_coder import GPT2Coder

m = GPT2Coder("shibing624/code-autocomplete-distilgpt2-python")
print(m.generate('import torch.nn as')[0])
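
The same GPT2Coder instance can be reused across several prompts. A minimal sketch using only the generate() API shown above (the prompt strings are illustrative):

from autocomplete.gpt2_coder import GPT2Coder

# Load the model once and reuse it for every prompt.
m = GPT2Coder("shibing624/code-autocomplete-distilgpt2-python")
for prompt in ["import numpy as", "def fibonacci(n):"]:
    # generate() returns a list of candidate completions; take the first one.
    print(m.generate(prompt)[0])
    print("-" * 20)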

Also, with huggingface/transformers:

Please use 'GPT2'-related functions to load this model!

import os
from transformers import GPT2Tokenizer, GPT2LMHeadModel

os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

tokenizer = GPT2Tokenizer.from_pretrained("shibing624/code-autocomplete-distilgpt2-python")
model = GPT2LMHeadModel.from_pretrained("shibing624/code-autocomplete-distilgpt2-python")

prompts = [
    """from torch import nn
    class LSTM(Module):
        def __init__(self, *,
                     n_tokens: int,
                     embedding_size: int,
                     hidden_size: int,
                     n_layers: int):""",
    """import numpy as np
    import torch
    import torch.nn as""",
    "import java.util.ArrayList",
    "def factorial(n):",
]
for prompt in prompts:
    input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors='pt')
    outputs = model.generate(input_ids=input_ids,
                             max_new_tokens=64,  # budget of new tokens after the prompt
                             temperature=1.0,
                             top_k=50,
                             top_p=0.95,
                             repetition_penalty=1.0,
                             do_sample=True,  # sample instead of greedy decoding
                             num_return_sequences=1,
                             pad_token_id=tokenizer.eos_token_id)  # GPT2 has no pad token
    decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(decoded)
    print("=" * 20)

Output:

from torch import nn
    class LSTM(Module):
        def __init__(self, *,
                     n_tokens: int,
                     embedding_size: int,
                     hidden_size: int,
                     n_layers: int):
            self.embedding_size = embedding_size
====================
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
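
Equivalently, the transformers text-generation pipeline wraps tokenization, generation, and decoding in one call. A minimal sketch (the prompt and sampling settings are illustrative):

from transformers import pipeline

# The pipeline loads the tokenizer and model from the Hub and handles decoding.
generator = pipeline("text-generation", model="shibing624/code-autocomplete-distilgpt2-python")
result = generator("import torch.nn as", max_new_tokens=16, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])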

Model files:

code-autocomplete-distilgpt2-python
├── config.json
├── merges.txt
├── pytorch_model.bin
├── special_tokens_map.json
├── tokenizer_config.json
└── vocab.json
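
These are the standard GPT2 checkpoint files, so a local copy of this directory can be passed to from_pretrained instead of the Hub model id. A minimal sketch (the local path is hypothetical):

from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Hypothetical local path holding the files listed above.
local_dir = "./code-autocomplete-distilgpt2-python"
tokenizer = GPT2Tokenizer.from_pretrained(local_dir)
model = GPT2LMHeadModel.from_pretrained(local_dir)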

Training data

Source code of the pytorch_awesome projects.

Download code-autocomplete, then run:

cd autocomplete
python create_dataset.py
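
create_dataset.py builds the training corpus from the collected repositories. As a rough sketch of the general idea, assuming the corpus is simply the concatenated .py files (SOURCE_DIR and train.txt are hypothetical names; the script in the repo is authoritative):

from pathlib import Path

# Hypothetical sketch: concatenate every .py file under SOURCE_DIR into a
# single plain-text training file.
SOURCE_DIR = Path("pytorch_awesome")  # assumed location of the cloned repos
with open("train.txt", "w", encoding="utf-8") as out:
    for py_file in SOURCE_DIR.rglob("*.py"):
        out.write(py_file.read_text(encoding="utf-8", errors="ignore"))
        out.write("\n")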

If you want to train the code-autocomplete GPT2 model yourself, refer to https://github.com/shibing624/code-autocomplete/blob/main/autocomplete/gpt2_coder.py
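
For orientation, fine-tuning distilgpt2 with a causal language modeling objective follows the standard transformers recipe. A minimal sketch, assuming a plain-text corpus train.txt; the hyperparameters are illustrative, not the repo's actual settings:

from transformers import (GPT2LMHeadModel, GPT2Tokenizer, Trainer,
                          TrainingArguments, TextDataset,
                          DataCollatorForLanguageModeling)

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = GPT2LMHeadModel.from_pretrained("distilgpt2")

# Causal LM objective: mlm=False makes the collator use the inputs,
# shifted by one position, as labels.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
train_dataset = TextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=128)

training_args = TrainingArguments(
    output_dir="outputs",            # illustrative values
    num_train_epochs=3,
    per_device_train_batch_size=8,
)
trainer = Trainer(model=model, args=training_args,
                  data_collator=data_collator, train_dataset=train_dataset)
trainer.train()
trainer.save_model("outputs")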

About GPT2

Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large

Pretrained model on the English language using a causal language modeling (CLM) objective. It was introduced in this paper and first released at this page.

Disclaimer: the team releasing GPT-2 also wrote a model card for their model. The content of this model card was written by the Hugging Face team to complete the information they provided and to give specific examples of bias.

Citation

@misc{code-autocomplete,
  author = {Xu Ming},
  title = {code-autocomplete: Code AutoComplete with GPT model},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  url = {https://github.com/shibing624/code-autocomplete},
}