GPT2 Code AutoComplete Model

code-autocomplete is a code completion plugin for Python.

code-autocomplete can automatically complete code at the line and block level with GPT2.

Usage

Open-source repository: code-autocomplete, which supports the GPT2 model. Usage:

from autocomplete.gpt2_coder import GPT2Coder

# Download the model from the Hugging Face Hub and complete a prompt
m = GPT2Coder("shibing624/code-autocomplete-gpt2-base")
print(m.generate('import torch.nn as')[0])
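
If the package is published on PyPI under its repository name (an assumption here; it can otherwise be installed from the GitHub source), installation is one command:

pip install code-autocomplete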

Alternatively, the model can be used directly with huggingface/transformers:

Please use the 'GPT2'-related functions to load this model!

import os
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Work around "duplicate OpenMP runtime" crashes that occur on some setups
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = GPT2Tokenizer.from_pretrained("shibing624/code-autocomplete-gpt2-base")
model = GPT2LMHeadModel.from_pretrained("shibing624/code-autocomplete-gpt2-base")
model.to(device)
prompts = [
    """from torch import nn
    class LSTM(Module):
        def __init__(self, *,
                     n_tokens: int,
                     embedding_size: int,
                     hidden_size: int,
                     n_layers: int):""",
    """import numpy as np
    import torch
    import torch.nn as""",
    "import java.util.ArrayList",
    "def factorial(n):",
]
for prompt in prompts:
    input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors='pt').to(device)
    outputs = model.generate(input_ids=input_ids,
                             # len(prompt) counts characters, so this is a loose token budget
                             max_length=64 + len(prompt),
                             temperature=1.0,
                             top_k=50,
                             top_p=0.95,
                             repetition_penalty=1.0,
                             do_sample=True,  # sample rather than decode greedily
                             num_return_sequences=1,
                             length_penalty=2.0,
                             early_stopping=True)
    # The output contains the prompt tokens followed by the completion
    decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(decoded)
    print("=" * 20)

Output:

from torch import nn
    class LSTM(Module):
        def __init__(self, *,
                     n_tokens: int,
                     embedding_size: int,
                     hidden_size: int,
                     n_layers: int):
            self.embedding_size = embedding_size
====================
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
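
Because generate returns the prompt tokens followed by the new tokens, the decoded string echoes the prompt. To show only the suggested completion, one option (a sketch, not part of the library) is to slice off the prompt tokens before decoding:

# Keep only the newly generated tokens, then decode just the completion
completion_ids = outputs[0][input_ids.shape[1]:]
completion = tokenizer.decode(completion_ids, skip_special_tokens=True)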

Model files:

code-autocomplete-gpt2-base
├── config.json
├── merges.txt
├── pytorch_model.bin
├── special_tokens_map.json
├── tokenizer_config.json
└── vocab.json
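
Since from_pretrained also accepts a local directory, the same files can be loaded offline once downloaded (the path below is illustrative):

# Load tokenizer and model from a local copy of the files above
tokenizer = GPT2Tokenizer.from_pretrained("./code-autocomplete-gpt2-base")
model = GPT2LMHeadModel.from_pretrained("./code-autocomplete-gpt2-base")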

Training data

Source code of the pytorch_awesome projects.

Download code-autocomplete, then build the dataset:

cd autocomplete
python create_dataset.py
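
For orientation only, a script of this kind typically walks a tree of .py files and concatenates them into one training text file. The sketch below is a hypothetical stand-in, not the actual create_dataset.py (the pytorch_awesome path is illustrative):

import glob

# Hypothetical: gather Python sources into a single train.txt corpus
with open("train.txt", "w", encoding="utf-8") as out:
    for path in glob.glob("pytorch_awesome/**/*.py", recursive=True):
        with open(path, encoding="utf-8", errors="ignore") as f:
            out.write(f.read() + "\n")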

If you want to train a code-autocomplete GPT2 model yourself, refer to https://github.com/shibing624/code-autocomplete/blob/main/autocomplete/gpt2_coder.py
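
For a rough idea of what such training involves, here is a minimal causal-LM fine-tuning sketch using the transformers Trainer API. This is an assumption-laden illustration (file names and hyperparameters are made up); the repository's gpt2_coder.py is the authoritative script:

from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# train.txt: concatenated source files, e.g. produced by a dataset script
train_dataset = TextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=128)
# mlm=False selects the causal (next-token) objective rather than masked LM
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="./code-autocomplete-gpt2-base",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    save_steps=1000,
)
Trainer(model=model, args=training_args, data_collator=data_collator,
        train_dataset=train_dataset).train()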

About GPT2

Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large

GPT2 is a model pretrained on English text with a causal language modeling (CLM) objective. It was introduced in this paper and first released at this page.
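
Concretely, the CLM objective trains the model to predict each token from the tokens to its left. In transformers this amounts to passing the inputs as their own labels, which the library shifts internally (a minimal illustration):

# Next-token prediction: loss is cross-entropy of each token given its prefix
outputs = model(input_ids, labels=input_ids)
loss = outputs.loss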

Disclaimer: the team releasing GPT-2 also wrote a model card for their model. The content of this model card was written by the Hugging Face team to complete the information they provided and to give specific examples of bias.

Citation

@misc{code-autocomplete,
  author = {Xu Ming},
  title = {code-autocomplete: Code AutoComplete with GPT model},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  url = {https://github.com/shibing624/code-autocomplete},
}