Model:
h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-3b
This model was trained using H2O LLM Studio.
To use the model with the transformers library on a machine with GPUs, first make sure you have the transformers, accelerate, and torch libraries installed.
pip install transformers==4.30.2
pip install accelerate==0.20.3
pip install torch==2.0.0
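As a quick sanity check (a minimal sketch, not part of the original instructions), you can confirm the installed versions and that a CUDA device is visible, since the examples below place the model on cuda:0:

import torch
import transformers
import accelerate

# Confirm the pinned versions from above are what is actually installed.
print("transformers:", transformers.__version__)
print("accelerate:", accelerate.__version__)
print("torch:", torch.__version__)

# The examples below use device_map={"": "cuda:0"}, so a GPU must be visible.
print("CUDA available:", torch.cuda.is_available())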
import torch
from transformers import pipeline
generate_text = pipeline(
    model="h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-3b",
    torch_dtype="auto",
    trust_remote_code=True,
    use_fast=False,
    device_map={"": "cuda:0"},
)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=1024,
    do_sample=False,
    num_beams=1,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True,
)
print(res[0]["generated_text"])
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
<|prompt|>Why is drinking water so healthy?</s><|answer|>
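For reference, the same prompt string can be assembled by hand. The helper below is only an illustrative sketch (build_prompt is our name, not part of the model's code) that reproduces the <|prompt|>...</s><|answer|> template shown above:

def build_prompt(question: str) -> str:
    # Mirrors the template printed by generate_text.preprocess(...)
    return f"<|prompt|>{question}</s><|answer|>"

print(build_prompt("Why is drinking water so healthy?"))
# <|prompt|>Why is drinking water so healthy?</s><|answer|>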
Alternatively, you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the transformers package, this allows you to set trust_remote_code=False.
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-3b",
    use_fast=False,
    padding_side="left",
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-3b",
    torch_dtype="auto",
    device_map={"": "cuda:0"},
    trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=1024,
    do_sample=False,
    num_beams=1,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True,
)
print(res[0]["generated_text"])
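If GPU memory is limited, a hedged variant of the model loading above can use 8-bit quantization. This assumes the bitsandbytes package is installed; it is not used anywhere else in this card:

from transformers import AutoModelForCausalLM

# Assumption: bitsandbytes is installed (pip install bitsandbytes).
model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-3b",
    load_in_8bit=True,   # quantize weights to 8-bit to reduce GPU memory
    device_map="auto",   # let accelerate place layers across available devices
    trust_remote_code=True,
)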
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-3b" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    use_fast=False,
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map={"": "cuda:0"},
    trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
    **inputs,
    min_new_tokens=2,
    max_new_tokens=1024,
    do_sample=False,
    num_beams=1,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True,
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
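If you want the answer to appear token by token instead of all at once, a hedged variant of the generate call above can use transformers' TextStreamer (an optional addition, not part of the original example; it reuses the model, tokenizer, and inputs defined above):

from transformers import TextStreamer

# Stream the decoded tokens to stdout as they are generated.
# skip_prompt=True drops the echoed prompt; skip_special_tokens is forwarded to tokenizer.decode.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    **inputs,
    max_new_tokens=1024,
    do_sample=False,
    repetition_penalty=float(1.2),
    streamer=streamer,
)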
Model architecture:

LlamaForCausalLM(
  (model): LlamaModel(
    (embed_tokens): Embedding(32000, 3200, padding_idx=0)
    (layers): ModuleList(
      (0-25): 26 x LlamaDecoderLayer(
        (self_attn): LlamaAttention(
          (q_proj): Linear(in_features=3200, out_features=3200, bias=False)
          (k_proj): Linear(in_features=3200, out_features=3200, bias=False)
          (v_proj): Linear(in_features=3200, out_features=3200, bias=False)
          (o_proj): Linear(in_features=3200, out_features=3200, bias=False)
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): Linear(in_features=3200, out_features=8640, bias=False)
          (down_proj): Linear(in_features=8640, out_features=3200, bias=False)
          (up_proj): Linear(in_features=3200, out_features=8640, bias=False)
          (act_fn): SiLUActivation()
        )
        (input_layernorm): LlamaRMSNorm()
        (post_attention_layernorm): LlamaRMSNorm()
      )
    )
    (norm): LlamaRMSNorm()
  )
  (lm_head): Linear(in_features=3200, out_features=32000, bias=False)
)
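Given the shapes printed above (26 decoder layers, hidden size 3200, MLP size 8640, vocabulary 32000), the total parameter count works out to roughly 3.4B. With a loaded model it can be checked directly (a minimal sketch, reusing the model object from the examples above):

# Sum the sizes of all weight tensors; with the shapes above this is ~3.4B parameters.
total_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total_params:,}")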
This model was trained using H2O LLM Studio and with the configuration in cfg.yaml. Visit H2O LLM Studio to learn how to train your own large language models.
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.