Model: h2oai/h2ogpt-gm-oasst1-multilang-1024-20b
This model was trained with H2O LLM Studio.
To use the model with the transformers library on a machine with GPUs, first make sure the transformers and torch libraries are installed.
pip install transformers==4.28.1
pip install torch==2.0.0
import torch
from transformers import pipeline

# trust_remote_code is required because the repository ships a custom
# pipeline implementation (h2oai_pipeline.py); see the alternative below
# if you prefer not to enable it
generate_text = pipeline(
    model="h2oai/h2ogpt-gm-oasst1-multilang-1024-20b",
    torch_dtype=torch.float16,
    trust_remote_code=True,
    device_map={"": "cuda:0"},
)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=2,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
)
print(res[0]["generated_text"])
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
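When building inputs by hand, the same template can be reproduced with plain string formatting; a minimal sketch (the question variable here is illustrative, not part of the model's API):

question = "Why is drinking water so healthy?"
# mirrors the <|prompt|>...<|endoftext|><|answer|> template shown above
prompt = f"<|prompt|>{question}<|endoftext|><|answer|>"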
Alternatively, if you prefer not to use trust_remote_code=True, you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-multilang-1024-20b",
    padding_side="left",
)
model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-multilang-1024-20b",
    torch_dtype=torch.float16,
    device_map={"": "cuda:0"},
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=2,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
)
print(res[0]["generated_text"])
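If the 20B model does not fit into GPU memory in float16 (roughly 40 GB of weights), it can usually be loaded in 8-bit instead. A minimal sketch, assuming the bitsandbytes and accelerate packages are installed; load_in_8bit is a general transformers option, not something this model card prescribes:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-multilang-1024-20b",
    padding_side="left",
)
# load_in_8bit quantizes the linear layers via bitsandbytes,
# roughly halving memory use compared to float16
model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-multilang-1024-20b",
    load_in_8bit=True,
    device_map="auto",
)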
You can also construct the pipeline from the loaded model and tokenizer yourself, taking the preprocessing steps into account:
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "h2oai/h2ogpt-gm-oasst1-multilang-1024-20b"  # either local folder or huggingface model name

# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")

# generate configuration can be modified to your needs
tokens = model.generate(
    **inputs,
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=2,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
)[0]

# drop the prompt tokens so only the newly generated answer is decoded
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
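To print tokens as they are generated instead of waiting for the full answer, transformers (from around version 4.28 onward) ships a TextStreamer helper that generate() accepts; a minimal sketch reusing the model, tokenizer, and inputs from above (note that streaming works with greedy decoding, not beam search):

from transformers import TextStreamer

# decodes and prints tokens to stdout as they are produced,
# skipping the prompt portion of the sequence
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,
    streamer=streamer,
)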
Model architecture:

GPTNeoXForCausalLM(
  (gpt_neox): GPTNeoXModel(
    (embed_in): Embedding(50432, 6144)
    (layers): ModuleList(
      (0-43): 44 x GPTNeoXLayer(
        (input_layernorm): LayerNorm((6144,), eps=1e-05, elementwise_affine=True)
        (post_attention_layernorm): LayerNorm((6144,), eps=1e-05, elementwise_affine=True)
        (attention): GPTNeoXAttention(
          (rotary_emb): RotaryEmbedding()
          (query_key_value): Linear(in_features=6144, out_features=18432, bias=True)
          (dense): Linear(in_features=6144, out_features=6144, bias=True)
        )
        (mlp): GPTNeoXMLP(
          (dense_h_to_4h): Linear(in_features=6144, out_features=24576, bias=True)
          (dense_4h_to_h): Linear(in_features=24576, out_features=6144, bias=True)
          (act): FastGELUActivation()
        )
      )
    )
    (final_layer_norm): LayerNorm((6144,), eps=1e-05, elementwise_affine=True)
  )
  (embed_out): Linear(in_features=6144, out_features=50432, bias=False)
)
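The dump above is just the string representation of the loaded model. A minimal sketch to reproduce it without downloading the trained weights, by instantiating the architecture from its config alone:

from transformers import AutoConfig, AutoModelForCausalLM

# from_config builds a randomly initialized model of the same shape,
# so no large weight download is needed just to inspect the layers
config = AutoConfig.from_pretrained("h2oai/h2ogpt-gm-oasst1-multilang-1024-20b")
model = AutoModelForCausalLM.from_config(config)
print(model)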
The model was trained with H2O LLM Studio using the configuration in cfg.yaml. Visit H2O LLM Studio to learn how to train your own large language models.
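Since cfg.yaml lives in the model repository, it can be fetched and inspected programmatically. A minimal sketch, assuming the huggingface_hub and pyyaml packages are installed:

import yaml
from huggingface_hub import hf_hub_download

# download cfg.yaml from the model repo and parse it
path = hf_hub_download(
    repo_id="h2oai/h2ogpt-gm-oasst1-multilang-1024-20b",
    filename="cfg.yaml",
)
with open(path) as f:
    cfg = yaml.safe_load(f)
print(cfg)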
Model validation results using the EleutherAI lm-evaluation-harness:
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=h2oai/h2ogpt-gm-oasst1-multilang-1024-20b --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
| Task          | Version | Metric   | Value  |   | Stderr |
|---------------|---------|----------|--------|---|--------|
| arc_challenge | 0       | acc      | 0.3447 | ± | 0.0139 |
|               |         | acc_norm | 0.3823 | ± | 0.0142 |
| arc_easy      | 0       | acc      | 0.6423 | ± | 0.0098 |
|               |         | acc_norm | 0.5913 | ± | 0.0101 |
| boolq         | 1       | acc      | 0.6517 | ± | 0.0083 |
| hellaswag     | 0       | acc      | 0.5374 | ± | 0.0050 |
|               |         | acc_norm | 0.7185 | ± | 0.0045 |
| openbookqa    | 0       | acc      | 0.2920 | ± | 0.0204 |
|               |         | acc_norm | 0.4100 | ± | 0.0220 |
| piqa          | 0       | acc      | 0.7655 | ± | 0.0099 |
|               |         | acc_norm | 0.7753 | ± | 0.0097 |
| winogrande    | 0       | acc      | 0.6677 | ± | 0.0132 |
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you disagree with any part of it, you should refrain from using the model and any content it generates.