Model:
h2oai/h2ogpt-gm-oasst1-en-1024-20b

This model was trained using H2O LLM Studio.
To use this model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `torch` libraries installed.
```bash
pip install transformers==4.28.1
pip install torch==2.0.0
```
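Before loading a 20B-parameter model, it can be worth confirming that a CUDA device is actually visible. A minimal sanity-check sketch (the ~40 GB figure is a rough estimate for float16 weights, not a stated requirement of this model card):

```python
import torch

# Check that a CUDA device is available before attempting to load the model.
assert torch.cuda.is_available(), "This model card assumes a GPU (device_map uses cuda:0)."

# Rough estimate: 20B parameters in float16 need on the order of 40 GB for weights alone.
total_vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
print(f"GPU: {torch.cuda.get_device_name(0)}, VRAM: {total_vram_gb:.1f} GB")
```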
```python
import torch
from transformers import pipeline

generate_text = pipeline(
    model="h2oai/h2ogpt-gm-oasst1-en-1024-20b",
    torch_dtype=torch.float16,
    trust_remote_code=True,
    device_map={"": "cuda:0"},
)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=2,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
)
print(res[0]["generated_text"])
```
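The call above uses deterministic beam search. If you prefer sampled outputs, the standard `generate` sampling arguments pass through the pipeline as well; a sketch, where the `temperature` and `top_p` values are illustrative rather than tuned for this model:

```python
# Sampling instead of beam search; temperature / top_p values are illustrative.
res = generate_text(
    "Why is drinking water so healthy?",
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.2,
)
print(res[0]["generated_text"])
```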
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```

```
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
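If you construct prompts by hand (as in the last example further below), the same template can be captured in a small helper; `format_prompt` is a hypothetical name, not part of the model repository:

```python
def format_prompt(question: str) -> str:
    # Hypothetical helper: wraps a raw question in the exact template the
    # model was trained with (see the preprocessed prompt printed above).
    return f"<|prompt|>{question}<|endoftext|><|answer|>"

print(format_prompt("Why is drinking water so healthy?"))
# <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```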
Alternatively, if you prefer not to use `trust_remote_code=True`, you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-en-1024-20b",
    padding_side="left",
)
model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-en-1024-20b",
    torch_dtype=torch.float16,
    device_map={"": "cuda:0"},
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=2,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
)
print(res[0]["generated_text"])
```
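One way to fetch h2oai_pipeline.py is programmatically via `huggingface_hub` (a sketch; downloading the file manually from the model repository works just as well):

```python
from huggingface_hub import hf_hub_download

# Downloads h2oai_pipeline.py from the model repo into the local HF cache
# and returns its path; copy it next to your notebook so it can be imported.
path = hf_hub_download(
    repo_id="h2oai/h2ogpt-gm-oasst1-en-1024-20b",
    filename="h2oai_pipeline.py",
)
print(path)
```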
You can also construct the pipeline yourself from the loaded model and tokenizer, taking the preprocessing step into account:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "h2oai/h2ogpt-gm-oasst1-en-1024-20b"  # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.cuda().eval()

inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")

# generate configuration can be modified to your needs
tokens = model.generate(
    **inputs,
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=2,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
)[0]

# Strip the prompt tokens so only the newly generated answer is decoded.
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
Model architecture:

```
GPTNeoXForCausalLM(
  (gpt_neox): GPTNeoXModel(
    (embed_in): Embedding(50432, 6144)
    (layers): ModuleList(
      (0-43): 44 x GPTNeoXLayer(
        (input_layernorm): LayerNorm((6144,), eps=1e-05, elementwise_affine=True)
        (post_attention_layernorm): LayerNorm((6144,), eps=1e-05, elementwise_affine=True)
        (attention): GPTNeoXAttention(
          (rotary_emb): RotaryEmbedding()
          (query_key_value): Linear(in_features=6144, out_features=18432, bias=True)
          (dense): Linear(in_features=6144, out_features=6144, bias=True)
        )
        (mlp): GPTNeoXMLP(
          (dense_h_to_4h): Linear(in_features=6144, out_features=24576, bias=True)
          (dense_4h_to_h): Linear(in_features=24576, out_features=6144, bias=True)
          (act): FastGELUActivation()
        )
      )
    )
    (final_layer_norm): LayerNorm((6144,), eps=1e-05, elementwise_affine=True)
  )
  (embed_out): Linear(in_features=6144, out_features=50432, bias=False)
)
```
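To reproduce this module tree or check the parameter count yourself, a short sketch (note that this loads the full weights, which requires substantial memory):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("h2oai/h2ogpt-gm-oasst1-en-1024-20b")
print(model)  # prints the module tree shown above

# Total parameter count; for this architecture it is on the order of 20B.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")
```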
This model was trained using H2O LLM Studio with the configuration in cfg.yaml. Visit H2O LLM Studio to learn how to train your own large language models.
Model validation results obtained with the EleutherAI lm-evaluation-harness:
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=h2oai/h2ogpt-gm-oasst1-en-1024-20b --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
| Task          | Version | Metric   | Value  |   | Stderr |
|---------------|--------:|----------|-------:|---|-------:|
| arc_challenge |       0 | acc      | 0.3490 | ± | 0.0139 |
|               |         | acc_norm | 0.3737 | ± | 0.0141 |
| arc_easy      |       0 | acc      | 0.6271 | ± | 0.0099 |
|               |         | acc_norm | 0.5951 | ± | 0.0101 |
| boolq         |       1 | acc      | 0.6440 | ± | 0.0084 |
| hellaswag     |       0 | acc      | 0.5366 | ± | 0.0050 |
|               |         | acc_norm | 0.7173 | ± | 0.0045 |
| openbookqa    |       0 | acc      | 0.2920 | ± | 0.0204 |
|               |         | acc_norm | 0.4160 | ± | 0.0221 |
| piqa          |       0 | acc      | 0.7546 | ± | 0.0100 |
|               |         | acc_norm | 0.7650 | ± | 0.0099 |
| winogrande    |       0 | acc      | 0.6527 | ± | 0.0134 |
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.

By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content it generates.