Model: h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b
This model was trained using H2O LLM Studio.
To use the model with the transformers library on a machine with GPUs, first make sure you have the transformers, accelerate, torch, and einops libraries installed.
```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.0
pip install einops==0.6.1
```
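If you want to confirm that the environment matches these pins before loading the model, a minimal sanity-check sketch (assuming the installs above succeeded) is:

```python
# Minimal sanity check of the pinned versions above; not part of the model card itself.
import accelerate
import einops
import torch
import transformers

for name, mod in [("transformers", transformers), ("accelerate", accelerate),
                  ("torch", torch), ("einops", einops)]:
    print(f"{name}: {mod.__version__}")

# A CUDA-capable GPU is required for the float16 examples below.
assert torch.cuda.is_available(), "No CUDA device found"
```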
```python
import torch
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b",
    use_fast=False,
    padding_side="left",
    trust_remote_code=True,
)
generate_text = pipeline(
    model="h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b",
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    trust_remote_code=True,
    use_fast=False,
    device_map={"": "cuda:0"},
)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=1024,
    do_sample=False,
    num_beams=1,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True
)
print(res[0]["generated_text"])
```
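The example above decodes greedily (do_sample=False). The pipeline also passes through the standard transformers sampling parameters; a hedged variation is shown below, where the top_k/top_p/temperature values are illustrative choices, not settings recommended by the model card:

```python
# Sampling variant of the call above; the sampling values here are illustrative,
# not recommendations from the model card.
res = generate_text(
    "Why is drinking water so healthy?",
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=40,
    top_p=0.9,
    repetition_penalty=1.2,
    renormalize_logits=True,
)
print(res[0]["generated_text"])
```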
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
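If you want to double-check that the special tokens survive tokenization, a small round-trip sketch (assuming the `tokenizer` loaded in the first example is still in scope):

```python
# Round-trip the formatted prompt through the tokenizer to confirm the special
# tokens (<|prompt|>, <|endoftext|>, <|answer|>) are encoded as intended.
prompt = "<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>"
ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
print(ids)
print(tokenizer.decode(ids))  # should print the prompt unchanged
```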
Alternatively, you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b",
    use_fast=False,
    padding_side="left",
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b",
    torch_dtype=torch.float16,
    device_map={"": "cuda:0"},
    trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=1024,
    do_sample=False,
    num_beams=1,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can also construct the pipeline yourself from the loaded model and tokenizer, taking the preprocessing steps into account:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b"  # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"

tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    use_fast=False,
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map={"": "cuda:0"},
    trust_remote_code=True,
)
model.cuda().eval()

inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")

# generate configuration can be modified to your needs
tokens = model.generate(
    **inputs,
    min_new_tokens=2,
    max_new_tokens=1024,
    do_sample=False,
    num_beams=1,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True
)[0]

tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
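For interactive use you may want tokens printed as they are generated rather than decoded at the end. One way to do that with this setup is transformers' TextStreamer, which is available in the pinned transformers version; a minimal sketch reusing the `model`, `tokenizer`, and `prompt` from the example above:

```python
# Stream tokens to stdout as they are generated, reusing `model`, `tokenizer`,
# and `prompt` from the previous example. TextStreamer is part of transformers.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
_ = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,
    streamer=streamer,
)
```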
Model architecture:

```
RWForCausalLM(
  (transformer): RWModel(
    (word_embeddings): Embedding(65024, 4544)
    (h): ModuleList(
      (0-31): 32 x DecoderLayer(
        (input_layernorm): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
        (self_attention): Attention(
          (maybe_rotary): RotaryEmbedding()
          (query_key_value): Linear(in_features=4544, out_features=4672, bias=False)
          (dense): Linear(in_features=4544, out_features=4544, bias=False)
          (attention_dropout): Dropout(p=0.0, inplace=False)
        )
        (mlp): MLP(
          (dense_h_to_4h): Linear(in_features=4544, out_features=18176, bias=False)
          (act): GELU(approximate='none')
          (dense_4h_to_h): Linear(in_features=18176, out_features=4544, bias=False)
        )
      )
    )
    (ln_f): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
  )
  (lm_head): Linear(in_features=4544, out_features=65024, bias=False)
)
```
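The printed module tree reflects the Falcon-7B geometry (32 decoder layers, hidden size 4544, vocabulary size 65024). To verify the total parameter count against it, a quick sketch assuming the `model` object loaded in the previous example:

```python
# Count parameters of the loaded model; the total should be roughly 7B,
# consistent with the module shapes printed above.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")
```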
The model was trained using H2O LLM Studio with the configuration in cfg.yaml. Visit H2O LLM Studio to learn how to train your own large language models.
Model validation results using the EleutherAI lm-evaluation-harness:
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.