
VMware/open-llama-0.3T-7B-open-instruct-v1.1

UPDATE: The final version is now available!

Please use the final version: Open LLaMA 7B Open Instruct

License

Nomenclature

  • Model: Open-llama
  • Model trained on: 300B or 0.3T tokens
  • Model size: 7B parameters
  • Dataset: Open-instruct-v1.1 (oasst, dolly, hhrlhf)
  • Version: V1 (Alpaca prompt template; see the sketch after this list)
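
For reference, the Alpaca prompt template used by this version wraps each instruction as shown below; the same template string appears in the code example further down. The instruction "Summarize this paragraph" is purely illustrative.

prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

# With an illustrative instruction such as "Summarize this paragraph", the model receives:
#
# Below is an instruction that describes a task. Write a response that appropriately completes the request.
#
# ### Instruction:
# Summarize this paragraph
#
# ### Response: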

Use in Transformers

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'VMware/open-llama-0.3T-7B-open-instruct-v1.1'

# Load the tokenizer and the model (fp16 weights, placed sequentially across available GPUs)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')

# Alpaca-style prompt template used during instruction tuning
prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

prompt = 'Explain in simple terms how the attention mechanism of a transformer model works'

# Wrap the instruction in the prompt template and tokenize it
input_text = prompt_template.format(instruction=prompt)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

# Generate up to 512 tokens, then strip the prompt tokens from the output
output1 = model.generate(input_ids, max_length=512)
input_length = input_ids.shape[1]
output1 = output1[:, input_length:]
output = tokenizer.decode(output1[0])

print(output)

'''
The attention mechanism of a transformer model is designed to help the model understand the relationship between different parts of a sentence.
The model uses a weighted attention score to determine how much each input token contributes to the output.
The attention score is calculated by looking at the similarity between each input token and the output token, and assigning a weight to each input token based on this similarity.
This way, the model can better understand the relationship between different parts of a sentence and generate more accurate predictions.

'''
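
The example above uses greedy decoding. As a variant (a minimal sketch, not an officially recommended configuration), the same model and tokenizer can be wrapped in the transformers text-generation pipeline with sampling enabled; the temperature and top_p values below are illustrative assumptions.

from transformers import pipeline

# Build a text-generation pipeline around the already-loaded model and tokenizer
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

result = generator(
    prompt_template.format(instruction=prompt),
    max_length=512,
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.7,         # illustrative value, not a tuned recommendation
    top_p=0.9,               # illustrative value, not a tuned recommendation
    return_full_text=False,  # return only the generated continuation, not the prompt
)

print(result[0]["generated_text"])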

Drawbacks

  • The model was trained on a partially trained Open-LLaMA checkpoint (300B tokens, i.e. about 30% of the full training run); a fully trained Open-LLaMA checkpoint has significant potential for improvement
  • From what we have observed, the model struggles with few-shot prompting (we plan to improve this in future iterations)
  • When asked for code, it may or may not wrap the code in Markdown formatting (```)
  • It does not indent Python code

Evaluation

TODO