Model:
togethercomputer/RedPajama-INCITE-7B-Chat
RedPajama-INCITE-7B-Chat was developed by Together and leaders from the open-source AI community, including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, the Stanford Center for Research on Foundation Models (CRFM), the Stanford Hazy Research group, and LAION.
It was fine-tuned on OASST1 and Dolly2 to enhance its chat abilities.
Please note that the model requires transformers version >= 4.25.1.
To prompt the chat model, use the following format:
<human>: [Instruction]
<bot>:
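In practice you will build this prompt string in code. The helper below is a minimal sketch; the function name format_prompt is ours for illustration, not part of the model's tooling:

# Sketch: wrap a user instruction in the <human>/<bot> chat format expected by the model.
# The helper name `format_prompt` is illustrative only.
def format_prompt(instruction: str) -> str:
    return f"<human>: {instruction}\n<bot>:"

prompt = format_prompt("Who is Alan Turing?")
# -> "<human>: Who is Alan Turing?\n<bot>:"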
This requires a GPU with 16GB memory.
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'

# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat", torch_dtype=torch.float16)
model = model.to('cuda:0')

# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, mathematician, and theoretical biologist.
"""
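Because the model is tuned on multi-turn <human>/<bot> dialogue, a sampled continuation can occasionally run on into a new <human>: turn. A small post-processing step like the sketch below (our suggestion, not part of the official example) keeps only the bot's reply:

# Optional post-processing (our suggestion, not from the model card):
# cut the decoded text at the next "<human>:" marker so only the bot's reply remains.
reply = output_str.split("<human>:")[0].strip()
print(reply)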
This requires a GPU with 12GB memory.
To run inference with int8, please ensure that you have installed accelerate and bitsandbytes. You can install them with the following commands:
pip install accelerate
pip install bitsandbytes
Then you can run int8 inference as follows:
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'

# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)

# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, and theoretical biologist.
"""
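Note that newer transformers releases fold the bare load_in_8bit argument used above into a quantization config object. If your installed version warns about it, the variant below is a sketch of the equivalent call, assuming a transformers release that ships BitsAndBytesConfig and has bitsandbytes installed:

# Sketch: int8 loading via BitsAndBytesConfig instead of the bare load_in_8bit flag
# (for newer transformers releases; intended to behave like the call above).
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/RedPajama-INCITE-7B-Chat",
    device_map='auto',
    quantization_config=quantization_config,
)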
To run inference on CPU:
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'

# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat", torch_dtype=torch.bfloat16)

# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Mathison Turing, OBE, FRS, (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, and theoretical biologist.
"""
Please note that since LayerNormKernelImpl is not implemented in fp16 for CPU, we use bfloat16 for CPU inference.
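To see concretely why bfloat16 is used, the short sketch below (ours, not from the model card) runs a LayerNorm on CPU in both dtypes; on PyTorch builds without fp16 CPU kernels the fp16 call raises the LayerNormKernelImpl error, while the bfloat16 call succeeds:

import torch

x = torch.randn(2, 8)
layer_norm = torch.nn.LayerNorm(8)

# bfloat16 LayerNorm is implemented on CPU
print(layer_norm.to(torch.bfloat16)(x.to(torch.bfloat16)).dtype)  # torch.bfloat16

# fp16 LayerNorm may raise: "LayerNormKernelImpl" not implemented for 'Half'
# (illustration only; whether it raises depends on the PyTorch version)
try:
    layer_norm.to(torch.float16)(x.to(torch.float16))
except RuntimeError as err:
    print(err)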
Excluded uses are described below.
It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.
Out-of-Scope Use: RedPajama-INCITE-7B-Chat is a language model and may not perform well for use cases outside of its intended scope. For example, it may not be suitable for safety-critical applications or for making decisions that have a significant impact on individuals or society. It is important to consider the limitations of the model and to use it only for its intended purpose.
Misuse and Malicious Use: RedPajama-INCITE-7B-Chat is designed for language modeling. Using the model to engage in illegal or unethical activities is strictly prohibited and goes against the principles of the project.
Using the model to harm individuals is a misuse of this model. This includes, but is not limited to:
RedPajama-INCITE-7B-Chat, like other language models, has limitations that should be taken into consideration. For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside its training data. We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
Training Data
Please refer to togethercomputer/RedPajama-Data-1T.
Training Procedure
Join us via the Together Discord link.