Model: TheBloke/WizardLM-13B-V1.1-GPTQ
Chat & support: my new Discord server
Want to contribute? TheBloke's Patreon page
These files are GPTQ model files for WizardLM's WizardLM 13B V1.1.
Multiple GPTQ parameter permutations are provided; for details of the options, their parameters, and the software used to create them, please see the Provided Files section below.
These models were quantised using hardware kindly provided by Latitude.sh.
Prompt template:

A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
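In code, the template is simply filled in with an f-string. A minimal sketch (the `make_prompt` helper is illustrative, not part of this repository):

```python
# Minimal sketch: fill the chat template with a user question.
# make_prompt is an illustrative helper, not part of the model card.
def make_prompt(prompt: str) -> str:
    return (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the "
        f"user's questions. USER: {prompt} ASSISTANT:"
    )

print(make_prompt("Tell me about AI"))
```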
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
---|---|---|---|---|---|---|---|
main | 4 | 128 | False | 7.45 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
gptq-8bit--1g-actorder_True | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
gptq-8bit-128g-actorder_False | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
To download a specific quantisation, clone its branch with git:

```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GPTQ
```
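Alternatively, a branch can be fetched from Python with huggingface_hub. A minimal sketch (the `local_dir` path is an arbitrary choice, not mandated by the repository):

```python
# Minimal sketch: download one quantisation branch via huggingface_hub.
# local_dir is an arbitrary destination path chosen for this example.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/WizardLM-13B-V1.1-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # branch name from the table above
    local_dir="WizardLM-13B-V1.1-GPTQ",
)
```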
Please make sure you're using the latest version of text-generation-webui.
It is strongly recommended to use the text-generation-webui one-click installers unless you know how to make a manual install.
First make sure you have AutoGPTQ installed:

```
GITHUB_ACTIONS=true pip install auto-gptq
```

Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/WizardLM-13B-V1.1-GPTQ"
model_basename = "wizardlm-13b-v1.1-GPTQ-4bit-128g.no-act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

"""
To download from a specific branch, use the revision parameter, as in this example:

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        revision="gptq-4bit-32g-actorder_True",
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device="cuda:0",
        quantize_config=None)
"""

prompt = "Tell me about AI"
prompt_template = f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```
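As written, `tokenizer.decode(output[0])` returns the prompt followed by the completion. If only the assistant's reply is wanted, the prompt tokens can be sliced off first. A minimal sketch continuing from the variables above:

```python
# Minimal sketch: decode only the newly generated tokens,
# skipping the prompt that generate() echoes back in its output.
new_tokens = output[0][input_ids.shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```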
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For further support, and discussions on these models and AI in general, join us on the Discord server linked above.
Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
Special thanks to: Luke from CarbonQuill, Aemon Algiz.
Patreon special mentions: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex, Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost, Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius, Imad Khwaja, Pierre Kircher, terasurfer, Asp the Wyvern, John Villwock, theTransient, zynix, Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
This is the full-weight version of the WizardLM-13B V1.1 model.
Repository: https://github.com/nlpxucan/WizardLM
Twitter: https://twitter.com/WizardLM_AI/status/1677282955490918401