Model: TheBloke/guanaco-33B-GPTQ
Chat & support: my new Discord server
Want to contribute? TheBloke's Patreon page
These files are GPTQ model files for Tim Dettmers' Guanaco 33B.

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options, their parameters, and the software used to create them.

These models were quantised using hardware kindly provided by Latitude.sh.
Prompt template:

```
### Human: {prompt}
### Assistant:
```
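As a quick illustration (not part of the original card), the template above can be filled in with ordinary Python string formatting before tokenisation; the question used here is only an example:

```python
# Minimal sketch: fill the Guanaco prompt template with a user question.
# The example question is illustrative only.
prompt = "Tell me about AI"
prompt_template = f"""### Human: {prompt}
### Assistant:
"""
print(prompt_template)
```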
Multiple quantisation parameters are provided, so you can choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
---|---|---|---|---|---|---|---|
main | 4 | None | True | 16.94 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
gptq-4bit-32g-actorder_True | 4 | 32 | True | 19.44 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
gptq-4bit-64g-actorder_True | 4 | 64 | True | 18.18 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
gptq-4bit-128g-actorder_True | 4 | 128 | True | 17.55 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
gptq-8bit--1g-actorder_True | 8 | None | True | 32.99 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
gptq-8bit-128g-actorder_False | 8 | 128 | False | 33.73 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
gptq-3bit--1g-actorder_True | 3 | None | True | 12.92 GB | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
gptq-3bit-128g-actorder_False | 3 | 128 | False | 13.51 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
For example, to clone only the `gptq-4bit-32g-actorder_True` branch with git:

```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/guanaco-33B-GPTQ
```
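As an alternative to git, a single branch can also be fetched from Python. This is a minimal sketch using the `huggingface_hub` library (not shown in the original card); the branch name comes from the table above and the `local_dir` path is only an example:

```python
from huggingface_hub import snapshot_download

# Sketch: download one quantisation branch via the Hugging Face Hub API.
# revision selects the branch; local_dir is an example destination path.
snapshot_download(
    repo_id="TheBloke/guanaco-33B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="guanaco-33B-GPTQ-4bit-32g",
)
```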
Please make sure you're using the latest version of text-generation-webui.

It is strongly recommended to use text-generation-webui's one-click installers unless you're sure you know how to make a manual install.

First make sure you have AutoGPTQ installed:

```
GITHUB_ACTIONS=true pip install auto-gptq
```

Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/guanaco-33B-GPTQ"
model_basename = "guanaco-33b-GPTQ-4bit--1g.act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

"""
To download from a specific branch, use the revision parameter, as in this example:

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        revision="gptq-4bit-32g-actorder_True",
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        quantize_config=None)
"""

prompt = "Tell me about AI"
prompt_template = f'''### Human: {prompt}
### Assistant:
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.

ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For further support, and discussion on these models and AI in general, please join us on the Discord server mentioned above.

Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
Special thanks to: Luke from CarbonQuill, Aemon Algiz.

Patreon special mentions: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex, Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost, Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius, Imad Khwaja, Pierre Kircher, terasurfer, Asp the Wyvern, John Villwock, theTransient, zynix, Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.

Thank you to all my generous patrons and donaters!
No original model card was provided.