
Chat & support: my new Discord server

Want to contribute? TheBloke's Patreon page

# Vicuna 7B GPTQ 4-bit 128g

This repository contains Aleksey Korshuk's Vicuna 7B model quantised using GPTQ-for-LLaMa.

Aleksey's model is an alternative to the original Vicuna 7B model. It uses the same ShareGPT source data, but without "ethics filtering".

Other versions available

GETTING GIBBERISH OUTPUT?

Please read the two sections below carefully. You either need to update to the latest qwopqwop200 GPTQ-for-LLaMa code, or use vicuna-AlekseyKorshuk-7B-GPTQ-4bit-128g.no-act-order.pt.

Provided files

Two model files are provided. Use the safetensors file if you can, otherwise use the pt file.

Details of the files provided:

  • vicuna-AlekseyKorshuk-7B-GPTQ-4bit-128g.safetensors
    • latest safetensors format, with improved file security, created with the latest GPTQ-for-LLaMa code.
    • This has the --act-order GPTQ parameter which should result in a slightly higher inference quality.
    • Command to create:
      • python3 llama.py vicuna-AlekseyKorshuk-7B c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors vicuna-AlekseyKorshuk-7B-GPTQ-4bit-128g.safetensors
  • vicuna-AlekseyKorshuk-7B-GPTQ-4bit-128g.no-act-order.pt
    • pt format file, created without the --act-order flag.
    • This file may have slightly lower quality, but is included as it can be used without needing to use the latest GPTQ-for-LLaMa code.
    • It should therefore work with the one-click-installers on Windows, which bundle the older GPTQ-for-LLaMa code.
    • Command to create:
      • python3 llama.py vicuna-AlekseyKorshuk-7B c4 --wbits 4 --true-sequential --groupsize 128 --save vicuna-AlekseyKorshuk-7B-GPTQ-4bit-128g.no-act-order.pt
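
As a rough sanity check on the download size (my own back-of-envelope arithmetic, not a figure from the repo): 7B parameters at 4 bits per weight come to roughly 3.3 GiB of packed weights, before group-size metadata and the tokenizer/config files:

```shell
# Back-of-envelope: 7e9 parameters * 4 bits / 8 bits-per-byte / 2^30 bytes-per-GiB
python3 -c "print(f'~{7e9 * 4 / 8 / 2**30:.1f} GiB of packed weights')"
```

So either model file should be in the 3.5-4 GB range on disk; anything dramatically smaller suggests an incomplete download.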

How to run these GPTQ models in text-generation-webui

The safetensors model file was created with --act-order, which increases quantisation quality, but this requires that the latest GPTQ-for-LLaMa code is used inside the UI.

If you do not want to, or cannot, update to the latest GPTQ code, please use the file vicuna-AlekseyKorshuk-7B-GPTQ-4bit-128g.no-act-order.pt.

To use the safetensors model, which should give the highest quality, below are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and link GPTQ into the UI:

git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
git clone https://github.com/oobabooga/text-generation-webui
mkdir -p text-generation-webui/repositories
ln -s "$(pwd)/GPTQ-for-LLaMa" text-generation-webui/repositories/GPTQ-for-LLaMa  # use an absolute path so the symlink resolves correctly

Then copy this model's files into text-generation-webui/models and launch the UI as follows:

cd text-generation-webui
python server.py --model vicuna-AlekseyKorshuk-7B-GPTQ-4bit-128g --wbits 4 --groupsize 128  # add any other command line args you want
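
The "copy this model's files" step just means placing everything in a folder under models whose name matches the --model argument. A minimal sketch of the expected layout (the exact file list depends on what you download from this repo):

```shell
# Create the model folder that --model will point at
mkdir -p text-generation-webui/models/vicuna-AlekseyKorshuk-7B-GPTQ-4bit-128g
# Place config.json, the tokenizer files, and ONE of the two weight files
# (.safetensors or .no-act-order.pt) from this repo into that folder, then:
ls text-generation-webui/models/vicuna-AlekseyKorshuk-7B-GPTQ-4bit-128g
```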

The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.

If you are on Windows, or cannot use the Triton branch of GPTQ for any other reason, you can use the CUDA branch instead:

pip uninstall -qy quant_cuda # Uninstall the existing CUDA kernel, if present
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda # Clone the CUDA branch of qwopqwop's GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
python setup_cuda.py install  # Compile and install the CUDA kernel. Requires that a C/C++ compiler is installed.

Then link that into text-generation-webui/repositories as described above.
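
To confirm the compiled kernel is actually importable before launching the UI (a quick check of my own, not part of the original instructions):

```shell
# Prints whether the compiled quant_cuda extension can be found by Python
python3 -c "import importlib.util as u; print('quant_cuda found' if u.find_spec('quant_cuda') else 'quant_cuda NOT installed')"
```

If it reports NOT installed, re-run python setup_cuda.py install and check the compiler output for errors.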

Or if you can't/won't do all that, just use vicuna-AlekseyKorshuk-7B-GPTQ-4bit-128g.no-act-order.pt which won't require any update to GPTQ-for-LLaMa.

Discord

For further support, and discussions on these models and AI in general, join us at:

TheBloke AI's Discord server

Thanks, and how to contribute.

Thanks to the chirper.ai team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

Patreon special mentions : Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.

Thank you to all my generous patrons and donaters!

Original Vicuna Model Card

Model details

Model type: Vicuna is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. It is an auto-regressive language model, based on the transformer architecture.

Model date: Vicuna was trained between March 2023 and April 2023.

Organizations developing the model: The Vicuna team with members from UC Berkeley, CMU, Stanford, and UC San Diego.

Paper or resources for more information: https://vicuna.lmsys.org/

License: Apache License 2.0

Where to send questions or comments about the model: https://github.com/lm-sys/FastChat/issues

Intended use

Primary intended uses: The primary use of Vicuna is research on large language models and chatbots.

Primary intended users: The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

Training dataset

70K conversations collected from ShareGPT.com.

Evaluation dataset

A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details.