Model:

TheBloke/WizardCoder-15B-1.0-GPTQ


Chat & support: my new Discord server

Want to contribute? TheBloke's Patreon page

WizardLM's WizardCoder 15B 1.0 GPTQ

These files are GPTQ 4bit model files for WizardLM's WizardCoder 15B 1.0.

It is the result of quantising to 4bit using AutoGPTQ.

Repositories available

Prompt template

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: {prompt}

### Response:

How to easily download and use this model in text-generation-webui

Please make sure you're using the latest version of text-generation-webui.

  • Click the Model tab.
  • Under Download custom model or LoRA, enter TheBloke/WizardCoder-15B-1.0-GPTQ.
  • Click Download (or script the download, as sketched after this list).
  • The model will start downloading. Once it's finished it will say "Done".
  • In the top left, click the refresh icon next to Model.
  • In the Model dropdown, choose the model you just downloaded: WizardCoder-15B-1.0-GPTQ.
  • The model will automatically load and is now ready for use!
  • If you want any custom settings, set them and then click Save settings for this model followed by Reload the Model in the top right.
    • Note that you no longer need to set GPTQ parameters manually. These are set automatically from the file quantize_config.json.
  • Once you're ready, click the Text Generation tab and enter a prompt to get started!
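
If you'd rather script the download than use the UI, the following is a minimal sketch using the huggingface_hub library (the local directory path is just an example):

    from huggingface_hub import snapshot_download

    # Download every file in the repo to a local directory (path is illustrative)
    local_dir = snapshot_download(
        repo_id="TheBloke/WizardCoder-15B-1.0-GPTQ",
        local_dir="models/TheBloke_WizardCoder-15B-1.0-GPTQ",
    )
    print(f"Model files downloaded to {local_dir}")
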
How to use this GPTQ model from Python code

    First make sure you have AutoGPTQ installed:

    pip install auto-gptq

    Then try the following example code:

    from transformers import AutoTokenizer, pipeline, logging
    from auto_gptq import AutoGPTQForCausalLM
    
    model_name_or_path = "TheBloke/WizardCoder-15B-1.0-GPTQ"
    # Or to load it locally, pass the local download path
    # model_name_or_path = "/path/to/models/TheBloke_WizardCoder-15B-1.0-GPTQ"
    
    use_triton = False
    
    tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
    
    model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
            use_safetensors=True,
            device="cuda:0",
            use_triton=use_triton,
            quantize_config=None)
    
    # Prevent printing spurious transformers error when using pipeline with AutoGPTQ
    logging.set_verbosity(logging.CRITICAL)
    
    pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
    
    prompt_template = '''Below is an instruction that describes a task. Write a response that appropriately completes the request.
    
    ### Instruction: {prompt}
    
    ### Response:'''
    prompt = prompt_template.format(prompt="How do I sort a list in Python?")
    
    outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=50, top_p=0.95)
    
    print(outputs[0]['generated_text'])
    

    Provided files

    gptq_model-4bit--1g.safetensors

    This will work with AutoGPTQ. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa; if you have issues, please use AutoGPTQ instead.

    It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible.

    • gptq_model-4bit--1g.safetensors
      • Works with AutoGPTQ in CUDA or Triton modes.
      • Works with text-generation-webui, including one-click-installers.
      • Does not work with GPTQ-for-LLaMa.
      • Parameters: Groupsize = -1. Act Order / desc_act = True.
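
    For reference, the parameters above correspond to an AutoGPTQ quantisation config along the following lines; this is an illustrative sketch, and the shipped quantize_config.json remains the authoritative source:

        from auto_gptq import BaseQuantizeConfig

        # Mirrors the parameters listed above: 4-bit, no grouping, act-order enabled
        quantize_config = BaseQuantizeConfig(
            bits=4,         # 4-bit quantisation
            group_size=-1,  # no group size, which lowers VRAM requirements
            desc_act=True,  # act-order, which improves inference accuracy
        )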

    Discord

    For further support, and discussions on these models and AI in general, join us at:

    TheBloke AI's Discord server

    Thanks, and how to contribute.

    Thanks to the chirper.ai team!

    I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning and training.

    If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

    Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

    Special thanks to: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

    Patreon special mentions: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.

    Thank you to all my generous patrons and donaters!

    Original model card: WizardLM's WizardCoder 15B 1.0

    This is the full-weight release of WizardCoder.

    Repository: https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder

    Twitter: https://twitter.com/WizardLM_AI/status/1669109414559911937

    Paper: Coming soon, with brand-new Evol+ methods for code LLMs.

    Demos (currently supporting code-related English instructions only):

    Demo, Backup Demo1, Backup Demo2, Backup Demo3

    WizardCoder: Empowering Code Large Language Models with Evol-Instruct

    To develop our WizardCoder model, we begin by adapting the Evol-Instruct method specifically for coding tasks. This involves tailoring the prompt to the domain of code-related instructions. Subsequently, we fine-tune the Code LLM, StarCoder, utilizing the newly created instruction-following training set.

    News

    Comparing WizardCoder with the Closed-Source Models.

    The following figure shows that our WizardCoder attains the third position on this benchmark, surpassing Claude-Plus (59.8 vs. 53.0) and Bard (59.8 vs. 44.5). Notably, our model is substantially smaller than these models.

    ❗ Note: In this study, we copy the scores for HumanEval and HumanEval+ from the LLM-Humaneval-Benchmarks. Notably, all the mentioned models generate code solutions for each problem in a single attempt, and the resulting pass rate percentage is reported. Our WizardCoder generates answers using greedy decoding and is tested with the same code.

    Comparing WizardCoder with the Open-Source Models.

    The following table clearly demonstrates that our WizardCoder exhibits a substantial performance advantage over all the open-source models. ❗ If you are confused by the different scores of our model (57.3 and 59.8), please check the Notes.

    Model                  HumanEval Pass@1   MBPP Pass@1
    CodeGen-16B-Multi      18.3               20.9
    CodeGeeX               22.9               24.4
    LLaMA-33B              21.7               30.2
    LLaMA-65B              23.7               37.7
    PaLM-540B              26.2               36.8
    PaLM-Coder-540B        36.0               47.0
    PaLM 2-S               37.6               50.0
    CodeGen-16B-Mono       29.3               35.3
    Code-Cushman-001       33.5               45.9
    StarCoder-15B          33.6               43.6*
    InstructCodeT5+        35.0               --
    WizardLM-30B 1.0       37.8               --
    WizardCoder-15B 1.0    57.3               51.8

    ❗ Note: The StarCoder score on MBPP (marked *) is our reproduced result.

    ❗ Note: The above table presents a comprehensive comparison of our WizardCoder with other models on the HumanEval and MBPP benchmarks. Following the approach outlined in previous studies, we generate 20 samples for each problem to estimate the pass@1 score and evaluate with the same code. The scores of GPT-4 and GPT-3.5 reported by OpenAI are 67.0 and 48.1 (these may be early versions of GPT-4 and GPT-3.5).
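
    For context, estimating pass@1 from 20 samples per problem is typically done with the unbiased pass@k estimator introduced alongside HumanEval; the sketch below is illustrative and not code from this repository:

        import numpy as np

        def pass_at_k(n: int, c: int, k: int) -> float:
            """Unbiased pass@k: n samples generated, c of them correct."""
            if n - c < k:
                return 1.0  # every size-k subset contains a correct sample
            return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

        # e.g. 20 samples for one problem, 12 of them correct: pass@1 = 0.6
        print(pass_at_k(n=20, c=12, k=1))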

    Call for Feedback

    We welcome everyone to evaluate WizardCoder with professional and difficult instructions, and to show us examples of poor performance, along with your suggestions, in the issue discussion area. We are currently focusing on improving Evol-Instruct and hope to resolve existing weaknesses and issues in the next version of WizardCoder. After that, we will open-source the code and pipeline of the up-to-date Evol-Instruct algorithm and work with you to improve it.

    Contents

  • Online Demo

  • Fine-tuning

  • Inference

  • Evaluation

  • Citation

  • Disclaimer

    Online Demo

    We will provide our latest models for you to try for as long as possible. If you find a link is not working, please try another one. In the meantime, please try as many real-world and challenging code-related problems from your work and life as possible. We will continue to evolve our models with your feedback.

    Fine-tuning

    We fine-tune WizardCoder using the modified train.py from Llama-X. We fine-tune StarCoder-15B with the following hyperparameters:

    Hyperparameter   StarCoder-15B
    Batch size       512
    Learning rate    2e-5
    Epochs           3
    Max length       2048
    Warmup steps     30
    LR scheduler     cosine

    To reproduce our fine-tuning of WizardCoder, follow these steps:

  • According to the instructions of Llama-X, install the environment, download the training code, and deploy. (Note: deepspeed==0.9.2 and transformers==4.29.2)
  • Replace train.py with the train_wizardcoder.py from our repo (src/train_wizardcoder.py).
  • Log in to Hugging Face:
  • huggingface-cli login
    
  • Execute the following training command:
  • deepspeed train_wizardcoder.py \
        --model_name_or_path "bigcode/starcoder" \
        --data_path "/your/path/to/code_instruction_data.json" \
        --output_dir "/your/path/to/ckpt" \
        --num_train_epochs 3 \
        --model_max_length 2048 \
        --per_device_train_batch_size 16 \
        --per_device_eval_batch_size 1 \
        --gradient_accumulation_steps 4 \
        --evaluation_strategy "no" \
        --save_strategy "steps" \
        --save_steps 50 \
        --save_total_limit 2 \
        --learning_rate 2e-5 \
        --warmup_steps 30 \
        --logging_steps 2 \
        --lr_scheduler_type "cosine" \
        --report_to "tensorboard" \
        --gradient_checkpointing True \
        --deepspeed configs/deepspeed_config.json \
        --fp16 True
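
    Note that the effective batch size of this command matches the table above, assuming the 8-GPU setup used elsewhere in this card: 16 (per-device batch) × 4 (gradient accumulation steps) × 8 (GPUs) = 512.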
    

    Inference

    We provide the decoding script for WizardCoder, which reads an input file, generates a response for each sample, and consolidates the results into an output file.

    You can specify base_model, input_data_path and output_data_path in src/inference_wizardcoder.py to set the decoding model, the path of the input file and the path of the output file.

    pip install jsonlines
    

    The decoding command is:

    python src/inference_wizardcoder.py \
        --base_model "/your/path/to/ckpt" \
        --input_data_path "/your/path/to/input/data.jsonl" \
        --output_data_path "/your/path/to/output/result.jsonl"
    

    The format of data.jsonl should be:

    {"idx": 11, "Instruction": "Write a Python code to count 1 to 10."}
    {"idx": 12, "Instruction": "Write a Jave code to sum 1 to 10."}
    

    The prompt for our WizardCoder in src/inference_wizardcoder.py is:

    Below is an instruction that describes a task. Write a response that appropriately completes the request.
    
    ### Instruction:
    {instruction}
    
    ### Response:
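
    A small helper that assembles this prompt from a raw instruction might look as follows (build_prompt is our name for illustration, not a function from the repo):

        def build_prompt(instruction: str) -> str:
            # Wrap the instruction in the WizardCoder prompt format shown above
            return (
                "Below is an instruction that describes a task. "
                "Write a response that appropriately completes the request.\n\n"
                f"### Instruction:\n{instruction}\n\n### Response:"
            )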
    

    Evaluation

    We provide the evaluation script on HumanEval for WizardCoder.

  • According to the instructions of HumanEval, install the environment.
  • Run the following script to generate the answers.
  • model="/path/to/your/model"
    temp=0.2
    max_len=2048
    pred_num=200
    num_seqs_per_iter=2
    
    output_path=preds/T${temp}_N${pred_num}
    
    mkdir -p ${output_path}
    echo 'Output path: '$output_path
    echo 'Model to eval: '$model
    
    # 164 problems, 21 per GPU when gpu_num=8
    index=0
    gpu_num=8
    for ((i = 0; i < $gpu_num; i++)); do
      start_index=$((i * 21))
      end_index=$(((i + 1) * 21))
    
      gpu=$((i))
      echo 'Running process #' ${i} 'from' $start_index 'to' $end_index 'on GPU' ${gpu}
      ((index++))
      (
        CUDA_VISIBLE_DEVICES=$gpu python humaneval_gen.py --model ${model} \
          --start_index ${start_index} --end_index ${end_index} --temperature ${temp} \
          --num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path}
      ) &
      if (($index % $gpu_num == 0)); then wait; fi
    done
    
  • Run the post-processing code src/process_humaneval.py to collect the code completions from all answer files.
  • output_path=preds/T${temp}_N${pred_num}
    
    echo 'Output path: '$output_path
    python process_humaneval.py --path ${output_path} --out_path ${output_path}.jsonl --add_prompt
    
    evaluate_functional_correctness ${output_path}.jsonl
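
    As a side note on the generation step above, the loop shards the 164 HumanEval problems into contiguous ranges of 21 per GPU; since 8 × 21 = 168 exceeds 164, the final range runs past the last problem, and the out-of-range indices are assumed to be ignored by humaneval_gen.py. The same index arithmetic in Python, for illustration only:

        # Illustrative restatement of the bash sharding loop above
        num_problems = 164  # size of HumanEval
        gpu_num = 8
        per_gpu = 21        # ceil(164 / 8)

        for i in range(gpu_num):
            start_index = i * per_gpu
            end_index = min((i + 1) * per_gpu, num_problems)  # clamp the last shard
            print(f"GPU {i}: problems {start_index} to {end_index - 1}")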
    

    Citation

    Please cite this repository if you use its data or code.

    @misc{luo2023wizardcoder,
          title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct},
          author={Ziyang Luo and Can Xu and Pu Zhao and Qingfeng Sun and Xiubo Geng and Wenxiang Hu and Chongyang Tao and Jing Ma and Qingwei Lin and Daxin Jiang},
          year={2023},
          eprint={2306.08568},
          archivePrefix={arXiv},
          primaryClass={cs.CL}
    }
    

    Disclaimer

    The resources associated with this project, including code, data, and model weights, are restricted to academic research purposes only and cannot be used commercially. The content produced by any version of WizardCoder is influenced by uncontrollable variables such as randomness, so the accuracy of its output cannot be guaranteed. This project accepts no legal liability for the content of the model's output, nor does it assume responsibility for any losses incurred through the use of the associated resources and output results.