
Chat & support: my new Discord server

Want to contribute? TheBloke's Patreon page

Bigcode's Starcoder GPTQ

These files are GPTQ 4-bit model files for Bigcode's StarCoder.

It is the result of quantising the model to 4-bit using AutoGPTQ.

Repositories available

Prompting

The model was trained on GitHub code.

As such it is not an instruction model and commands like "Write a function that computes the square root." do not work well.

However, by using the Tech Assistant prompt you can turn it into a capable technical assistant.
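
As a rough illustration of the pattern (a minimal sketch only: the actual Tech Assistant prompt is a much longer hand-written preamble, and the preamble text and dialogue markers below are illustrative stand-ins, not the real prompt):

# Sketch of the assistant-prompting pattern; the real Tech Assistant prompt
# is a long, hand-written preamble -- the strings here are stand-ins.
preamble = (
    "Below are a series of dialogues between a user and a helpful, "
    "knowledgeable technical assistant.\n\n"
)
question = "Write a function that computes the square root."
prompt = f"{preamble}Question: {question}\n\nAnswer:"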

How to easily download and use this model in text-generation-webui

Please make sure you're using the latest version of text-generation-webui

  • Click the Model tab.
  • Under Download custom model or LoRA, enter TheBloke/starcoder-GPTQ.
  • Click Download.
  • The model will start downloading. Once it's finished, it will say "Done".
  • In the top left, click the refresh icon next to Model.
  • In the Model dropdown, choose the model you just downloaded: starcoder-GPTQ.
  • The model will load automatically and is then ready for use!
  • If you want any custom settings, set them and then click Save settings for this model, followed by Reload the Model in the top right.
    • Note that you no longer need to set GPTQ parameters manually; they are read automatically from the file quantize_config.json.
  • Once you're ready, click the Text Generation tab and enter a prompt to get started!
How to use this GPTQ model from Python code

    First make sure you have AutoGPTQ installed:

    pip install auto-gptq

    Then try the following example code:

    from transformers import AutoTokenizer, pipeline, logging
    from auto_gptq import AutoGPTQForCausalLM

    model_name_or_path = "TheBloke/starcoder-GPTQ"
    # Basename of the model file (see "Provided files" below), without extension
    model_basename = "gptq_model-4bit--1g"

    use_triton = False
    device = "cuda:0"

    tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

    model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
            model_basename=model_basename,
            use_safetensors=True,
            trust_remote_code=True,
            device=device,
            use_triton=use_triton,
            quantize_config=None)

    inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
    outputs = model.generate(inputs)
    print(tokenizer.decode(outputs[0]))
    
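    Since pipeline and logging are already imported above, inference can also be run through transformers' pipeline. A minimal sketch; the sampling parameters are illustrative, not tuned recommendations:

    # Suppress a spurious transformers warning printed when using pipeline
    # with an AutoGPTQ model
    logging.set_verbosity(logging.CRITICAL)

    pipe = pipeline(
        "text-generation",
        model=model,
        tokenizer=tokenizer,
        max_new_tokens=512,
        temperature=0.7,
        top_p=0.95,
        repetition_penalty=1.15
    )

    print(pipe("def print_hello_world():")[0]["generated_text"])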

    Provided files

    gptq_model-4bit--1g.safetensors

    This will work with AutoGPTQ in CUDA or Triton modes. It does not work with GPTQ-for-LLaMa (StarCoder is a GPT-2-architecture model, not a Llama one). If you have issues, please use AutoGPTQ.

    It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible.

    • gptq_model-4bit--1g.safetensors
      • Works with AutoGPTQ in CUDA or Triton modes.
      • Does not work with GPTQ-for-LLaMa.
      • Works with text-generation-webui, including one-click-installers.
      • Parameters: Groupsize = -1. Act Order / desc_act = True.
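
    For reference, these parameters map onto AutoGPTQ's BaseQuantizeConfig. A minimal sketch of the equivalent configuration; in practice you never construct this yourself, since from_quantized() reads the model's quantize_config.json automatically:

    from auto_gptq import BaseQuantizeConfig

    # Sketch of the settings described above; shown for reference only, as
    # from_quantized() loads the equivalent quantize_config.json from the repo.
    quantize_config = BaseQuantizeConfig(
        bits=4,         # 4-bit quantisation
        group_size=-1,  # no grouping, lowering VRAM requirements
        desc_act=True   # act-order, improving inference accuracy
    )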

    Discord

    For further support, and discussions on these models and AI in general, join us at:

    TheBloke AI's Discord server

    Thanks, and how to contribute.

    Thanks to the chirper.ai team!

    I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

    If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

    Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

    Special thanks to: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

    Patreon special mentions: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.

    Thank you to all my generous patrons and donors!

    Original model card: Bigcode's Starcoder

    StarCoder

    Play with the model on the StarCoder Playground.

    Table of Contents

  • Model Summary
  • Use
  • Limitations
  • Training
  • License
  • Citation

    Model Summary

    The StarCoder models are 15.5B-parameter models trained on 80+ programming languages from The Stack (v1.2), with opt-out requests excluded. The model uses Multi Query Attention, a context window of 8192 tokens, and was trained using the Fill-in-the-Middle objective on 1 trillion tokens.

    Use

    Intended use

    The model was trained on GitHub code. As such it is not an instruction model and commands like "Write a function that computes the square root." do not work well. However, by using the Tech Assistant prompt you can turn it into a capable technical assistant.

    Feel free to share your generations in the Community tab!

    Generation

    # pip install -q transformers
    from transformers import AutoModelForCausalLM, AutoTokenizer
    
    checkpoint = "bigcode/starcoder"
    device = "cuda" # for GPU usage or "cpu" for CPU usage
    
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
    
    inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
    outputs = model.generate(inputs)
    print(tokenizer.decode(outputs[0]))
    
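    By default, generate() returns only a short continuation. For longer or more varied completions, standard transformers generation arguments can be passed; the values below are illustrative, not settings recommended by the model authors:

    outputs = model.generate(
        inputs,
        max_new_tokens=128,  # length of the completion
        do_sample=True,      # sample rather than greedy-decode
        temperature=0.2,     # lower temperatures tend to suit code
        top_p=0.95,
    )
    print(tokenizer.decode(outputs[0]))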

    Fill-in-the-middle

    Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:

    input_text = "<fim_prefix>def print_hello_world():\n    <fim_suffix>\n    print('Hello world!')<fim_middle>"
    inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
    outputs = model.generate(inputs)
    print(tokenizer.decode(outputs[0]))
    
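    If only the infilled middle is wanted, the decoded output can be split on the <fim_middle> marker. A minimal sketch, assuming the model echoes the prompt and then emits the middle followed by its <|endoftext|> token (the helper name is ours):

    def extract_middle(decoded: str) -> str:
        # Everything after <fim_middle> is the generated middle segment;
        # strip the end-of-text marker if the model emitted one.
        middle = decoded.split("<fim_middle>", 1)[-1]
        return middle.replace("<|endoftext|>", "").strip()

    outputs = model.generate(inputs, max_new_tokens=32)
    print(extract_middle(tokenizer.decode(outputs[0])))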

    Attribution & Other Requirements

    The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or impose other specific requirements that must be respected. We provide a search index that lets you search through the pretraining data to identify where generated code came from, and apply the proper attribution to your code.

    Limitations

    The model has been trained on source code from 80+ programming languages. The predominant natural language in source code is English, although other languages are also present. As such, the model is capable of generating code snippets provided some context, but the generated code is not guaranteed to work as intended. It can be inefficient and may contain bugs or exploits. See the paper for an in-depth discussion of the model limitations.

    Training

    Model

    • Architecture: GPT-2 model with multi-query attention and Fill-in-the-Middle objective
    • Pretraining steps: 250k
    • Pretraining tokens: 1 trillion
    • Precision: bfloat16

    Hardware

    • GPUs: 512 Tesla A100
    • Training time: 24 days

    Software

    License

    The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement here.

    Citation

    @article{li2023starcoder,
          title={StarCoder: may the source be with you!}, 
          author={Raymond Li and Loubna Ben Allal and Yangtian Zi and Niklas Muennighoff and Denis Kocetkov and Chenghao Mou and Marc Marone and Christopher Akiki and Jia Li and Jenny Chim and Qian Liu and Evgenii Zheltonozhskii and Terry Yue Zhuo and Thomas Wang and Olivier Dehaene and Mishig Davaadorj and Joel Lamy-Poirier and João Monteiro and Oleh Shliazhko and Nicolas Gontier and Nicholas Meade and Armel Zebaze and Ming-Ho Yee and Logesh Kumar Umapathi and Jian Zhu and Benjamin Lipkin and Muhtasham Oblokulov and Zhiruo Wang and Rudra Murthy and Jason Stillerman and Siva Sankalp Patel and Dmitry Abulkhanov and Marco Zocca and Manan Dey and Zhihan Zhang and Nour Fahmy and Urvashi Bhattacharyya and Wenhao Yu and Swayam Singh and Sasha Luccioni and Paulo Villegas and Maxim Kunakov and Fedor Zhdanov and Manuel Romero and Tony Lee and Nadav Timor and Jennifer Ding and Claire Schlesinger and Hailey Schoelkopf and Jan Ebert and Tri Dao and Mayank Mishra and Alex Gu and Jennifer Robinson and Carolyn Jane Anderson and Brendan Dolan-Gavitt and Danish Contractor and Siva Reddy and Daniel Fried and Dzmitry Bahdanau and Yacine Jernite and Carlos Muñoz Ferrandis and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
          year={2023},
          eprint={2305.06161},
          archivePrefix={arXiv},
          primaryClass={cs.CL}
    }