Model:
ethzanalytics/stablelm-tuned-alpha-3b-sharded

This is a sharded checkpoint of the model (with shards of ~2 GB each). For all details, please refer to the original model.
Install transformers, accelerate, and bitsandbytes:
```sh
pip install -U -q transformers bitsandbytes accelerate
```
Load the model in 8-bit and run inference:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ethzanalytics/stablelm-tuned-alpha-3b-sharded"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,   # requires bitsandbytes and a CUDA GPU
    device_map="auto",   # let accelerate place the weights
)
```
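With the model and tokenizer loaded, inference is a matter of formatting a prompt and calling `generate`. The sketch below assumes the `<|SYSTEM|>`/`<|USER|>`/`<|ASSISTANT|>` chat convention used by the StableLM-Tuned-Alpha family; the system message text and generation parameters are illustrative choices, not fixed requirements. The actual `generate` call is shown commented out because it needs the GPU-loaded model from the snippet above.

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the StableLM-Tuned-Alpha chat format
    (<|SYSTEM|>/<|USER|>/<|ASSISTANT|> special tokens)."""
    system = (
        "<|SYSTEM|># StableLM Tuned (Alpha version)\n"
        "- StableLM is a helpful and harmless open-source AI language model.\n"
    )
    return f"{system}<|USER|>{user_message}<|ASSISTANT|>"

prompt = build_prompt("What is a sharded checkpoint?")

# With the 8-bit model from the previous snippet loaded:
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# outputs = model.generate(**inputs, max_new_tokens=64,
#                          do_sample=True, temperature=0.7)
# print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```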