Model:
ethzanalytics/dolly-v2-12b-sharded
This is a sharded checkpoint (with ~4 GB shards) of the databricks/dolly-v2-12b model. See the original model for all details.
Install transformers, accelerate, and bitsandbytes.
pip install -U -q transformers bitsandbytes accelerate
Load the model in 8-bit and run inference:
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ethzanalytics/dolly-v2-12b-sharded"

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the sharded checkpoint in 8-bit and let accelerate place layers across
# the available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,
    device_map="auto",
)
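The snippet above only loads the tokenizer and model. A minimal sketch of running generation with them might look like the following; the prompt text and generation settings are illustrative assumptions, not part of the original model card.

# Minimal inference sketch (prompt and sampling parameters are assumptions).
prompt = "Explain the difference between nuclear fission and fusion."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))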