
Table of Contents

  • Model Summary
  • Uses
  • Limitations
  • Training
  • Evaluation
  • Citation

Model Summary

We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune the BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks and languages.

Multitask finetuned on xP3. Recommended for prompting in English.
Parameters: 300M, 580M, 1.2B, 3.7B, 13B, 560M, 1.1B, 1.7B, 3B, 7.1B, 176B
Finetuned Model: mt0-small, mt0-base, mt0-large, mt0-xl, mt0-xxl, bloomz-560m, bloomz-1b1, bloomz-1b7, bloomz-3b, bloomz-7b1, bloomz

Multitask finetuned on xP3mt. Recommended for prompting in non-English.
Finetuned Model: mt0-xxl-mt, bloomz-7b1-mt, bloomz-mt

Multitask finetuned on P3. Released for research purposes only. Strictly inferior to above models!
Finetuned Model: mt0-xxl-p3, bloomz-7b1-p3, bloomz-p3

Original pretrained checkpoints. Not recommended.
Pretrained Model: mt5-small, mt5-base, mt5-large, mt5-xl, mt5-xxl, bloom-560m, bloom-1b1, bloom-1b7, bloom-3b, bloom-7b1, bloom

Uses

Intended use

We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "Translate to English: Je t'aime.", the model will most likely answer "I love you.". Some prompt ideas from our paper:

    • 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
    • Suggest at least five related search terms to "Mạng neural nhân tạo".
    • Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
    • Explain in a sentence in Telugu what is backpropagation in neural networks.

Feel free to share your generations in the Community tab!

How to use

    CPU

    # pip install -q transformers
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    
    checkpoint = "bigscience/mt0-xxl-p3"
    
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
    
    inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
    outputs = model.generate(inputs)
    print(tokenizer.decode(outputs[0]))
    

    GPU

    # pip install -q transformers accelerate
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    
    checkpoint = "bigscience/mt0-xxl-p3"
    
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
    
    inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
    outputs = model.generate(inputs)
    print(tokenizer.decode(outputs[0]))
    

8-bit GPU

    # pip install -q transformers accelerate bitsandbytes
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    
    checkpoint = "bigscience/mt0-xxl-p3"
    
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
    
    inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
    outputs = model.generate(inputs)
    print(tokenizer.decode(outputs[0]))
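
Note: the snippet above follows the original card; in more recent transformers releases, passing load_in_8bit=True directly to from_pretrained is deprecated in favor of a BitsAndBytesConfig quantization config. A minimal sketch of that variant, assuming a recent transformers plus accelerate and bitsandbytes install:

# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

checkpoint = "bigscience/mt0-xxl-p3"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# Request 8-bit weights via a quantization config instead of load_in_8bit=True
model = AutoModelForSeq2SeqLM.from_pretrained(
    checkpoint,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)

inputs = tokenizer.encode("Translate to English: Je t'aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))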
    

Limitations

Prompt Engineering: Performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops in order to avoid the model trying to continue it. For example, the prompt "Translate to English: Je t'aime" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "Translate to English: Je t'aime.", "Translate to English: Je t'aime. Translation:" or "What is "Je t'aime." in English?", where it is clear to the model when it should answer. In addition, we recommend providing the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "Explain in a sentence in Telugu what is backpropagation in neural networks.". A small sketch comparing such prompt variants follows below.
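
A minimal sketch for comparing how differently phrased prompts behave, reusing the bigscience/mt0-xxl-p3 checkpoint and generation setup from the snippets above; the specific prompt strings are just the illustrative variants discussed in this paragraph:

# pip install -q transformers
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "bigscience/mt0-xxl-p3"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Prompt variants, from least to most explicit about where the input ends
prompts = [
    "Translate to English: Je t'aime",                # no full stop: the model may continue the French
    "Translate to English: Je t'aime.",               # full stop marks the end of the input
    "Translate to English: Je t'aime. Translation:",  # explicit cue for where the answer starts
]

for prompt in prompts:
    inputs = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(inputs)
    print(repr(prompt), "->", tokenizer.decode(outputs[0], skip_special_tokens=True))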

Training

Model

• Architecture: same as mt5-xxl; also refer to the config.json file (see the config-inspection sketch after this list)
• Finetuning steps: 7000
• Finetuning tokens: 12.9 billion
• Precision: bfloat16
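
To check the architecture details mentioned above without downloading the full weights, you can load just the configuration. A minimal sketch, assuming the transformers AutoConfig API; the printed fields are standard mT5/T5 config keys:

# pip install -q transformers
from transformers import AutoConfig

# Loads only config.json, not the model weights
config = AutoConfig.from_pretrained("bigscience/mt0-xxl-p3")

# A few architecture fields shared with mt5-xxl
print(config.model_type)                               # "mt5"
print(config.num_layers, config.num_heads, config.d_model)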

Hardware

• TPUs: TPUv4-256

Software

Evaluation

We refer to Table 7 from our paper and to bigscience/evaluation-results for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.

Citation

    @misc{muennighoff2022crosslingual,
          title={Crosslingual Generalization through Multitask Finetuning}, 
          author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
          year={2022},
          eprint={2211.01786},
          archivePrefix={arXiv},
          primaryClass={cs.CL}
    }