BLOOM LM

BigScience Large Open-science Open-access Multilingual Language Model

Model Card

Version 1.0 / 26.May.2022

Table of Contents

  • Model Details
  • Uses
  • Training Data
  • Risks and Limitations
  • Evaluation
  • Recommendations
  • Glossary and Calculations
  • More Information
  • Model Card Authors
    Model Details

    Basics

    This section provides information for anyone who wants to know about the model.

    Developed by: BigScience (website)

    • All collaborators are either volunteers or have an agreement with their employer. (Further breakdown of participants forthcoming.)

    Model Type: Transformer-based Language Model

    Version: 1.0.0

    Languages: Multiple; see training data

    License: RAIL License v1.0 (link)

    Release Date Estimate: Monday, 11.July.2022

    Send Questions to: bigscience-contact@googlegroups.com

    Cite as: BigScience, BigScience Large Open-science Open-access Multilingual (BLOOM) Language Model. International, May 2021-May 2022

    Funded by:

    • The French government.

    • Hugging Face (website).

    • Organizations of contributors. (Further breakdown of organizations forthcoming.)

    Technical Specifications

    This section provides information for people who work on model development.

    Please see the BLOOM training README for full details on replicating training.

    Model Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code):

    • Decoder-only architecture

    • Layer normalization applied to the word embedding layer (StableEmbedding; see code, paper)

    • ALiBi positional encodings (see paper), with GeLU activation functions (see the sketch after this list)

    • 3,002,557,440 parameters:

      • 642,252,800 embedding parameters

      • 30 layers, 32 attention heads

      • Hidden layers are 2560-dimensional

      • Sequence length of 2048 tokens used (see BLOOM tokenizer, tokenizer description)
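
    Instead of adding positional embeddings, ALiBi biases each attention head's logits with a head-specific linear penalty on query-key distance. A minimal sketch of the standard formulation from the ALiBi paper (not the actual training code):

        import torch

        def alibi_slopes(n_heads: int) -> torch.Tensor:
            # Geometric sequence of head-specific slopes, m_i = 2^(-8i/n);
            # exact when n_heads is a power of two (32 heads here).
            start = 2 ** (-8.0 / n_heads)
            return torch.tensor([start ** (i + 1) for i in range(n_heads)])

        def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
            # Element [h, q, k] = m_h * (k - q): zero on the diagonal and
            # increasingly negative for more distant past keys, once the
            # causal mask removes future positions (k > q).
            pos = torch.arange(seq_len)
            distance = pos[None, :] - pos[:, None]                # (q, k)
            slopes = alibi_slopes(n_heads)
            return slopes[:, None, None] * distance[None, :, :]  # (heads, q, k)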

    Objective Function: Cross Entropy with mean reduction (see API documentation).
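
    As a sketch of what this objective looks like in PyTorch (the standard causal-LM loss, not the actual Megatron-DeepSpeed code):

        import torch
        import torch.nn.functional as F

        def lm_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
            # logits: (batch, seq_len, vocab); labels: (batch, seq_len).
            # Shift so position t predicts token t+1, then average over tokens.
            shift_logits = logits[:, :-1, :].contiguous()
            shift_labels = labels[:, 1:].contiguous()
            return F.cross_entropy(
                shift_logits.view(-1, shift_logits.size(-1)),
                shift_labels.view(-1),
                reduction="mean",
            )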

    Compute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement).

    • Hardware: 384 A100 80GB GPUs (48 nodes):

      • Additional 32 A100 80GB GPUs (4 nodes) in reserve

      • 8 GPUs per node, using NVLink 4 inter-GPU connects and 4 OmniPath links

      • CPU: AMD

      • CPU memory: 512GB per node

      • GPU memory: 640GB per node

      • Inter-node connect: Omni-Path Architecture (OPA)

      • NCCL-communications network: a fully dedicated subnet

      • Disk I/O network: shared network with other types of nodes

    • Software:

      • Megatron-DeepSpeed (Github link)

      • DeepSpeed (Github link)

      • PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link)

      • apex (Github link)

    Training

    Training logs: Tensorboard link

    • Number of epochs: 1 (current target)

    • Dates:

      • Started 11th March, 2022 11:42am PST

      • Ended 5th July, 2022

    • Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments)

    • Server training location: Île-de-France, France

    Tokenization

    The BLOOM tokenizer (link) is a learned subword tokenizer trained using:

    • A byte-level Byte Pair Encoding (BPE) algorithm

    • A simple pre-tokenization rule, no normalization

    • A vocabulary size of 250,680

    It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
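
    As a minimal usage sketch, the released tokenizer can be loaded with the Hugging Face transformers library (the bigscience/bloom repository id is assumed here; substitute the checkpoint you actually use):

        from transformers import AutoTokenizer

        # Repository id assumed; adjust to the released checkpoint.
        tok = AutoTokenizer.from_pretrained("bigscience/bloom")
        print(tok.vocab_size)                    # expected: 250680
        ids = tok("BigScience BLOOM").input_ids  # byte-level BPE ids
        print(tok.convert_ids_to_tokens(ids))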

    Environmental Impact

    The training supercomputer, Jean Zay (website), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.

    Estimated carbon emissions: (Forthcoming upon completion of training.)

    Estimated electricity usage: (Forthcoming upon completion of training.)

    Uses

    This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.

    Intended Use

    This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.

    Direct Use
    • Text generation (see the usage sketch at the end of this list)

    • Exploring characteristics of language generated by a language model

      • Examples: Cloze tests, counterfactuals, generations with reframings
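
    A minimal text-generation sketch with the transformers pipeline API (the bigscience/bloom-3b repository id is an assumption; substitute the checkpoint you intend to use):

        from transformers import pipeline

        # Repository id assumed; pick the BLOOM checkpoint you intend to use.
        generator = pipeline("text-generation", model="bigscience/bloom-3b")
        out = generator("The BigScience workshop was", max_new_tokens=30)
        print(out[0]["generated_text"])
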
    Downstream Use
    • Tasks that leverage language models include: Information Extraction, Question Answering, Summarization

    Misuse and Out-of-scope Use

    This section addresses what users ought not do with the model.

    See the BLOOM License, Attachment A, for detailed usage restrictions. The list below is non-exhaustive, but covers some easily foreseeable problematic use cases.

    Out-of-scope Uses

    Using the model in high-stakes settings is out of scope. The model is not designed for critical decisions, nor for uses with any material consequences on an individual's livelihood or wellbeing. The model can output content that appears factual but is not correct.

    Out-of-scope Uses Include:
    • Usage in biomedical domains, political and legal domains, or finance domains

    • Usage for evaluating or scoring individuals, such as for employment, education, or credit

    • Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct

    Misuse

    Intentionally using the model for harm, violating human rights, or other kinds of malicious activities is a misuse of this model. This includes:

    • Spam generation

    • Disinformation and influence operations

    • Disparagement and defamation

    • Harassment and abuse

    • Deception

    • Unconsented impersonation and imitation

    • Unconsented surveillance

    • Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions

    Intended Users

    Direct Users
    • General Public

    • Researchers

    • Students

    • Educators

    • Engineers/developers

    • Non-commercial entities

    • Community advocates, including human and civil rights groups

    Indirect Users and Others Affected (Parties Prenantes)
    • People and groups referred to by the LLM

    • People and groups exposed to outputs of, or decisions based on, the LLM

    • People and groups whose original work is included in the LLM

    Training Data

    This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.

    Details for each dataset are provided in individual Data Cards.

    Training data includes:

    • 45 natural languages

    • 12 programming languages

    • In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more).

    Languages

    The pie chart shows the distribution of languages in training data.

    The following table shows the further distribution of Niger-Congo and Indic languages in the training data.

    Niger-Congo Percentage Indic Percentage
    Chi Tumbuka 0.00002 Assamese 0.01
    Kikuyu 0.00004 Odia 0.04
    Bambara 0.00004 Gujarati 0.04
    Akan 0.00007 Marathi 0.05
    Xitsonga 0.00007 Punjabi 0.05
    Sesotho 0.00007 Kannada 0.06
    Chi Chewa 0.0001 Nepali 0.07
    Setswana 0.0002 Telugu 0.09
    Northern Sotho 0.0002 Malayalam 0.10
    Fon 0.0002 Urdu 0.10
    Kirundi 0.0003 Tamil 0.20
    Wolof 0.0004 Bengali 0.50
    Kuganda 0.0004 Hindi 0.70
    Chi Shona 0.001
    Isi Zulu 0.001
    Igbo 0.001
    Xhosa 0.001
    Kinyarwanda 0.003
    Yoruba 0.006
    Swahili 0.02

    The following table shows the distribution of programming languages.

    Extension Language Number of files
    java Java 5,407,724
    php PHP 4,942,186
    cpp C++ 2,503,930
    py Python 2,435,072
    js JavaScript 1,905,518
    cs C# 1,577,347
    rb Ruby 678,413
    cc C++ 443,054
    hpp C++ 391,048
    lua Lua 352,317
    go Go 227,763
    ts TypeScript 195,254
    C C 134,537
    scala Scala 92,052
    hh C++ 67,161
    H C++ 55,899
    tsx TypeScript 33,107
    rs Rust 29,693
    phpt PHP 9,702
    c++ C++ 1,342
    h++ C++ 791
    php3 PHP 540
    phps PHP 270
    php5 PHP 166
    php4 PHP 29

    Risks and Limitations

    This section identifies foreseeable harms and misunderstandings.

    Model may:

    • Overrepresent some viewpoints and underrepresent others

    • Contain stereotypes

    • Contain personal information

    • Generate:

      • Hateful, abusive, or violent language

      • Discriminatory or prejudicial language

      • Content that may not be appropriate for all settings, including sexual content

    • Make errors, including producing incorrect information as if it were factual

    • Generate irrelevant or repetitive outputs

    Evaluation

    This section describes the evaluation protocols and provides the results.

    Metrics

    This section describes the different ways performance is calculated and why.

    Includes:

    Metric Why chosen
    Perplexity Standard metric for quantifying model improvements during training
    Cross Entropy Loss Standard objective for language models

    And multiple different metrics for specific tasks. (More evaluation metrics forthcoming upon completion of evaluation protocol.)

    Factors

    This section lists some different aspects of BLOOM models. Its focus is on aspects that are likely to give rise to high variance in model behavior.

    • Language, such as English or Yoruba

    • Domain, such as newswire or stories

    • Demographic characteristics, such as gender or nationality

    Results

    Results are based on the Factors and Metrics.

    Zero-shot evaluations:

    See this repository for JSON files: https://github.com/bigscience-workshop/evaluation-results

    Task Language Metric BLOOM-2B5
    arc_challenge eng acc ↑ 0.28
    arc_easy eng acc ↑ 0.595
    axb (Median of 10 prompts) eng acc ↑ 0.443
    axg (Median of 10 prompts) eng acc ↑ 0.5
    boolq (Median of 11 prompts) eng acc ↑ 0.617
    cb (Median of 15 prompts) eng acc ↑ 0.304
    cola (Median of 5 prompts) eng acc ↑ 0.611
    copa (Median of 9 prompts) eng acc ↑ 0.63
    crows_pairs_english (Median of 6 prompts) eng acc ↑ 0.497
    crows_pairs_french (Median of 7 prompts) fra acc ↑ 0.503
    diabla (Median of 2 prompts) eng acc ↑ 0.289
    gsarti/flores_101_afr afr byte_perplexity ↓ 6.501
    gsarti/flores_101_amh amh byte_perplexity ↓ 3.973
    gsarti/flores_101_ara ara byte_perplexity ↓ 1.808
    gsarti/flores_101_asm asm byte_perplexity ↓ 5.699
    gsarti/flores_101_ast ast byte_perplexity ↓ 3.925
    gsarti/flores_101_azj azj byte_perplexity ↓ 6.943
    gsarti/flores_101_bel bel byte_perplexity ↓ 3.614
    gsarti/flores_101_ben ben byte_perplexity ↓ 5.121
    gsarti/flores_101_bos bos byte_perplexity ↓ 5.653
    gsarti/flores_101_bul bul byte_perplexity ↓ 2.701
    gsarti/flores_101_cat cat byte_perplexity ↓ 2.305
    gsarti/flores_101_ceb ceb byte_perplexity ↓ 6.291
    gsarti/flores_101_ces ces byte_perplexity ↓ 5.447
    gsarti/flores_101_ckb ckb byte_perplexity ↓ 3.726
    gsarti/flores_101_cym cym byte_perplexity ↓ 12.539
    gsarti/flores_101_dan dan byte_perplexity ↓ 5.183
    gsarti/flores_101_deu deu byte_perplexity ↓ 3.118
    gsarti/flores_101_ell ell byte_perplexity ↓ 2.468
    gsarti/flores_101_eng eng byte_perplexity ↓ 2.019
    gsarti/flores_101_est est byte_perplexity ↓ 9.117
    gsarti/flores_101_fas fas byte_perplexity ↓ 3.058
    gsarti/flores_101_fin fin byte_perplexity ↓ 6.847
    gsarti/flores_101_fra fra byte_perplexity ↓ 1.998
    gsarti/flores_101_ful ful byte_perplexity ↓ 11.466
    gsarti/flores_101_gle gle byte_perplexity ↓ 8.681
    gsarti/flores_101_glg glg byte_perplexity ↓ 3.03
    gsarti/flores_101_guj guj byte_perplexity ↓ 4.955
    gsarti/flores_101_hau hau byte_perplexity ↓ 10.758
    gsarti/flores_101_heb heb byte_perplexity ↓ 3.6
    gsarti/flores_101_hin hin byte_perplexity ↓ 4.713
    gsarti/flores_101_hrv hrv byte_perplexity ↓ 5.822
    gsarti/flores_101_hun hun byte_perplexity ↓ 6.44
    gsarti/flores_101_hye hye byte_perplexity ↓ 3.658
    gsarti/flores_101_ibo ibo byte_perplexity ↓ 5.565
    gsarti/flores_101_ind ind byte_perplexity ↓ 2.16
    gsarti/flores_101_isl isl byte_perplexity ↓ 8.082
    gsarti/flores_101_ita ita byte_perplexity ↓ 2.969
    gsarti/flores_101_jav jav byte_perplexity ↓ 7.057
    gsarti/flores_101_jpn jpn byte_perplexity ↓ 2.776
    gsarti/flores_101_kam kam byte_perplexity ↓ 11.073
    gsarti/flores_101_kan kan byte_perplexity ↓ 5.552
    gsarti/flores_101_kat kat byte_perplexity ↓ 2.523
    gsarti/flores_101_kaz kaz byte_perplexity ↓ 3.39
    gsarti/flores_101_kea kea byte_perplexity ↓ 8.919
    gsarti/flores_101_kir kir byte_perplexity ↓ 3.729
    gsarti/flores_101_kor kor byte_perplexity ↓ 3.933
    gsarti/flores_101_lao lao byte_perplexity ↓ 2.908
    gsarti/flores_101_lav lav byte_perplexity ↓ 7.777
    gsarti/flores_101_lin lin byte_perplexity ↓ 7.525
    gsarti/flores_101_lit lit byte_perplexity ↓ 7.369
    gsarti/flores_101_ltz ltz byte_perplexity ↓ 8.801
    gsarti/flores_101_lug lug byte_perplexity ↓ 8.483
    gsarti/flores_101_luo luo byte_perplexity ↓ 11.976
    gsarti/flores_101_mal mal byte_perplexity ↓ 4.616
    gsarti/flores_101_mar mar byte_perplexity ↓ 5.483
    gsarti/flores_101_mkd mkd byte_perplexity ↓ 2.966
    gsarti/flores_101_mlt mlt byte_perplexity ↓ 15.005
    gsarti/flores_101_mon mon byte_perplexity ↓ 3.411
    gsarti/flores_101_mri mri byte_perplexity ↓ 7.474
    gsarti/flores_101_msa msa byte_perplexity ↓ 2.571
    gsarti/flores_101_mya mya byte_perplexity ↓ 2.414
    gsarti/flores_101_nld nld byte_perplexity ↓ 4.128
    gsarti/flores_101_nob nob byte_perplexity ↓ 5.403
    gsarti/flores_101_npi npi byte_perplexity ↓ 5.199
    gsarti/flores_101_nso nso byte_perplexity ↓ 8.155
    gsarti/flores_101_nya nya byte_perplexity ↓ 8.18
    gsarti/flores_101_oci oci byte_perplexity ↓ 4.862
    gsarti/flores_101_orm orm byte_perplexity ↓ 12.912
    gsarti/flores_101_ory ory byte_perplexity ↓ 5.189
    gsarti/flores_101_pan pan byte_perplexity ↓ 4.698
    gsarti/flores_101_pol pol byte_perplexity ↓ 4.626
    gsarti/flores_101_por por byte_perplexity ↓ 1.975
    gsarti/flores_101_pus pus byte_perplexity ↓ 4.496
    gsarti/flores_101_ron ron byte_perplexity ↓ 4.965
    gsarti/flores_101_rus rus byte_perplexity ↓ 2.05
    gsarti/flores_101_slk slk byte_perplexity ↓ 6.451
    gsarti/flores_101_slv slv byte_perplexity ↓ 6.62
    gsarti/flores_101_sna sna byte_perplexity ↓ 8.462
    gsarti/flores_101_snd snd byte_perplexity ↓ 5.466
    gsarti/flores_101_som som byte_perplexity ↓ 11.959
    gsarti/flores_101_spa spa byte_perplexity ↓ 1.897
    gsarti/flores_101_srp srp byte_perplexity ↓ 2.871
    gsarti/flores_101_swe swe byte_perplexity ↓ 5.055
    gsarti/flores_101_swh swh byte_perplexity ↓ 3.697
    gsarti/flores_101_tam tam byte_perplexity ↓ 4.539
    gsarti/flores_101_tel tel byte_perplexity ↓ 5.807
    gsarti/flores_101_tgk tgk byte_perplexity ↓ 3.599
    gsarti/flores_101_tgl tgl byte_perplexity ↓ 5.667
    gsarti/flores_101_tha tha byte_perplexity ↓ 2.366
    gsarti/flores_101_tur tur byte_perplexity ↓ 4.885
    gsarti/flores_101_ukr ukr byte_perplexity ↓ 2.724
    gsarti/flores_101_umb umb byte_perplexity ↓ 12.767
    gsarti/flores_101_urd urd byte_perplexity ↓ 1.98
    gsarti/flores_101_uzb uzb byte_perplexity ↓ 12.002
    gsarti/flores_101_vie vie byte_perplexity ↓ 1.766
    gsarti/flores_101_wol wol byte_perplexity ↓ 9.144
    gsarti/flores_101_xho xho byte_perplexity ↓ 7.403
    gsarti/flores_101_yor yor byte_perplexity ↓ 5.913
    gsarti/flores_101_zho_simpl zho_simpl byte_perplexity ↓ 2.277
    gsarti/flores_101_zho_trad zho_trad byte_perplexity ↓ 2.518
    gsarti/flores_101_zul zul byte_perplexity ↓ 8.534
    headqa esp acc ↑ 0.264
    hellaswag eng acc ↑ 0.412
    logiqa eng acc ↑ 0.207
    mathqa eng acc ↑ 0.25
    mc_taco eng em ↑ 0.119
    mnli (Median of 15 prompts) eng acc ↑ 0.355
    mnli_mismatched (Median of 15 prompts) eng acc ↑ 0.352
    mrpc eng acc ↑ 0.586
    multirc (Median of 11 prompts) eng acc ↑ 0.538
    openbookqa eng acc ↑ 0.216
    piqa eng acc ↑ 0.708
    prost eng acc ↑ 0.227
    pubmedqa eng acc ↑ 0.616
    qnli eng acc ↑ 0.507
    qqp (Median of 7 prompts) eng acc ↑ 0.384
    race eng acc ↑ 0.352
    rte (Median of 6 prompts) eng acc ↑ 0.477
    sciq eng acc ↑ 0.892
    sst (Median of 6 prompts) eng acc ↑ 0.518
    triviaqa eng acc ↑ 0.042
    tydiqa_primary (Median of 24 prompts) eng acc ↑ 0.301
    webqs eng acc ↑ 0.017
    wic (Median of 11 prompts) eng acc ↑ 0.502
    winogrande eng acc ↑ 0.586
    wnli (Median of 6 prompts) eng acc ↑ 0.472
    wsc (Median of 11 prompts) eng acc ↑ 0.442
    humaneval python pass@1 ↑ 0.155
    humaneval python pass@10 ↑ 0.322
    humaneval python pass@100 ↑ 0.555
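
    The pass@k rows are commonly computed with the unbiased estimator from the HumanEval paper (Chen et al., 2021); a sketch, assuming n generated samples per problem of which c pass the unit tests:

        from math import comb

        def pass_at_k(n: int, c: int, k: int) -> float:
            # Probability that at least one of k samples drawn (without
            # replacement) from n is correct, given c correct samples.
            if n - c < k:
                return 1.0
            return 1.0 - comb(n - c, k) / comb(n, k)

        # e.g. 200 samples per problem (an assumed n), 31 of them correct:
        print(pass_at_k(200, 31, 1))  # 0.155, matching the pass@1 row above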

    Train-time Evaluation:

    As of 25.May.2022, 15:00 PST:

    • Training Loss: 2.0

    • Validation Loss: 2.2

    • Perplexity: 8.9

    Recommendations

    This section provides information on warnings and potential mitigations.

    • Indirect users should be made aware when the content they're working with is created by the LLM.

    • Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.

    • Models pretrained with the LLM should include an updated Model Card.

    • Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.

    Glossary and Calculations

    This section defines common terms and how metrics are calculated.
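
    A few standard definitions for the metrics reported above (stated here in their common formulations; the exact evaluation code is linked under Results):

    • Perplexity: the exponential of the per-token cross-entropy loss, perplexity = exp(loss). For example, the reported validation loss of 2.2 gives exp(2.2) ≈ 9.0, consistent with the reported perplexity of 8.9.

    • Byte perplexity (byte_perplexity, ↓ lower is better): perplexity normalized per byte of text rather than per token, which makes scores more comparable across languages whose tokenizations differ in granularity.

    • acc (↑) and em (↑): accuracy and exact match, respectively.

    • pass@k (↑): the probability that at least one of k generated programs passes a task's unit tests (see the estimator sketch under Results).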

    More Information

    Dataset Creation

    Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling

    Technical Specifications

    Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours

    More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml

    Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model

    Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml

    Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss

    Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md

    Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md

    Initial Results

    Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book

    Model Card Authors

    Ordered roughly chronologically and by amount of time spent.

    Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff