Model:
albert-large-v1
Pretrained model on the English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. As with all ALBERT models, this model is uncased: it does not make a difference between english and English.
Disclaimer: the team releasing ALBERT did not write a model card for this model, so this model card has been written by the Hugging Face team.
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on raw texts only, with no human labeling of any kind (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This allows the model to learn a bidirectional representation of the sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer, so all layers have the same weights. Using repeated layers results in a small memory footprint, but the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, since the model still has to iterate through the same number of (repeated) layers.
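A minimal sketch (assuming `transformers` and PyTorch are installed) that illustrates the effect of this cross-layer parameter sharing by counting the model's parameters; despite 24 hidden layers, the total stays small:

```python
from transformers import AlbertModel

# Load ALBERT large v1; all Transformer layers share one set of weights.
model = AlbertModel.from_pretrained("albert-large-v1")

# Because of cross-layer sharing, the total parameter count is far smaller
# than a BERT-like model with 24 independent layers of the same hidden size.
total_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total_params / 1e6:.1f}M")
```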
This is the first version of the large model. Version 2 differs from version 1 due to different dropout rates, additional training data, and longer training; it has better results in nearly all downstream tasks.
This model has the following configuration:

- 24 repeating layers
- 128 embedding dimension
- 1024 hidden dimension
- 16 attention heads
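A minimal sketch (assuming `transformers` is installed) that reads these values directly from the published configuration:

```python
from transformers import AlbertConfig

# Load the configuration shipped with the checkpoint and print the key sizes.
config = AlbertConfig.from_pretrained("albert-large-v1")
print(config.num_hidden_layers)    # number of (repeated) layers
print(config.embedding_size)       # embedding dimension
print(config.hidden_size)          # hidden dimension
print(config.num_attention_heads)  # attention heads
```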
You can use the raw model for either masked language modeling or next sentence prediction, but it is mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole (potentially masked) sentence to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
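As a minimal, hedged sketch of such fine-tuning (the two-label setup and example texts below are purely hypothetical), you can load the checkpoint with a sequence-classification head and train it like any PyTorch model:

```python
import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification

# Hypothetical two-class task; a fresh classification head is added on top of
# the pretrained ALBERT encoder.
tokenizer = AlbertTokenizer.from_pretrained("albert-large-v1")
model = AlbertForSequenceClassification.from_pretrained("albert-large-v1", num_labels=2)

texts = ["I loved this movie.", "This was a waste of time."]  # toy examples
labels = torch.tensor([1, 0])

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs, labels=labels)

# outputs.loss can be backpropagated in a standard training loop
# (or the whole setup handed to the Trainer API instead).
outputs.loss.backward()
print(outputs.loss.item(), outputs.logits.shape)
```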
You can use this model directly with a pipeline for masked language modeling:
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-large-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[
  {"sequence": "[CLS] hello i'm a modeling model.[SEP]", "score": 0.05816134437918663, "token": 12807, "token_str": "▁modeling"},
  {"sequence": "[CLS] hello i'm a modelling model.[SEP]", "score": 0.03748830780386925, "token": 23089, "token_str": "▁modelling"},
  {"sequence": "[CLS] hello i'm a model model.[SEP]", "score": 0.033725276589393616, "token": 1061, "token_str": "▁model"},
  {"sequence": "[CLS] hello i'm a runway model.[SEP]", "score": 0.017313428223133087, "token": 8014, "token_str": "▁runway"},
  {"sequence": "[CLS] hello i'm a lingerie model.[SEP]", "score": 0.014405295252799988, "token": 29104, "token_str": "▁lingerie"}
]
Here is how to use this model to get the features of a given text in PyTorch:
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-large-v1')
model = AlbertModel.from_pretrained("albert-large-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
and in TensorFlow:
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-large-v1')
model = TFAlbertModel.from_pretrained("albert-large-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
Even if the training data used for this model could be characterized as fairly neutral, the model can still produce biased predictions:
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-large-v1')
>>> unmasker("The man worked as a [MASK].")
[
  {"sequence": "[CLS] the man worked as a chauffeur.[SEP]", "score": 0.029577180743217468, "token": 28744, "token_str": "▁chauffeur"},
  {"sequence": "[CLS] the man worked as a janitor.[SEP]", "score": 0.028865724802017212, "token": 29477, "token_str": "▁janitor"},
  {"sequence": "[CLS] the man worked as a shoemaker.[SEP]", "score": 0.02581118606030941, "token": 29024, "token_str": "▁shoemaker"},
  {"sequence": "[CLS] the man worked as a blacksmith.[SEP]", "score": 0.01849772222340107, "token": 21238, "token_str": "▁blacksmith"},
  {"sequence": "[CLS] the man worked as a lawyer.[SEP]", "score": 0.01820771023631096, "token": 3672, "token_str": "▁lawyer"}
]
>>> unmasker("The woman worked as a [MASK].")
[
  {"sequence": "[CLS] the woman worked as a receptionist.[SEP]", "score": 0.04604868218302727, "token": 25331, "token_str": "▁receptionist"},
  {"sequence": "[CLS] the woman worked as a janitor.[SEP]", "score": 0.028220869600772858, "token": 29477, "token_str": "▁janitor"},
  {"sequence": "[CLS] the woman worked as a paramedic.[SEP]", "score": 0.0261906236410141, "token": 23386, "token_str": "▁paramedic"},
  {"sequence": "[CLS] the woman worked as a chauffeur.[SEP]", "score": 0.024797942489385605, "token": 28744, "token_str": "▁chauffeur"},
  {"sequence": "[CLS] the woman worked as a waitress.[SEP]", "score": 0.024124596267938614, "token": 13678, "token_str": "▁waitress"}
]
This bias will also affect all fine-tuned versions of this model.
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books, and English Wikipedia (excluding lists, tables and headers).
The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 30,000. The inputs of the model are then of the form:
[CLS] Sentence A [SEP] Sentence B [SEP]
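A minimal sketch (assuming `transformers` is installed) showing how the tokenizer produces this format for a pair of sentences; the two example sentences are hypothetical:

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-large-v1")

# Encode a (hypothetical) sentence pair; the tokenizer lowercases the text
# and inserts the special tokens itself.
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.decode(encoded["input_ids"]))
# -> "[CLS] sentence a[SEP] sentence b[SEP]"
```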
The ALBERT pretraining procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by [MASK].
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
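This 80/10/10 rule is what the `DataCollatorForLanguageModeling` class in `transformers` implements; a minimal sketch that reproduces the masking on a toy input (the example sentence is hypothetical):

```python
from transformers import AlbertTokenizer, DataCollatorForLanguageModeling

tokenizer = AlbertTokenizer.from_pretrained("albert-large-v1")

# mlm_probability=0.15 masks 15% of the tokens; of those, the collator replaces
# 80% with [MASK], 10% with a random token, and leaves 10% unchanged.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

encoded = tokenizer("A hypothetical sentence used only for illustration.")
batch = collator([encoded])
print(batch["input_ids"])  # some tokens replaced by the [MASK] id
print(batch["labels"])     # original ids at masked positions, -100 elsewhere
```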
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|---|---|---|---|---|---|---|
| V2 | | | | | | |
| ALBERT-base | 82.3 | 90.2/83.2 | 82.1/79.3 | 84.6 | 92.9 | 66.8 |
| ALBERT-large | 85.7 | 91.8/85.2 | 84.9/81.8 | 86.5 | 94.9 | 75.2 |
| ALBERT-xlarge | 87.9 | 92.9/86.4 | 87.9/84.1 | 87.9 | 95.4 | 80.7 |
| ALBERT-xxlarge | 90.9 | 94.6/89.1 | 89.8/86.9 | 90.6 | 96.8 | 86.8 |
| V1 | | | | | | |
| ALBERT-base | 80.1 | 89.3/82.3 | 80.0/77.1 | 81.6 | 90.3 | 64.0 |
| ALBERT-large | 82.4 | 90.6/83.9 | 82.3/79.4 | 83.5 | 91.7 | 68.5 |
| ALBERT-xlarge | 85.5 | 92.5/86.1 | 86.1/83.1 | 86.4 | 92.4 | 74.8 |
| ALBERT-xxlarge | 91.0 | 94.8/89.3 | 90.2/87.4 | 90.8 | 96.9 | 86.5 |
@article{DBLP:journals/corr/abs-1909-11942,
  author        = {Zhenzhong Lan and
                   Mingda Chen and
                   Sebastian Goodman and
                   Kevin Gimpel and
                   Piyush Sharma and
                   Radu Soricut},
  title         = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations},
  journal       = {CoRR},
  volume        = {abs/1909.11942},
  year          = {2019},
  url           = {http://arxiv.org/abs/1909.11942},
  archivePrefix = {arXiv},
  eprint        = {1909.11942},
  timestamp     = {Fri, 27 Sep 2019 13:04:21 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}