Model:
bert-large-uncased-whole-word-masking
Pretrained model on English using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is uncased: it does not make a difference between english and English.
Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is otherwise identical: each masked WordPiece token is predicted independently.
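As a rough illustration of the difference, here is a minimal sketch (not the original data-preparation code) of how whole word masking groups WordPiece sub-tokens back into words before masking them together; the `whole_word_mask` helper and the 15% selection rate are assumptions for demonstration only.

```python
import random

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')

def whole_word_mask(text, mask_prob=0.15):
    # Illustrative only: group WordPiece pieces ("##..." continuations) back into words,
    # then mask every piece of a selected word at once.
    tokens = tokenizer.tokenize(text)
    word_spans, current = [], []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and current:
            current.append(i)
        else:
            if current:
                word_spans.append(current)
            current = [i]
    if current:
        word_spans.append(current)

    masked = list(tokens)
    for span in word_spans:
        if random.random() < mask_prob:
            for i in span:
                masked[i] = tokenizer.mask_token  # all pieces of the word are masked together
    return masked

print(whole_word_mask("Philadelphia is a city in Pennsylvania."))
```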
Disclaimer: the team releasing BERT did not write a model card for this model, so this model card has been written by the Hugging Face team.
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): a fraction of the words in a sentence are masked, and the model has to predict them from the rest of the sentence, which lets it learn a bidirectional representation of the text.
- Next sentence prediction (NSP): the model receives two sentences as input and has to predict whether they followed each other in the original text.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the BERT model as inputs.
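As a sketch of that feature-extraction workflow (the [CLS] pooling choice and the scikit-learn classifier are illustrative assumptions, not part of the original model card):

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')
model = BertModel.from_pretrained('bert-large-uncased-whole-word-masking')
model.eval()

sentences = ["I loved this movie.", "This was a waste of time."]
labels = [1, 0]  # toy labels for the sketch

with torch.no_grad():
    encoded = tokenizer(sentences, padding=True, return_tensors='pt')
    # Use the final hidden state of the [CLS] token as a fixed-size sentence feature.
    features = model(**encoded).last_hidden_state[:, 0, :].numpy()

# Any standard classifier can be trained on top of these features.
clf = LogisticRegression().fit(features, labels)
```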
This model has the following configuration:

- 24 layers
- 1024 hidden dimension
- 16 attention heads
- 336M parameters
You can use the raw model for either masked language modeling or next sentence prediction, but it is mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at tasks that use a whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.
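For instance, a variant of this checkpoint fine-tuned on SQuAD can be loaded through the question-answering pipeline; the checkpoint name below is assumed, so check the model hub for the exact identifier:

```python
from transformers import pipeline

# Fine-tuned question-answering variant of this checkpoint (name assumed; verify on the model hub).
qa = pipeline('question-answering', model='bert-large-uncased-whole-word-masking-finetuned-squad')
qa(question="What objective was used for pretraining?",
   context="BERT was pretrained with a masked language modeling objective on English text.")
```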
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-uncased-whole-word-masking')
>>> unmasker("Hello I'm a [MASK] model.")
[
    {
        'sequence': "[CLS] hello i'm a fashion model. [SEP]",
        'score': 0.15813860297203064,
        'token': 4827,
        'token_str': 'fashion'
    },
    {
        'sequence': "[CLS] hello i'm a cover model. [SEP]",
        'score': 0.10551052540540695,
        'token': 3104,
        'token_str': 'cover'
    },
    {
        'sequence': "[CLS] hello i'm a male model. [SEP]",
        'score': 0.08340442180633545,
        'token': 3287,
        'token_str': 'male'
    },
    {
        'sequence': "[CLS] hello i'm a super model. [SEP]",
        'score': 0.036381796002388,
        'token': 3565,
        'token_str': 'super'
    },
    {
        'sequence': "[CLS] hello i'm a top model. [SEP]",
        'score': 0.03609578311443329,
        'token': 2327,
        'token_str': 'top'
    }
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')
model = BertModel.from_pretrained("bert-large-uncased-whole-word-masking")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
And here is how to use this model in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')
model = TFBertModel.from_pretrained("bert-large-uncased-whole-word-masking")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
Even though the training data used for this model could be characterized as fairly neutral, the model can still produce biased predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-uncased-whole-word-masking')
>>> unmasker("The man worked as a [MASK].")
[
    {
        "sequence": "[CLS] the man worked as a waiter. [SEP]",
        "score": 0.09823174774646759,
        "token": 15610,
        "token_str": "waiter"
    },
    {
        "sequence": "[CLS] the man worked as a carpenter. [SEP]",
        "score": 0.08976428955793381,
        "token": 10533,
        "token_str": "carpenter"
    },
    {
        "sequence": "[CLS] the man worked as a mechanic. [SEP]",
        "score": 0.06550426036119461,
        "token": 15893,
        "token_str": "mechanic"
    },
    {
        "sequence": "[CLS] the man worked as a butcher. [SEP]",
        "score": 0.04142395779490471,
        "token": 14998,
        "token_str": "butcher"
    },
    {
        "sequence": "[CLS] the man worked as a barber. [SEP]",
        "score": 0.03680137172341347,
        "token": 13362,
        "token_str": "barber"
    }
]

>>> unmasker("The woman worked as a [MASK].")
[
    {
        "sequence": "[CLS] the woman worked as a waitress. [SEP]",
        "score": 0.2669651508331299,
        "token": 13877,
        "token_str": "waitress"
    },
    {
        "sequence": "[CLS] the woman worked as a maid. [SEP]",
        "score": 0.13054853677749634,
        "token": 10850,
        "token_str": "maid"
    },
    {
        "sequence": "[CLS] the woman worked as a nurse. [SEP]",
        "score": 0.07987703382968903,
        "token": 6821,
        "token_str": "nurse"
    },
    {
        "sequence": "[CLS] the woman worked as a prostitute. [SEP]",
        "score": 0.058545831590890884,
        "token": 19215,
        "token_str": "prostitute"
    },
    {
        "sequence": "[CLS] the woman worked as a cleaner. [SEP]",
        "score": 0.03834161534905434,
        "token": 20133,
        "token_str": "cleaner"
    }
]
```
This bias will also affect all fine-tuned versions of this model.
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books, and on English Wikipedia (excluding lists, tables, and headers).
The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are then of the form:
[CLS] Sentence A [SEP] Sentence B [SEP]
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is a random sentence from elsewhere in the corpus. Note that what counts as a "sentence" here is a consecutive span of text, usually longer than a single sentence. The only constraint is that the combined length of the two "sentences" is less than 512 tokens.
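A small sketch of how a sentence pair is encoded into that layout (the example sentences are arbitrary):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')

encoded = tokenizer("The man went to the store.", "He bought milk.")
# Roughly: ['[CLS]', 'the', 'man', 'went', 'to', 'the', 'store', '.', '[SEP]',
#           'he', 'bought', 'milk', '.', '[SEP]']
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))
# token_type_ids distinguish "sentence A" (0) from "sentence B" (1).
print(encoded['token_type_ids'])
```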
The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of those cases, the masked tokens are replaced by [MASK].
- In 10% of those cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of cases, the masked tokens are left as is.
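A minimal sketch of this per-token 80/10/10 procedure (whole-word grouping is omitted for brevity; the `mask_tokens` helper is illustrative, not the original preprocessing code):

```python
import random

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')

def mask_tokens(token_ids, mask_prob=0.15):
    """Illustrative 80/10/10 masking; -100 labels are ignored by the MLM loss."""
    token_ids = list(token_ids)
    labels = [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if tok in tokenizer.all_special_ids:
            continue  # never mask [CLS] / [SEP] / padding
        if random.random() < mask_prob:
            labels[i] = tok
            r = random.random()
            if r < 0.8:
                token_ids[i] = tokenizer.mask_token_id                 # 80%: replace with [MASK]
            elif r < 0.9:
                token_ids[i] = random.randrange(tokenizer.vocab_size)  # 10%: random token
            # remaining 10%: keep the original token unchanged
    return token_ids, labels

ids = tokenizer("The quick brown fox jumps over the lazy dog.")['input_ids']
print(mask_tokens(ids))
```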
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 tokens for the remaining 10%. The optimizer used was Adam with a learning rate of 1e-4, β1 = 0.9, β2 = 0.999, a weight decay of 0.01, learning rate warmup for the first 10,000 steps, and linear decay of the learning rate afterwards.
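A sketch of an equivalent optimizer and schedule setup in PyTorch, using `torch.optim.AdamW` and transformers' `get_linear_schedule_with_warmup` as stand-ins (the original training used TensorFlow on TPUs, so this is only an approximation of the setup described above):

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained('bert-large-uncased-whole-word-masking')

# Adam with lr=1e-4, betas=(0.9, 0.999) and weight decay 0.01, as described above.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)

# 10,000 warmup steps, then linear decay over the remaining steps up to one million.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)
```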
When fine-tuned on downstream tasks, this model achieves the following results:
| Model | SQUAD 1.1 F1/EM | Multi NLI Accuracy |
|---|---|---|
| BERT-Large, Uncased (Whole Word Masking) | 92.8/86.7 | 87.07 |
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
  author        = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova},
  title         = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding},
  journal       = {CoRR},
  volume        = {abs/1810.04805},
  year          = {2018},
  url           = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint        = {1810.04805},
  timestamp     = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```