ProtBert-BFD model

Pretrained model on protein sequences using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. The model is trained on uppercase amino acids: it only works with capital-letter amino acids.

Model description

ProtBert-BFD is based on the Bert model and was pretrained on a large corpus of protein sequences in a self-supervised fashion. This means it was pretrained on raw protein sequences only, with no human labeling of any kind (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those protein sequences.

One important difference between our Bert model and the original Bert version is that sequences are handled as separate documents. This means next sentence prediction is not used, since each sequence is treated as a complete document. The masking follows the original Bert training: 15% of the amino acids in the input are randomly masked.

Finally, the features extracted from this model revealed that the LM embeddings from unlabeled data (only protein sequences) capture important biophysical properties governing protein shape. This implies that the model has learned some of the grammar of the language of life realized in protein sequences.

Intended uses & limitations

The model can be used for protein feature extraction or fine-tuned on downstream tasks. We have noticed that on some tasks you can gain more accuracy by fine-tuning the model rather than using it as a feature extractor.
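If you take the fine-tuning route, a minimal sketch with Hugging Face transformers looks like the following. The binary task, label count, and example sequence are assumptions made for illustration, not part of this card:

from transformers import BertForSequenceClassification, BertTokenizer

# Load the pretrained encoder and attach a randomly initialized classification head.
# num_labels=2 assumes a hypothetical binary task (e.g. membrane-bound vs. water-soluble).
tokenizer = BertTokenizer.from_pretrained('Rostlab/prot_bert_bfd', do_lower_case=False)
model = BertForSequenceClassification.from_pretrained('Rostlab/prot_bert_bfd', num_labels=2)

# Sequences must be uppercase and space-separated, as in the examples below.
inputs = tokenizer('M K T A Y I A K Q R', return_tensors='pt')
logits = model(**inputs).logits   # shape (1, num_labels); train on labeled data to fine-tune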

How to use

You can use this model directly with a pipeline for masked language modeling:

>>> from transformers import BertForMaskedLM, BertTokenizer, pipeline
>>> tokenizer = BertTokenizer.from_pretrained('Rostlab/prot_bert_bfd', do_lower_case=False)
>>> model = BertForMaskedLM.from_pretrained("Rostlab/prot_bert_bfd")
>>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> unmasker('D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T')

[{'score': 0.1165614128112793,
  'sequence': '[CLS] D L I P T S S K L V V L D T S L Q V K K A F F A L V T [SEP]',
  'token': 5,
  'token_str': 'L'},
 {'score': 0.08976086974143982,
  'sequence': '[CLS] D L I P T S S K L V V V D T S L Q V K K A F F A L V T [SEP]',
  'token': 8,
  'token_str': 'V'},
 {'score': 0.08864385634660721,
  'sequence': '[CLS] D L I P T S S K L V V S D T S L Q V K K A F F A L V T [SEP]',
  'token': 10,
  'token_str': 'S'},
 {'score': 0.06227643042802811,
  'sequence': '[CLS] D L I P T S S K L V V A D T S L Q V K K A F F A L V T [SEP]',
  'token': 6,
  'token_str': 'A'},
 {'score': 0.06194969266653061,
  'sequence': '[CLS] D L I P T S S K L V V T D T S L Q V K K A F F A L V T [SEP]',
  'token': 15,
  'token_str': 'T'}]

Here is how to use this model to get the features of a given protein sequence in PyTorch:

from transformers import BertModel, BertTokenizer
import re

# The tokenizer expects uppercase, space-separated amino acids.
tokenizer = BertTokenizer.from_pretrained('Rostlab/prot_bert_bfd', do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert_bfd")

sequence_Example = "A E T C Z A O"
# Map rare/ambiguous amino acids (U, Z, O, B) to X.
sequence_Example = re.sub(r"[UZOB]", "X", sequence_Example)

encoded_input = tokenizer(sequence_Example, return_tensors='pt')
output = model(**encoded_input)
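The hidden states returned above can serve as per-residue features. The slicing and pooling below are one common convention, not something prescribed by this card:

# output.last_hidden_state holds one embedding vector per token,
# including the [CLS] and [SEP] special tokens added by the tokenizer.
residue_embeddings = output.last_hidden_state[0, 1:-1]   # drop [CLS]/[SEP]
# A simple whole-protein representation via mean pooling over residues.
protein_embedding = residue_embeddings.mean(dim=0)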

Training data

The ProtBert-BFD model was pretrained on BFD, a dataset consisting of 2.1 billion protein sequences.

Training procedure

Preprocessing

The protein sequences are uppercased and tokenized using a single space, with a vocabulary size of 21. The inputs of the model are then of the form:

[CLS] Protein Sequence A [SEP] Protein Sequence B [SEP]

Furthermore, each protein sequence was treated as a separate document. The preprocessing step was performed twice: once with a combined length (2 sequences) of less than 512 amino acids, and another time with a combined length (2 sequences) of less than 2048 amino acids.
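To make the input format above concrete, here is a small illustration (the two short sequences are made up for the example). BertTokenizer inserts the [CLS] and [SEP] special tokens automatically when given a pair of sequences:

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('Rostlab/prot_bert_bfd', do_lower_case=False)

# Toy stand-ins for "Protein Sequence A" and "Protein Sequence B".
seq_a, seq_b = "M K T A Y", "G A V L I"
encoded = tokenizer(seq_a, seq_b)
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))
# Expected (assuming one token per amino acid):
# ['[CLS]', 'M', 'K', 'T', 'A', 'Y', '[SEP]', 'G', 'A', 'V', 'L', 'I', '[SEP]']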

The details of the masking procedure for each sequence followed the original Bert model, as follows (see the sketch after this list):

  • 15% of the amino acids are masked.
  • In 80% of the cases, the masked amino acids are replaced by [MASK].
  • In 10% of the cases, the masked amino acids are replaced by a random amino acid different from the one they replace.
  • In the remaining 10% of the cases, the masked amino acids are left as is.
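For illustration, this recipe closely matches what DataCollatorForLanguageModeling in transformers applies by default (its 10% random replacement draws from the full vocabulary rather than guaranteeing a different amino acid). The sketch below reproduces the masking on a tokenized sequence; it is not the original pretraining pipeline:

from transformers import BertTokenizer, DataCollatorForLanguageModeling

tokenizer = BertTokenizer.from_pretrained('Rostlab/prot_bert_bfd', do_lower_case=False)

# 15% of tokens are selected; the collator then applies the
# 80% [MASK] / 10% random token / 10% unchanged split internally.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

encoded = tokenizer("M K T A Y I A K Q R Q I S F V K S H F S R Q L E E R")
batch = collator([encoded])
# batch['input_ids'] holds the corrupted sequence, batch['labels'] the MLM targets
# (-100 marks positions that were not selected for masking).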

Pretraining

The model was trained on a single TPU Pod V3-1024 for a total of one million steps: 800k steps with sequence length 512 (batch size 32k) and 200k steps with sequence length 2048 (batch size 6k). The optimizer used was Lamb with a learning rate of 0.002, a weight decay of 0.01, learning rate warmup for 140k steps, and linear decay of the learning rate afterwards.
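As a rough PyTorch sketch of this schedule (not the original TPU training code; AdamW stands in for Lamb, which ships with neither PyTorch nor transformers):

import torch
from transformers import BertForMaskedLM, get_linear_schedule_with_warmup

model = BertForMaskedLM.from_pretrained('Rostlab/prot_bert_bfd')

# Hyperparameters from the card: lr 0.002, weight decay 0.01,
# 140k warmup steps, 1M training steps in total.
optimizer = torch.optim.AdamW(model.parameters(), lr=0.002, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=140_000, num_training_steps=1_000_000
)

# In the training loop, call optimizer.step() followed by scheduler.step() each step.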

Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

Test results:

Task/Dataset   secondary structure (3-states)   secondary structure (8-states)   Localization   Membrane
CASP12         76                               65                               -              -
TS115          84                               73                               -              -
CB513          83                               70                               -              -
DeepLoc        -                                -                                78             91

BibTeX entry and citation info

@article {Elnaggar2020.07.12.199554,
    author = {Elnaggar, Ahmed and Heinzinger, Michael and Dallago, Christian and Rehawi, Ghalia and Wang, Yu and Jones, Llion and Gibbs, Tom and Feher, Tamas and Angerer, Christoph and Steinegger, Martin and BHOWMIK, DEBSINDHU and Rost, Burkhard},
    title = {ProtTrans: Towards Cracking the Language of Life{\textquoteright}s Code Through Self-Supervised Deep Learning and High Performance Computing},
    elocation-id = {2020.07.12.199554},
    year = {2020},
    doi = {10.1101/2020.07.12.199554},
    publisher = {Cold Spring Harbor Laboratory},
    abstract = {Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive language models (Transformer-XL, XLNet) and two auto-encoder models (Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences (22- and 112 times the entire English Wikipedia). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs) and one TPU Pod (V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3=76-84, 8 states: Q8=65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. Availability ProtTrans: \<a href="https://github.com/agemagician/ProtTrans"\>https://github.com/agemagician/ProtTrans\</a\>Competing Interest StatementThe authors have declared no competing interest.},
    URL = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554},
    eprint = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554.full.pdf},
    journal = {bioRxiv}
}

Created by Ahmed Elnaggar/@Elnaggar_AI | LinkedIn