AbLang model for heavy chains

This is a Hugging Face version of AbLang: a language model for antibodies. It was introduced in this paper and first released in this repository. The model was trained on uppercase amino acids: it only works with capital-letter amino acid sequences.
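Since only uppercase residues are recognized, lowercase input should be uppercased before tokenization; a minimal sketch of preparing a sequence (the spacing between residues follows the usage example below):

seq = "evqlqesgpglvkpsetlsltc"   # lowercase fragment of the example sequence used below
prepared = ' '.join(seq.upper()) # -> "E V Q L Q E S G P G L V K P S E T L S L T C"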

Intended uses & limitations

The model can be used for protein feature extraction or fine-tuned on downstream tasks (TBD).

How to use

Here is how to use this model in PyTorch to extract features for a given antibody sequence:

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('qilowoq/AbLang_heavy')
model = AutoModel.from_pretrained('qilowoq/AbLang_heavy', trust_remote_code=True)

sequence_Example = ' '.join("EVQLQESGPGLVKPSETLSLTCTVSGGPINNAYWTWIRQPPGKGLEYLGYVYHTGVTNYNPSLKSRLTITIDTSRKQLSLSLKFVTAADSAVYYCAREWAEDGDFGNAFHVWGQGTMVAVSSASTKGPSVFPLAPSSKSTSGGTAALGCL")
encoded_input = tokenizer(sequence_Example, return_tensors='pt')
model_output = model(**encoded_input)
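
The per-residue representations are in model_output.last_hidden_state; a quick way to inspect them:

residue_embeds = model_output.last_hidden_state  # shape: (batch_size, sequence_length, hidden_size)
print(residue_embeds.shape)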

Sequence embeddings can be generated as follows:

import torch

def get_sequence_embeddings(encoded_input, model_output):
    mask = encoded_input['attention_mask'].float()
    d = {k: v for k, v in torch.nonzero(mask).cpu().numpy()} # for each sequence, position of its last (sep) token
    # make sep token invisible
    for i in d:
        mask[i, d[i]] = 0
    mask[:, 0] = 0.0 # make cls token invisible
    mask = mask.unsqueeze(-1).expand(model_output.last_hidden_state.size())
    sum_embeddings = torch.sum(model_output.last_hidden_state * mask, 1)
    sum_mask = torch.clamp(mask.sum(1), min=1e-9)
    return sum_embeddings / sum_mask

seq_embeds = get_sequence_embeddings(encoded_input, model_output)
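
As one illustration of how these embeddings can be used, the sketch below scores the similarity of two sequences with cosine similarity; the pair here is just the example sequence compared with itself and stands in for real sequences of interest:

import torch
import torch.nn.functional as F

pair = tokenizer([sequence_Example, sequence_Example], return_tensors='pt', padding=True)
with torch.no_grad():
    pair_output = model(**pair)
pair_embeds = get_sequence_embeddings(pair, pair_output)
similarity = F.cosine_similarity(pair_embeds[0], pair_embeds[1], dim=0)  # 1.0 for identical sequences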

Fine-tuning

To save memory, we suggest using LoRA:

pip install git+https://github.com/huggingface/peft.git
pip install loralib

LoRA greatly reduces the number of trainable parameters and performs on par with, or better than, fine-tuning the full model.

import torch
from peft import LoraConfig, get_peft_model

def apply_lora_bert(model):
    config = LoraConfig(
        r=8, lora_alpha=32,
        lora_dropout=0.3,
        target_modules=['query', 'value']
    )
    for param in model.parameters():
        param.requires_grad = False  # freeze the model - train adapters later
        if param.ndim == 1:
            # cast the small parameters (e.g. layernorm) to fp32 for stability
            param.data = param.data.to(torch.float32)
    model.gradient_checkpointing_enable()  # reduce number of stored activations
    model.enable_input_require_grads()
    model = get_peft_model(model, config)
    return model

model = apply_lora_bert(model)

model.print_trainable_parameters()
# trainable params: 294912 || all params: 85493760 || trainable%: 0.3449514911965505
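
A downstream fine-tuning step with the LoRA-adapted model might then look like the sketch below; the linear classification head, the label, and the optimizer settings are hypothetical placeholders rather than part of AbLang:

import torch

encoded = tokenizer(sequence_Example, return_tensors='pt')
output = model(**encoded)                        # forward pass through the LoRA-adapted model
seq_embed = get_sequence_embeddings(encoded, output)

head = torch.nn.Linear(seq_embed.size(-1), 2)    # hypothetical binary classification head
labels = torch.tensor([0])                       # hypothetical label for the example sequence
trainable = [p for p in model.parameters() if p.requires_grad] + list(head.parameters())
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

loss = torch.nn.functional.cross_entropy(head(seq_embed), labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()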

Citation

@article{Olsen2022,
  title={AbLang: An antibody language model for completing antibody sequences},
  author={Tobias H. Olsen and Iain H. Moal and Charlotte M. Deane},
  journal={bioRxiv},
  doi={10.1101/2022.01.20.477061},
  year={2022}
}