Model:

cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR-large


Language: multilingual

Tags:

  • biomedical
  • lexical semantics
  • cross-lingual

Datasets:

  • UMLS

[News] The cross-lingual extension of SapBERT will appear in the main conference of ACL 2021!
[News] SapBERT will appear in the proceedings of NAACL 2021!

SapBERT-XLMR

SapBERT (Liu et al. 2021) trained on UMLS 2020AB, using XLM-RoBERTa-large as the base model. Please use the [CLS] embedding as the representation of the input.
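
As a minimal sketch of this usage (loading the model named by this card and taking the [CLS] vector of a single entity name; a full batched script follows in the next section):

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR-large")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR-large")

# encode one entity name and take the [CLS] token embedding
inputs = tokenizer("covid-19", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0, :]  # shape: (1, hidden_size)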

Extracting embeddings from SapBERT

The following script converts a list of strings (entity names) into embeddings.

import numpy as np
import torch
from tqdm.auto import tqdm
from transformers import AutoTokenizer, AutoModel  

# load the tokenizer and model from this card; .cuda() requires a GPU
tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR-large")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR-large").cuda()

# replace with your own list of entity names
all_names = ["covid-19", "Coronavirus infection", "high fever", "Tumor of posterior wall of oropharynx"] 

bs = 128 # batch size during inference
all_embs = []
for i in tqdm(np.arange(0, len(all_names), bs)):
    toks = tokenizer.batch_encode_plus(all_names[i:i+bs], 
                                       padding="max_length", 
                                       max_length=25, 
                                       truncation=True,
                                       return_tensors="pt")
    toks_cuda = {k: v.cuda() for k, v in toks.items()}  # move input tensors to the GPU
    with torch.no_grad():  # inference only; gradients are not needed
        cls_rep = model(**toks_cuda)[0][:,0,:]  # use the [CLS] representation as the embedding
    all_embs.append(cls_rep.cpu().detach().numpy())

all_embs = np.concatenate(all_embs, axis=0)
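
The embeddings can then be compared directly, e.g. to link a mention to its closest entity name via cosine similarity. A minimal sketch continuing from all_names and all_embs above (scikit-learn is assumed here for convenience; it is not a stated dependency of this card):

from sklearn.metrics.pairwise import cosine_similarity

# pairwise cosine similarity between all entity name embeddings
sim = cosine_similarity(all_embs, all_embs)

# for each name, report its nearest neighbour (excluding itself)
np.fill_diagonal(sim, -1.0)
nearest = sim.argmax(axis=1)
for name, idx in zip(all_names, nearest):
    print(f"{name} -> {all_names[idx]}")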

For more details about training and evaluation, please see the SapBERT GitHub repo.

Citation

@inproceedings{liu2021learning,
    title={Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking},
    author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel},
    booktitle={Proceedings of ACL-IJCNLP 2021},
    month = aug,
    year={2021}
}