Model: cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token
Language: en
SapBERT by Liu et al. (2020), trained with UMLS 2020AA (English only), using microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext as the base model. Please use the mean-pooling of the outputs as the representation.

Extracting embeddings from SapBERT

The following script converts a list of strings (entity names) into embeddings.
```python
import numpy as np
import torch
from tqdm.auto import tqdm
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token").cuda()

# replace with your own list of entity names
all_names = ["covid-19", "Coronavirus infection", "high fever", "Tumor of posterior wall of oropharynx"]

bs = 128  # batch size during inference
all_embs = []
for i in tqdm(np.arange(0, len(all_names), bs)):
    toks = tokenizer.batch_encode_plus(all_names[i:i+bs],
                                       padding="max_length",
                                       max_length=25,
                                       truncation=True,
                                       return_tensors="pt")
    toks_cuda = {}
    for k, v in toks.items():
        toks_cuda[k] = v.cuda()
    cls_rep = model(**toks_cuda)[0].mean(1)  # use mean pooling representation as the embedding
    all_embs.append(cls_rep.cpu().detach().numpy())

all_embs = np.concatenate(all_embs, axis=0)
```
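A common downstream use of these embeddings is dictionary-based entity linking via nearest-neighbour search. The sketch below is a minimal illustration, not part of the official SapBERT pipeline: it assumes the `tokenizer`, `model`, `all_names`, and `all_embs` variables from the script above, and the query mention "cardiopathy" is only a hypothetical example. It embeds the query with the same mean pooling and ranks the dictionary names by cosine similarity.

```python
import numpy as np
import torch

def embed(names):
    # Embed a small list of names with the same mean pooling as above.
    toks = tokenizer(names, padding=True, truncation=True,
                     max_length=25, return_tensors="pt")
    toks = {k: v.cuda() for k, v in toks.items()}
    with torch.no_grad():
        out = model(**toks)[0].mean(1)  # mean pooling over the token dimension
    return out.cpu().numpy()

query_emb = embed(["cardiopathy"])  # hypothetical mention to link

# cosine similarity between the query and every dictionary entry
a = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
b = all_embs / np.linalg.norm(all_embs, axis=1, keepdims=True)
scores = (a @ b.T).ravel()

# print the top-3 closest dictionary names
for idx in np.argsort(-scores)[:3]:
    print(all_names[idx], float(scores[idx]))
```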
For more details about training and evaluation, see the SapBERT GitHub repo.
```bibtex
@inproceedings{liu-etal-2021-self,
    title = "Self-Alignment Pretraining for Biomedical Entity Representations",
    author = "Liu, Fangyu and Shareghi, Ehsan and Meng, Zaiqiao and Basaldella, Marco and Collier, Nigel",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jun,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.naacl-main.334",
    pages = "4228--4238",
    abstract = "Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SapBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SapBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BioBERT, SciBERT and PubMedBERT, our pretraining scheme proves to be both effective and robust.",
}
```