Model: zanelim/singbert-lite-sg
SingBert Lite - a BERT model for Singlish (SG) and Manglish (MY).
Similar to SingBert, but the lite version: initialized from the albert base v2 checkpoint, with pre-training further finetuned on colloquial singlish and manglish data.
You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> nlp = pipeline('fill-mask', model='zanelim/singbert-lite-sg')
>>> nlp("die [MASK] must try")

[{'sequence': '[CLS] die die must try[SEP]',
  'score': 0.7731555700302124,
  'token': 1327,
  'token_str': '▁die'},
 {'sequence': '[CLS] die also must try[SEP]',
  'score': 0.04763784259557724,
  'token': 67,
  'token_str': '▁also'},
 {'sequence': '[CLS] die still must try[SEP]',
  'score': 0.01859409362077713,
  'token': 174,
  'token_str': '▁still'},
 {'sequence': '[CLS] die u must try[SEP]',
  'score': 0.015824034810066223,
  'token': 287,
  'token_str': '▁u'},
 {'sequence': '[CLS] die is must try[SEP]',
  'score': 0.011271446943283081,
  'token': 25,
  'token_str': '▁is'}]

>>> nlp("dont play [MASK] leh")

[{'sequence': '[CLS] dont play play leh[SEP]',
  'score': 0.4365769624710083,
  'token': 418,
  'token_str': '▁play'},
 {'sequence': '[CLS] dont play punk leh[SEP]',
  'score': 0.06880936771631241,
  'token': 6769,
  'token_str': '▁punk'},
 {'sequence': '[CLS] dont play game leh[SEP]',
  'score': 0.051739856600761414,
  'token': 250,
  'token_str': '▁game'},
 {'sequence': '[CLS] dont play games leh[SEP]',
  'score': 0.045703962445259094,
  'token': 466,
  'token_str': '▁games'},
 {'sequence': '[CLS] dont play around leh[SEP]',
  'score': 0.013458190485835075,
  'token': 140,
  'token_str': '▁around'}]

>>> nlp("catch no [MASK]")

[{'sequence': '[CLS] catch no ball[SEP]',
  'score': 0.6197211146354675,
  'token': 1592,
  'token_str': '▁ball'},
 {'sequence': '[CLS] catch no balls[SEP]',
  'score': 0.08441998809576035,
  'token': 7152,
  'token_str': '▁balls'},
 {'sequence': '[CLS] catch no joke[SEP]',
  'score': 0.0676785409450531,
  'token': 8186,
  'token_str': '▁joke'},
 {'sequence': '[CLS] catch no?[SEP]',
  'score': 0.040638409554958344,
  'token': 60,
  'token_str': '?'},
 {'sequence': '[CLS] catch no one[SEP]',
  'score': 0.03546864539384842,
  'token': 53,
  'token_str': '▁one'}]

>>> nlp("confirm plus [MASK]")

[{'sequence': '[CLS] confirm plus chop[SEP]',
  'score': 0.9608421921730042,
  'token': 17144,
  'token_str': '▁chop'},
 {'sequence': '[CLS] confirm plus guarantee[SEP]',
  'score': 0.011784233152866364,
  'token': 9120,
  'token_str': '▁guarantee'},
 {'sequence': '[CLS] confirm plus confirm[SEP]',
  'score': 0.010571340098977089,
  'token': 10265,
  'token_str': '▁confirm'},
 {'sequence': '[CLS] confirm plus egg[SEP]',
  'score': 0.0033525123726576567,
  'token': 6387,
  'token_str': '▁egg'},
 {'sequence': '[CLS] confirm plus bet[SEP]',
  'score': 0.0008760977652855217,
  'token': 5676,
  'token_str': '▁bet'}]
```
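The pipeline returns a plain list of dictionaries, so predictions can be consumed programmatically. A minimal sketch (note that `top_k` is the call argument in recent transformers releases; some older versions named it `topk`):

```python
from transformers import pipeline

nlp = pipeline('fill-mask', model='zanelim/singbert-lite-sg')

# Keep only the single best completion for each prompt.
for prompt in ["die [MASK] must try", "catch no [MASK]"]:
    best = nlp(prompt, top_k=1)[0]
    print(f"{prompt!r} -> {best['token_str'].lstrip('▁')} (score={best['score']:.3f})")
```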
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('zanelim/singbert-lite-sg')
model = AlbertModel.from_pretrained("zanelim/singbert-lite-sg")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
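In recent transformers releases `output` is a model-output object whose token-level features live in `output.last_hidden_state` (older versions return a tuple instead). Continuing from the snippet above, a mean-pooled sentence vector could be derived as follows; this helper is an illustrative sketch, not part of the original card:

```python
# `output.last_hidden_state` holds one vector per token; averaging over
# the non-padding positions gives a simple sentence embedding.
mask = encoded_input['attention_mask'].unsqueeze(-1)   # shape (1, seq_len, 1)
sentence_embedding = (output.last_hidden_state * mask).sum(1) / mask.sum(1)
print(sentence_embedding.shape)  # torch.Size([1, 768]) for albert base v2
```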
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained("zanelim/singbert-lite-sg")
model = TFAlbertModel.from_pretrained("zanelim/singbert-lite-sg")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

Limitations and bias
This model was finetuned on colloquial Singlish and Manglish corpora, so it is best suited to downstream tasks involving the main languages (English, Mandarin, Malay). Because the training data comes mainly from forums, be aware of the inherent biases it carries.
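As an illustration of wiring the checkpoint into a downstream task, a hedged sketch of loading it into a sequence classifier; the two-label setup is hypothetical and not part of the original card:

```python
import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification

tokenizer = AlbertTokenizer.from_pretrained('zanelim/singbert-lite-sg')
# The classification head is newly initialized; it still needs
# finetuning on labeled data before its outputs mean anything.
model = AlbertForSequenceClassification.from_pretrained(
    'zanelim/singbert-lite-sg', num_labels=2)

inputs = tokenizer("confirm plus chop", return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))  # untrained head: scores are not yet meaningful
```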
Training data

A colloquial Singlish and Manglish corpus (both are mixtures of English, Mandarin, Tamil, Malay, and local dialects such as Hokkien, Cantonese, or Teochew). The corpus was collected from the r/singapore and r/malaysia subreddits on Reddit, and from forums such as hardwarezone.
Training procedure

Initialized with the albert base v2 vocabulary and checkpoint (pre-trained weights).
Pre-training was further finetuned with the following hyperparameters:
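For orientation only, a minimal sketch of what such continued masked-language-model pre-training looks like with the transformers Trainer API. The tiny corpus stand-in, batch size, learning rate, and epoch count below are placeholders, not the values actually used for SingBert Lite:

```python
from transformers import (AlbertTokenizer, AlbertForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')  # base vocab
model = AlbertForMaskedLM.from_pretrained('albert-base-v2')    # base checkpoint

# Toy stand-in for the real singlish/manglish corpus.
texts = ["die die must try", "dont play play leh"]
enc = tokenizer(texts, truncation=True, padding=True, max_length=32)
train_dataset = [{'input_ids': ids, 'attention_mask': am}
                 for ids, am in zip(enc['input_ids'], enc['attention_mask'])]

# Standard BERT-style masked-language-model objective (15% masking).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True,
                                           mlm_probability=0.15)

args = TrainingArguments(
    output_dir='singbert-lite-sg',
    per_device_train_batch_size=2,   # placeholder, not the documented value
    learning_rate=2e-5,              # placeholder, not the documented value
    num_train_epochs=1,              # placeholder, not the documented value
)

Trainer(model=model, args=args, data_collator=collator,
        train_dataset=train_dataset).train()
```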