Model:
allegro/herbert-klej-cased-tokenizer-v1
The HerBERT tokenizer is a character-level byte-pair encoding (BPE) tokenizer with a vocabulary of 50k tokens. It was trained with the fastBPE library on Wolne Lektury and publicly available subsets of the National Corpus of Polish. The tokenizer uses the XLMTokenizer implementation from transformers.
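To illustrate the character-level BPE idea behind the tokenizer, here is a minimal, self-contained sketch of BPE merge learning in plain Python. It is not the fastBPE implementation or the actual HerBERT vocabulary; it only demonstrates the greedy pair-merging procedure on a toy string.

```python
from collections import Counter

def most_frequent_pair(tokens):
    # Count adjacent symbol pairs and return the most frequent one.
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens, pair):
    # Replace every occurrence of `pair` with a single merged symbol.
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

def learn_bpe(text, num_merges):
    # Start from individual characters, then repeatedly merge the
    # most frequent adjacent pair; each merge grows the vocabulary.
    tokens = list(text)
    merges = []
    for _ in range(num_merges):
        pair = most_frequent_pair(tokens)
        merges.append(pair)
        tokens = merge_pair(tokens, pair)
    return tokens, merges

tokens, merges = learn_bpe("abababab", 2)
# First merge fuses ('a', 'b') into 'ab', the second fuses ('ab', 'ab').
```

In the real tokenizer, 50k such merges were learned from the Polish corpora listed above.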
The HerBERT tokenizer should be used together with the HerBERT model:
```python
from transformers import XLMTokenizer, RobertaModel

tokenizer = XLMTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
model = RobertaModel.from_pretrained("allegro/herbert-klej-cased-v1")

encoded_input = tokenizer.encode(
    "Kto ma lepszą sztukę, ma lepszy rząd – to jasne.",
    return_tensors="pt",
)
outputs = model(encoded_input)
```
License: CC BY-SA 4.0
If you use this tokenizer, please cite the following paper:
```bibtex
@inproceedings{rybak-etal-2020-klej,
    title = "{KLEJ}: Comprehensive Benchmark for {P}olish Language Understanding",
    author = "Rybak, Piotr and Mroczkowski, Robert and Tracz, Janusz and Gawlik, Ireneusz",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-main.111",
    doi = "10.18653/v1/2020.acl-main.111",
    pages = "1191--1201",
}
```
The tokenizer was created by the Allegro Machine Learning Research team.
You can contact us at: klejbenchmark@allegro.pl