Model:
sijunhe/nezha-cn-base
Please use the tokenizer classes associated with 'Bert' and the model classes associated with 'Nezha'.
NEZHA: Neural Contextualized Representation for Chinese Language Understanding — Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen, Qun Liu
The original checkpoints can be found here.
```python
from transformers import BertTokenizer, NezhaModel

# NEZHA reuses BERT's WordPiece vocabulary, so the BERT tokenizer
# is paired with the Nezha model class.
tokenizer = BertTokenizer.from_pretrained("sijunhe/nezha-cn-base")
model = NezhaModel.from_pretrained("sijunhe/nezha-cn-base")

text = "我爱北京天安门"
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```