Model: sijunhe/nezha-base-wwm
Please use the 'Bert'-related tokenizer classes together with the 'Nezha'-related model classes.
NEZHA: Neural Contextualized Representation for Chinese Language Understanding, by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
The original checkpoints can be found here.
```python
from transformers import BertTokenizer, NezhaModel

# Load the tokenizer (BERT-compatible) and the NEZHA encoder from the Hub
tokenizer = BertTokenizer.from_pretrained("sijunhe/nezha-base-wwm")
model = NezhaModel.from_pretrained("sijunhe/nezha-base-wwm")

# Encode a Chinese sentence and run a forward pass
text = "我爱北京天安门"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
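The forward pass returns the generic `transformers` base-model outputs. As a minimal sketch continuing from the snippet above (the attribute names below are the standard Hugging Face output fields, not specifics documented in this card), the contextual representations can be inspected like this:

```python
# Continues from the example above: `output` is the NezhaModel forward result.
# last_hidden_state holds one contextual vector per input token;
# pooler_output is a single pooled vector for the whole sequence.
last_hidden_state = output.last_hidden_state   # (batch_size, sequence_length, hidden_size)
pooler_output = output.pooler_output           # (batch_size, hidden_size)
print(last_hidden_state.shape, pooler_output.shape)
```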