Model:
KoichiYasuoka/roberta-base-chinese
This is a RoBERTa model pre-trained on Chinese Wikipedia texts (both simplified and traditional). Training took 48 hours 56 minutes on an NVIDIA A100-SXM4-40GB. You can fine-tune roberta-base-chinese for downstream tasks such as POS-tagging, dependency-parsing, and so on.
```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-chinese")
```
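As a quick check of the masked-LM head, the model can also be loaded through the `fill-mask` pipeline. Below is a minimal sketch; the example sentence and its predictions are illustrative only, and the mask token is read from the tokenizer rather than assumed:

```py
from transformers import pipeline

# Load the model into a fill-mask pipeline (downloads weights on first use).
fill_mask = pipeline("fill-mask", model="KoichiYasuoka/roberta-base-chinese")

# Build a sentence using the tokenizer's own mask token, so we don't have to
# assume whether it is [MASK] or <mask>.
sentence = "中国的首都是北" + fill_mask.tokenizer.mask_token + "。"

# Print the top candidate tokens for the masked position with their scores.
for candidate in fill_mask(sentence):
    print(candidate["token_str"], round(candidate["score"], 4))
```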