Model:
google/rembert
RemBERT model pretrained on 110 languages using a masked language modeling (MLM) objective. It was introduced in the paper Rethinking embedding coupling in pre-trained language models. A direct export of the model checkpoint was first made available in this repository. This version of the checkpoint is lightweight since it is meant to be fine-tuned for classification and excludes the output embedding weights.
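As a quick sanity check, the checkpoint can be loaded through the RemBERT integration in 🤗 Transformers. A minimal sketch (the example sentence is illustrative only; the classification head is added later during fine-tuning):

```python
from transformers import AutoTokenizer, AutoModel

# Load the pretrained encoder and its SentencePiece tokenizer.
tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = AutoModel.from_pretrained("google/rembert")

# Encode a sentence and inspect the contextual representations.
inputs = tokenizer("RemBERT is pretrained on 110 languages.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```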
RemBERT's main difference from mBERT is that the input and output embeddings are not tied. Instead, RemBERT uses small input embeddings and larger output embeddings. This makes the model more efficient since the output embeddings are discarded during fine-tuning. It is also more accurate, especially when the parameters saved on the input embeddings are reinvested into the core Transformer, as is done in RemBERT.
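The decoupled embedding sizes can be read off the model configuration. A minimal sketch, assuming the `input_embedding_size` and `output_embedding_size` attributes exposed by `RemBertConfig` in 🤗 Transformers:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/rembert")

# Attribute names as defined by RemBertConfig; values are read from the checkpoint.
print(config.input_embedding_size)   # small input embeddings
print(config.hidden_size)            # width of the core Transformer
print(config.output_embedding_size)  # larger output embeddings (excluded from this checkpoint)
```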
You should fine-tune this model for your downstream task. It is meant to be a general-purpose model, similar to mBERT. In our paper, we have successfully applied this model to tasks such as classification, question answering, NER, and POS tagging. For tasks such as text generation, you should look at models like GPT-2.
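A minimal fine-tuning sketch for sequence classification with the 🤗 Trainer API; the dataset, label count, and hyperparameters below are placeholders for illustration, not the setup used in the paper:

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = AutoModelForSequenceClassification.from_pretrained("google/rembert", num_labels=2)

# Placeholder dataset; any text-classification dataset with "text"/"label" columns works.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="rembert-classifier",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```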
The RemBERT model was pretrained on multilingual Wikipedia data covering 110 languages. The full language list is in this repository.
@inproceedings{DBLP:conf/iclr/ChungFTJR21,
  author    = {Hyung Won Chung and Thibault F{\'{e}}vry and Henry Tsai and Melvin Johnson and Sebastian Ruder},
  title     = {Rethinking Embedding Coupling in Pre-trained Language Models},
  booktitle = {9th International Conference on Learning Representations, {ICLR} 2021, Virtual Event, Austria, May 3-7, 2021},
  publisher = {OpenReview.net},
  year      = {2021},
  url       = {https://openreview.net/forum?id=xpFFI\_NtgpW},
  timestamp = {Wed, 23 Jun 2021 17:36:39 +0200},
  biburl    = {https://dblp.org/rec/conf/iclr/ChungFTJR21.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}