Model:
cjvt/t5-sl-small
t5-sl-small is a Slovene T5 model. It has 8 encoder and 8 decoder layers, with about 60 million parameters in total. It was trained for 5 epochs on the following corpora:
The model is described in detail and evaluated in our paper "Sequence to sequence pretraining for a less-resourced Slovenian language".
2022-07-21: updated with v2 of the model; the old one is still accessible at cjvt/legacy-t5-sl-small.
2022-09-21: added a fast tokenizer (Huggingface's TokenizerFast class; the tokenization itself remains the same).
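Since the card notes that a fast tokenizer is available, the model can presumably be loaded with the standard `transformers` API. A minimal sketch, assuming the `transformers` library is installed; the `generate_text` helper and the example prompt are illustrative, not part of the model card:

```python
# Minimal usage sketch for cjvt/t5-sl-small via Hugging Face transformers.
# The helper function and prompt below are illustrative assumptions,
# not taken from the model card.
from transformers import T5ForConditionalGeneration, T5TokenizerFast

MODEL_NAME = "cjvt/t5-sl-small"


def generate_text(prompt: str, max_new_tokens: int = 32) -> str:
    """Load the model and tokenizer, then generate a continuation."""
    # Downloads the weights on first use; requires network access.
    tokenizer = T5TokenizerFast.from_pretrained(MODEL_NAME)
    model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Hypothetical Slovene prompt; output depends on the model.
    print(generate_text("Slovenija je"))
```

As with other T5 checkpoints, the same checkpoint name works with `AutoTokenizer` and `AutoModelForSeq2SeqLM` if you prefer the auto classes.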