Model: optimum/distilbert-base-uncased-finetuned-banking77
Task: Text Classification
License: apache-2.0
This model is a fine-tuned version of distilbert-base-uncased on the banking77 dataset. It achieves the following results on the evaluation set:
- Loss: 0.3126
- Accuracy: 0.9225
- F1: 0.9226
Model description: More information needed

Intended uses & limitations: More information needed

Training and evaluation data: More information needed
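Since the usage details above are sparse, here is a minimal inference sketch, assuming the checkpoint loads with the standard transformers text-classification pipeline. The example utterance and the commented output shape are illustrative only, not taken from the card.

```python
# Minimal inference sketch (assumption: the checkpoint works with the
# standard transformers text-classification pipeline).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="optimum/distilbert-base-uncased-finetuned-banking77",
)

# Illustrative banking query; the model predicts one of the 77 banking77 intents.
print(classifier("I still haven't received my new card, when will it arrive?"))
# Expected output shape: [{'label': '<intent name>', 'score': <float>}]
```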
The following hyperparameters were used during training:
Training results:

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|---------------|-------|------|-----------------|----------|--------|
| No log        | 1.0   | 126  | 1.1457          | 0.7896   | 0.7685 |
| No log        | 2.0   | 252  | 0.4673          | 0.8906   | 0.8889 |
| No log        | 3.0   | 378  | 0.3488          | 0.9150   | 0.9151 |
| 0.9787        | 4.0   | 504  | 0.3238          | 0.9180   | 0.9179 |
| 0.9787        | 5.0   | 630  | 0.3126          | 0.9225   | 0.9226 |
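The hyperparameter list itself is not reproduced in this copy of the card, so the sketch below only shows the general shape of a fine-tuning run that would produce a results table like the one above. The 5-epoch count and per-epoch evaluation are taken from the table; the learning rate, batch sizes, and the weighted F1 averaging are assumed placeholder values, not the card's actual settings.

```python
# Sketch of a distilbert-base-uncased fine-tuning run on banking77 with the
# transformers Trainer. Values marked ASSUMPTION are placeholders, not the
# hyperparameters used for the published checkpoint.
import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("banking77")
num_labels = dataset["train"].features["label"].num_classes  # 77 intents

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=num_labels
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    # Report accuracy and F1 as in the results table; weighted averaging for
    # F1 is an ASSUMPTION, since the card does not state the averaging method.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),
    }

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-banking77",
    num_train_epochs=5,               # matches the 5 epochs in the table
    evaluation_strategy="epoch",      # evaluation after every epoch, as above
    learning_rate=2e-5,               # ASSUMPTION
    per_device_train_batch_size=16,   # ASSUMPTION
    per_device_eval_batch_size=16,    # ASSUMPTION
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```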