
layoutlmv3-finetuned-funsd

This model is a fine-tuned version of microsoft/layoutlmv3-base on the nielsr/funsd-layoutlmv3 dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1164
  • Precision: 0.9026
  • Recall: 0.9130
  • F1: 0.9078
  • Accuracy: 0.8330

The script for training can be found here: https://github.com/huggingface/transformers/tree/main/examples/research_projects/layoutlmv3
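
For inference, the fine-tuned checkpoint can be used as a standard token classifier through the processor/model API. The sketch below is a minimal example rather than code from the card author: the repository id layoutlmv3-finetuned-funsd, the image file form.png and the sample words/boxes are placeholders, and the boxes are assumed to already be normalized to the 0-1000 scale LayoutLMv3 expects.

```python
from PIL import Image
import torch
from transformers import AutoProcessor, AutoModelForTokenClassification

# Placeholder checkpoint path; substitute the actual location of this model.
model_id = "layoutlmv3-finetuned-funsd"

# apply_ocr=False: words and boxes are supplied manually (0-1000 normalized boxes).
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = AutoModelForTokenClassification.from_pretrained(model_id)

image = Image.open("form.png").convert("RGB")        # a scanned form page (placeholder file)
words = ["Invoice", "Date:", "2022-04-01"]           # example OCR output
boxes = [[70, 50, 180, 80], [200, 50, 300, 80], [310, 50, 460, 80]]

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits                # shape: (1, seq_len, num_labels)

predicted_ids = logits.argmax(-1).squeeze().tolist()
tokens = processor.tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
for token, pred in zip(tokens, predicted_ids):
    print(token, model.config.id2label[pred])
```

With apply_ocr=True (the default, which requires pytesseract) the processor runs OCR itself, so only the image needs to be passed.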

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of an equivalent Trainer setup follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 1000
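
The configuration above corresponds roughly to the Trainer setup sketched below. This is an approximation, not the actual training script (which is linked above): the column names tokens/bboxes/ner_tags/image for the nielsr/funsd-layoutlmv3 dataset, the single-GPU per-device batch size and the 100-step evaluation cadence are assumptions, and metric computation is shown separately after the results table.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoProcessor,
    Trainer,
    TrainingArguments,
    default_data_collator,
)

# Column names below are assumed from the usual FUNSD preprocessing;
# check them against the nielsr/funsd-layoutlmv3 dataset card.
dataset = load_dataset("nielsr/funsd-layoutlmv3")
label_list = dataset["train"].features["ner_tags"].feature.names
id2label = dict(enumerate(label_list))
label2id = {v: k for k, v in id2label.items()}

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = AutoModelForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base", id2label=id2label, label2id=label2id
)

def preprocess(examples):
    # Tokenizes the words, aligns word-level labels to sub-tokens
    # (ignored positions get -100) and resizes/normalizes the page image.
    return processor(
        examples["image"],
        examples["tokens"],
        boxes=examples["bboxes"],
        word_labels=examples["ner_tags"],
        truncation=True,
        padding="max_length",
    )

columns = dataset["train"].column_names
train_dataset = dataset["train"].map(preprocess, batched=True, remove_columns=columns)
eval_dataset = dataset["test"].map(preprocess, batched=True, remove_columns=columns)

# Mirrors the hyperparameters listed above; Adam betas/epsilon are the defaults.
training_args = TrainingArguments(
    output_dir="layoutlmv3-finetuned-funsd",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    max_steps=1000,
    lr_scheduler_type="linear",
    evaluation_strategy="steps",
    eval_steps=100,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=default_data_collator,
    # compute_metrics: see the seqeval sketch after the results table below
)
trainer.train()
```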

Training results

Training Loss   Epoch   Step   Validation Loss   Precision   Recall   F1       Accuracy
No log          10.0    100    0.5238            0.8366      0.8860   0.8606   0.8410
No log          20.0    200    0.6930            0.8751      0.8965   0.8857   0.8322
No log          30.0    300    0.7784            0.8902      0.9080   0.8990   0.8414
No log          40.0    400    0.9056            0.8916      0.9050   0.8983   0.8364
0.2429          50.0    500    1.0016            0.8954      0.9075   0.9014   0.8298
0.2429          60.0    600    1.0097            0.8899      0.8970   0.8934   0.8294
0.2429          70.0    700    1.0722            0.9035      0.9085   0.9060   0.8315
0.2429          80.0    800    1.0884            0.8905      0.9105   0.9004   0.8269
0.2429          90.0    900    1.1292            0.8938      0.9090   0.9013   0.8279
0.0098          100.0   1000   1.1164            0.9026      0.9130   0.9078   0.8330
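
The Precision, Recall, F1 and Accuracy columns are entity-level scores of the kind produced by seqeval in the standard Transformers token-classification setup. The sketch below shows how such a compute_metrics function is typically wired up; this is an assumption about the linked script, it reuses id2label from the training sketch above, and it treats -100 as the ignored-label value.

```python
import numpy as np
from datasets import load_metric

# Entity-level metrics via seqeval (assumed, matching the Transformers
# token-classification examples); id2label comes from the training sketch above.
metric = load_metric("seqeval")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=2)

    # Keep only positions with a real label (-100 marks padding/special tokens).
    true_predictions = [
        [id2label[p] for p, l in zip(pred_row, label_row) if l != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]
    true_labels = [
        [id2label[l] for p, l in zip(pred_row, label_row) if l != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]

    results = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```

Passing this function to the Trainer above via compute_metrics=compute_metrics yields the columns logged every 100 steps.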

Framework versions

  • Transformers 4.19.0.dev0
  • Pytorch 1.11.0+cu113
  • Datasets 2.0.0
  • Tokenizers 0.11.6