---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-large
tags:
- generated_from_trainer
datasets:
- mp-02/cord
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-large-cord2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mp-02/cord
type: mp-02/cord
metrics:
- name: Precision
type: precision
value: 0.9810074318744839
- name: Recall
type: recall
value: 0.9842584921292461
- name: F1
type: f1
value: 0.9826302729528537
- name: Accuracy
type: accuracy
value: 0.9817017383348582
---
# layoutlmv3-large-cord2
This model is a fine-tuned version of [microsoft/layoutlmv3-large](https://huggingface.co/microsoft/layoutlmv3-large) on the mp-02/cord dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1205
- Precision: 0.9810
- Recall: 0.9843
- F1: 0.9826
- Accuracy: 0.9817
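
The precision, recall, and F1 above are entity-level scores of the kind produced by seqeval. Below is a sketch of the standard `compute_metrics` function used with the Trainer for token classification; this is an assumption, since the card does not include the exact implementation:

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
label_list = [...]  # the CORD label names in id order, e.g. model.config.id2label values


def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=2)
    # Drop special and padded tokens, which the collator labels -100.
    true_preds = [[label_list[p] for p, l in zip(pred, lab) if l != -100]
                  for pred, lab in zip(predictions, labels)]
    true_labels = [[label_list[l] for p, l in zip(pred, lab) if l != -100]
                   for pred, lab in zip(predictions, labels)]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```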
## Model description
LayoutLMv3 is a multimodal Transformer for document AI that jointly encodes text, 2D layout (word bounding boxes), and image patches. This checkpoint fine-tunes the large variant for token classification, tagging the OCR'd words of receipt images with the CORD label set (menu items, prices, subtotals, totals, and similar fields).
## Intended uses & limitations
The model is intended for key information extraction from receipt images: given a receipt and its OCR output, it assigns a CORD entity label to each word. It inherits the base model's CC BY-NC-SA 4.0 license, so commercial use is not permitted. Performance outside the receipt domain, or on languages and layouts not represented in CORD, has not been evaluated. A usage sketch follows below.
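
A minimal inference sketch, assuming this checkpoint is published as `mp-02/layoutlmv3-large-cord2` (a hypothetical repo id — substitute the actual one) and that Tesseract is installed for the processor's built-in OCR:

```python
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

# Hypothetical repo id for this checkpoint; substitute the actual one.
model_id = "mp-02/layoutlmv3-large-cord2"

# apply_ocr=True makes the processor run Tesseract to extract words and boxes.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-large", apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained(model_id)

image = Image.open("receipt.png").convert("RGB")
encoding = processor(image, return_tensors="pt")

logits = model(**encoding).logits
predicted_ids = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in predicted_ids])
```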
## Training and evaluation data
The model was fine-tuned and evaluated on mp-02/cord, which appears to be a re-hosted variant of CORD, the Consolidated Receipt Dataset of scanned receipts annotated for post-OCR parsing. The step-to-epoch ratio in the results table below (100 steps ≈ 1.25 epochs at batch size 10) implies a training split of 800 examples, matching the public CORD release.
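
A sketch of loading the dataset for inspection; the per-example fields (words, bounding boxes, NER tags, image) are assumed from the usual CORD layout:

```python
from datasets import load_dataset

dataset = load_dataset("mp-02/cord")
print(dataset)                  # splits and sizes
example = dataset["train"][0]
print(example.keys())           # assumed fields: words, bboxes, ner_tags, image
```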
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3000
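
A sketch of how these hyperparameters map onto `TrainingArguments`. The 100-step eval/save cadence is inferred from the results table, and `load_best_model_at_end` with `metric_for_best_model="f1"` is an assumption consistent with the step-1500 checkpoint (the best F1) being the reported final result:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="layoutlmv3-large-cord2",
    learning_rate=1e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    seed=42,
    max_steps=3000,
    lr_scheduler_type="linear",
    # The Adam betas/epsilon listed above are the TrainingArguments defaults.
    eval_strategy="steps",
    eval_steps=100,
    save_strategy="steps",
    save_steps=100,
    load_best_model_at_end=True,   # assumption: reported metrics match step 1500
    metric_for_best_model="f1",
)
```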
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.25 | 100 | 0.4870 | 0.8359 | 0.8691 | 0.8522 | 0.8518 |
| No log | 2.5 | 200 | 0.1731 | 0.9505 | 0.9702 | 0.9602 | 0.9584 |
| No log | 3.75 | 300 | 0.1432 | 0.9559 | 0.9693 | 0.9626 | 0.9684 |
| No log | 5.0 | 400 | 0.0925 | 0.9745 | 0.9809 | 0.9777 | 0.9808 |
| 0.4385 | 6.25 | 500 | 0.1295 | 0.9695 | 0.9760 | 0.9727 | 0.9748 |
| 0.4385 | 7.5 | 600 | 0.1169 | 0.9696 | 0.9785 | 0.9740 | 0.9758 |
| 0.4385 | 8.75 | 700 | 0.1040 | 0.9769 | 0.9826 | 0.9798 | 0.9812 |
| 0.4385 | 10.0 | 800 | 0.1268 | 0.9696 | 0.9785 | 0.9740 | 0.9771 |
| 0.4385 | 11.25 | 900 | 0.1514 | 0.9687 | 0.9735 | 0.9711 | 0.9716 |
| 0.0431 | 12.5 | 1000 | 0.1230 | 0.9794 | 0.9843 | 0.9818 | 0.9812 |
| 0.0431 | 13.75 | 1100 | 0.1327 | 0.9786 | 0.9834 | 0.9810 | 0.9794 |
| 0.0431 | 15.0 | 1200 | 0.1300 | 0.9761 | 0.9809 | 0.9785 | 0.9794 |
| 0.0431 | 16.25 | 1300 | 0.1312 | 0.9802 | 0.9843 | 0.9822 | 0.9812 |
| 0.0431 | 17.5 | 1400 | 0.1358 | 0.9761 | 0.9818 | 0.9789 | 0.9799 |
| 0.0146 | 18.75 | 1500 | 0.1205 | 0.9810 | 0.9843 | 0.9826 | 0.9817 |
| 0.0146 | 20.0 | 1600 | 0.1481 | 0.9753 | 0.9826 | 0.9790 | 0.9785 |
| 0.0146 | 21.25 | 1700 | 0.1710 | 0.9728 | 0.9768 | 0.9748 | 0.9726 |
| 0.0146 | 22.5 | 1800 | 0.1969 | 0.9622 | 0.9693 | 0.9657 | 0.9680 |
| 0.0146 | 23.75 | 1900 | 0.1613 | 0.9745 | 0.9801 | 0.9773 | 0.9780 |
| 0.0084 | 25.0 | 2000 | 0.1713 | 0.9720 | 0.9793 | 0.9757 | 0.9758 |
| 0.0084 | 26.25 | 2100 | 0.1414 | 0.9761 | 0.9826 | 0.9794 | 0.9794 |
| 0.0084 | 27.5 | 2200 | 0.1510 | 0.9737 | 0.9809 | 0.9773 | 0.9780 |
| 0.0084 | 28.75 | 2300 | 0.1435 | 0.9794 | 0.9851 | 0.9822 | 0.9803 |
| 0.0084 | 30.0 | 2400 | 0.1685 | 0.9728 | 0.9793 | 0.9761 | 0.9758 |
| 0.0047 | 31.25 | 2500 | 0.1620 | 0.9728 | 0.9793 | 0.9761 | 0.9762 |
| 0.0047 | 32.5 | 2600 | 0.1549 | 0.9761 | 0.9818 | 0.9789 | 0.9780 |
| 0.0047 | 33.75 | 2700 | 0.1566 | 0.9777 | 0.9826 | 0.9802 | 0.9785 |
| 0.0047 | 35.0 | 2800 | 0.1627 | 0.9769 | 0.9826 | 0.9798 | 0.9785 |
| 0.0047 | 36.25 | 2900 | 0.1580 | 0.9777 | 0.9826 | 0.9802 | 0.9785 |
| 0.0034 | 37.5 | 3000 | 0.1592 | 0.9777 | 0.9826 | 0.9802 | 0.9785 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1