SChem5Labels-google-t5-v1_1-large-inter_model-shuffle-human_annots_str

This model is a fine-tuned version of google/t5-v1_1-large; the training dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 2.3184

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 200
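With a linear scheduler and no warmup, the learning rate decays from its initial value to zero over the total number of optimizer steps. A minimal sketch of that schedule, assuming the 25 steps per epoch implied by the Step column in the results table:

```python
def linear_lr(step, total_steps, base_lr=1e-4):
    """Linearly decay the learning rate from base_lr down to 0 over total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# 200 epochs x 25 optimizer steps/epoch = 5000 total steps
TOTAL_STEPS = 200 * 25

print(linear_lr(0, TOTAL_STEPS))      # start of training: 1e-04
print(linear_lr(2500, TOTAL_STEPS))   # halfway: 5e-05
print(linear_lr(5000, TOTAL_STEPS))   # end of training: 0.0
```

This mirrors the default behavior of a warmup-free linear schedule; the actual run may have used a small warmup, which the card does not record.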

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 20.749        | 1.0   | 25   | 23.9108         |
| 19.7523       | 2.0   | 50   | 22.3687         |
| 18.7945       | 3.0   | 75   | 18.8753         |
| 16.0524       | 4.0   | 100  | 12.6226         |
| 14.3498       | 5.0   | 125  | 9.9242          |
| 11.1053       | 6.0   | 150  | 9.2417          |
| 10.0075       | 7.0   | 175  | 9.1269          |
| 8.7375        | 8.0   | 200  | 8.9141          |
| 8.4414        | 9.0   | 225  | 8.8544          |
| 8.2513        | 10.0  | 250  | 8.8037          |
| 8.2066        | 11.0  | 275  | 8.7007          |
| 8.1662        | 12.0  | 300  | 8.5217          |
| 7.8864        | 13.0  | 325  | 8.2216          |
| 7.7552        | 14.0  | 350  | 7.9527          |
| 7.5072        | 15.0  | 375  | 7.7595          |
| 7.4867        | 16.0  | 400  | 7.6285          |
| 7.3527        | 17.0  | 425  | 7.5356          |
| 7.1715        | 18.0  | 450  | 7.4794          |
| 7.1983        | 19.0  | 475  | 7.4331          |
| 7.1102        | 20.0  | 500  | 7.3796          |
| 6.9301        | 21.0  | 525  | 5.8444          |
| 1.0577        | 22.0  | 550  | 0.9835          |
| 1.0083        | 23.0  | 575  | 0.9668          |
| 1.0127        | 24.0  | 600  | 0.9668          |
| 0.9964        | 25.0  | 625  | 0.9711          |
| 0.9749        | 26.0  | 650  | 0.9647          |
| 0.9807        | 27.0  | 675  | 0.9651          |
| 0.9811        | 28.0  | 700  | 0.9645          |
| 0.9935        | 29.0  | 725  | 0.9646          |
| 0.9805        | 30.0  | 750  | 0.9618          |
| 1.0088        | 31.0  | 775  | 0.9643          |
| 0.9957        | 32.0  | 800  | 0.9629          |
| 0.9886        | 33.0  | 825  | 0.9631          |
| 0.986         | 34.0  | 850  | 0.9624          |
| 0.9826        | 35.0  | 875  | 0.9633          |
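As a rough cross-check on the table above: the Step column advances by 25 per epoch, and with train_batch_size 128 that puts the training set at roughly 3,200 examples. This is an estimate, since the final batch of each epoch may be partial:

```python
steps_per_epoch = 25      # from the Step column: 25 optimizer steps per epoch
train_batch_size = 128    # from the hyperparameters above

# Upper-bound estimate of the training-set size; the true count lies
# somewhere in (24 * 128, 25 * 128] if the last batch is partial.
approx_train_examples = steps_per_epoch * train_batch_size
print(approx_train_examples)  # 3200
```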

Framework versions

  • Transformers 4.34.0
  • PyTorch 2.1.0+cu121
  • Datasets 2.6.1
  • Tokenizers 0.14.1