---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_keras_callback
model-index:
- name: chunwoolee0/distilroberta-base-finetuned-wikitext2
  results: []
---

# chunwoolee0/distilroberta-base-finetuned-wikitext2

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the [wikitext (wikitext-2-raw-v1)](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1/test) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1557
- Validation Loss: 1.8964
- Epoch: 0

## Model description

The base checkpoint, DistilRoBERTa, is a distilled version of the RoBERTa-base model and follows the same training procedure as DistilBERT.

## Intended uses & limitations

This model is an exercise in fine-tuning an NLP language model for the fill-mask task (a usage sketch appears at the end of this card).

## Training and evaluation data

The wikitext-2-raw-v1 configuration of the WikiText dataset is used (a loading sketch appears at the end of this card).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1557     | 1.8964          | 0     |

### Framework versions

- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
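
## Usage example

Since the intended use is fill-mask, the checkpoint can be queried through the standard `pipeline` API. This is a minimal sketch, assuming the checkpoint is available on the Hub under the name above; the example sentence is illustrative only.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a fill-mask pipeline.
fill_mask = pipeline(
    "fill-mask",
    model="chunwoolee0/distilroberta-base-finetuned-wikitext2",
)

# RoBERTa-style tokenizers use <mask> as the mask token.
for prediction in fill_mask("The capital of France is <mask>."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.4f}")
```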
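
## Training sketch

The training and evaluation data described above can be loaded with the 🤗 Datasets library; a minimal sketch:

```python
from datasets import load_dataset

# wikitext-2-raw-v1 ships with train/validation/test splits of raw text.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1")
print(dataset)                         # DatasetDict with 'train', 'validation', 'test'
print(dataset["train"][10]["text"])    # one raw-text record
```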
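
The optimizer dictionary listed under the training hyperparameters maps onto the `AdamWeightDecay` class shipped with Transformers (TensorFlow backend). The sketch below recreates it; the model class and the `compile` call are assumptions, since the exact training loop is not recorded in this card.

```python
from transformers import AdamWeightDecay, TFAutoModelForMaskedLM

# Recreate the optimizer from the hyperparameters listed above.
optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)

# Assumed setup: fine-tune the base checkpoint with Keras.
model = TFAutoModelForMaskedLM.from_pretrained("distilroberta-base")
model.compile(optimizer=optimizer)  # with no loss given, Keras uses the model's internal MLM loss
```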