malhajar bezir committed on
Commit
fd6bbaa
1 Parent(s): 05f80b7

- typo fix (c84124b651de3f25ed72ab832568010fd0c3fa8a)


Co-authored-by: Abdullah Bezir <[email protected]>

Files changed (1)
  1. src/display/about.py +1 -1
src/display/about.py CHANGED
@@ -49,7 +49,7 @@ I use LM-Evaluation-Harness-Turkish, a version of the LM Evaluation Harness adap
 1) Set Up the repo: Clone the "lm-evaluation-harness_turkish" from https://github.com/malhajar17/lm-evaluation-harness_turkish and follow the installation instructions.
 2) Run Evaluations: To get the results as on the leaderboard (Some tests might show small variations), use the following command, adjusting for your model. For example, with the Trendyol model:
 ```python
-lm_eval --model vllm --model_args pretrained=Orbina/Orbita-v0.1 --tasks mmlu_tr_v0.2,arc_tr-v0.2,gsm8k_tr-v0.2,hellaswag_tr-v0.2,truthfulqa_v0.2,winogrande_tr_v0.2 --output /workspace/Orbina/Orbita-v0.1
+lm_eval --model vllm --model_args pretrained=Orbina/Orbita-v0.1 --tasks mmlu_tr_v0.2,arc_tr-v0.2,gsm8k_tr-v0.2,hellaswag_tr-v0.2,truthfulqa_v0.2,winogrande_tr-v0.2 --output /workspace/Orbina/Orbita-v0.1
 ```
 3) Report Results: The results file generated is then uploaded to the OpenLLM Turkish Leaderboard.
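The corrected invocation (the typo was `winogrande_tr_v0.2` → `winogrande_tr-v0.2`) can be parameterized for any model before running. A minimal sketch, assuming the same task list and `vllm` backend as the documented command; the `MODEL_ID` and `OUTPUT_DIR` variable names are illustrative, not part of the repo:

```shell
# Illustrative sketch: build the lm_eval command for an arbitrary model.
# The task list and flags come from the fixed documentation line above.
MODEL_ID="Orbina/Orbita-v0.1"            # replace with your model
OUTPUT_DIR="/workspace/${MODEL_ID}"      # replace with your output path
TASKS="mmlu_tr_v0.2,arc_tr-v0.2,gsm8k_tr-v0.2,hellaswag_tr-v0.2,truthfulqa_v0.2,winogrande_tr-v0.2"

# Print the assembled command (pipe to sh, or run it directly, once verified).
echo "lm_eval --model vllm --model_args pretrained=${MODEL_ID} --tasks ${TASKS} --output ${OUTPUT_DIR}"
```

Swapping `MODEL_ID` is all that is needed to reproduce leaderboard-style results for a different model, per step 2 of the instructions.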