Adding Evaluation Results

#1
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -165,3 +165,17 @@ The model achieves the following results without any fine-tuning (zero-shot):
 <a href="https://huggingface.co/exbert/?model=gpt2">
 <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
 </a>
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_SaylorTwift__gpt2_test)
+
+| Metric              | Value |
+|---------------------|-------|
+| Avg.                | 25.02 |
+| ARC (25-shot)       | 21.84 |
+| HellaSwag (10-shot) | 31.6  |
+| MMLU (5-shot)       | 25.86 |
+| TruthfulQA (0-shot) | 40.67 |
+| Winogrande (5-shot) | 50.12 |
+| GSM8K (5-shot)      | 0.3   |
+| DROP (3-shot)       | 4.78  |
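
The Avg. row reads as the unweighted mean of the seven benchmark scores below it, which checks out arithmetically: (21.84 + 31.6 + 25.86 + 40.67 + 50.12 + 0.3 + 4.78) / 7 ≈ 25.02. A minimal sanity check in Python, with the scores copied from the table above:

```python
# Verify that Avg. is the plain mean of the seven benchmark scores.
scores = {
    "ARC (25-shot)": 21.84,
    "HellaSwag (10-shot)": 31.6,
    "MMLU (5-shot)": 25.86,
    "TruthfulQA (0-shot)": 40.67,
    "Winogrande (5-shot)": 50.12,
    "GSM8K (5-shot)": 0.3,
    "DROP (3-shot)": 4.78,
}

avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 25.02, matching the Avg. row in the table
```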
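The linked details dataset holds the per-example records behind these aggregates. A minimal sketch of pulling one benchmark's details with the `datasets` library; the config name `harness_arc_challenge_25` and the `latest` split follow the leaderboard's usual naming for its evaluation runs, but they are assumptions here, not something stated in this PR:

```python
from datasets import load_dataset

# Assumed config and split names; if they differ, inspect the dataset repo
# on the Hub first to find the available configs for this model's runs.
details = load_dataset(
    "open-llm-leaderboard/details_SaylorTwift__gpt2_test",
    "harness_arc_challenge_25",  # assumed: ARC-Challenge, 25-shot run
    split="latest",              # assumed: most recent evaluation run
)

# Each record covers one evaluated example (prompt, model output, metric).
print(details[0])
```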