sade-adrien committed
Commit
80c6da8
1 Parent(s): 9db76cc

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -22,7 +22,7 @@ It achieves the following results on the evaluation set:
 This model is a fine-tuning of Mistral-7B-Instruct-v0.1.
 This FT was done with full attention (removing the 4k SWA).
 This FT was using a Position Interpolation factor of 0.5 (Linear RoPE scaling).
-Please note that the RoPE scaling factor should be determined by L'/L where L is the pre-training max context length and L' is the new max context length. In our case, we are just making experiments (and for us we would have had L'/L = 7200/8096 > 1 which did not require any PI scaling).
+Please note that the RoPE scaling factor should be determined by L/L' where L is the pre-training max context length and L' is the new max context length. In our case, we are just making experiments (and for us we would have had L/L' = 8096/7200 > 1 which did not require any PI scaling).
 
 ## Intended uses & limitations
 
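
For context, here is a minimal Python sketch, not part of the commit, of the L/L' arithmetic the corrected note describes. The names `pi_scaling_factor`, `pretrain_ctx`, `target_ctx`, and `scale_positions` are hypothetical, introduced only for illustration.

```python
def pi_scaling_factor(pretrain_ctx: int, target_ctx: int) -> float:
    """Return the linear RoPE scaling factor L/L'.

    L  (pretrain_ctx): max context length seen during pre-training
    L' (target_ctx):   new max context length after fine-tuning
    A factor >= 1 means the target context already fits in the
    pre-trained range, so no Position Interpolation is needed.
    """
    return pretrain_ctx / target_ctx

def scale_positions(position_ids, factor):
    # Linear PI: each position index is multiplied by the factor,
    # compressing positions beyond L back into the pre-trained range.
    return [p * factor for p in position_ids]

# The README's own numbers: L = 8096, L' = 7200.
factor = pi_scaling_factor(8096, 7200)  # ~1.124 > 1 -> no PI scaling required
```

Under this convention, a factor of 0.5 (as used in the fine-tune above) corresponds to doubling the context length, since positions are compressed by half to stay within the pre-trained range.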