Commit 934cccc by OllieStanley (parent: 3de87c4)

Update README for SFT-7

Files changed (1): README.md (+138, -32)

README.md:

---
license: other
---

# OpenAssistant LLaMA 30B SFT 7

Due to the license attached to LLaMA models by Meta AI, it is not possible to directly distribute LLaMA-based models. Instead, we provide XOR weights for the OA models.
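
As background: XOR distribution works because XOR is self-inverse, so `original ^ llama = xor_file` and `xor_file ^ llama = original`. A minimal Python sketch of the idea (this is *not* the real `xor_codec.py`, which also handles directory traversal and special-cased files; the names here are illustrative):

```
# xor_sketch.py - illustrative only; the real xor_codec.py walks whole
# directory trees and special-cases files such as added_tokens.json.
def xor_decode(base_path, xor_path, out_path):
    with open(base_path, "rb") as f_base, open(xor_path, "rb") as f_xor:
        base, xor = f_base.read(), f_xor.read()
    # XOR is self-inverse: applying the base again recovers the original bytes
    with open(out_path, "wb") as f_out:
        f_out.write(bytes(a ^ b for a, b in zip(base, xor)))
```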

Thanks to Mick for writing the `xor_codec.py` script which enables this process.

## The Process

Note: This process applies to the `oasst-sft-7-llama-30b` model. The same process can be applied to other models in future, but the checksums will be different.

**This process is tested only on Linux (specifically Ubuntu). Some users have reported that the process does not work on Windows. We recommend using WSL if you only have a Windows machine.**

To use OpenAssistant LLaMA-based models, you should have a copy of the original LLaMA model weights and add them to a `llama` subdirectory here. If you cannot obtain the original LLaMA, see the note in italics below for a possible alternative.

Ensure your LLaMA 30B checkpoint matches the correct md5sums:

```
f856e9d99c30855d6ead4d00cc3a5573 consolidated.00.pth
ea0405cdb5bc638fee12de614f729ebc consolidated.03.pth
4babdbd05b8923226a9e9622492054b6 params.json
```
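
If you would rather compute these from Python than rely on `md5sum`, here is a small sketch (it assumes the `llama` subdirectory described above):

```
# verify_md5.py - print md5 checksums of the checkpoint files, reading in
# chunks so multi-GB .pth shards are not loaded into memory at once.
import hashlib
from pathlib import Path

def md5(path, chunk_size=1 << 20):
    h = hashlib.md5()
    with path.open("rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()

for path in sorted(Path("llama").glob("*")):
    print(md5(path), path.name)
```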

*If you do not have a copy of the original LLaMA weights and cannot obtain one, you may still be able to complete this process. Some users have reported that [this model](https://huggingface.co/elinas/llama-30b-hf-transformers-4.29) can be used as a base for the XOR conversion. This will also allow you to skip to Step 7. However, we only support conversion starting from the original LLaMA checkpoint and cannot provide support if you experience issues with this alternative approach.*

**Important: Follow these exact steps to convert your original LLaMA checkpoint to a HuggingFace Transformers-compatible format. If you use the wrong versions of any dependency, you risk ending up with weights which are not compatible with the XOR files.**

1. Create a clean Python **3.10** virtual environment & activate it:

```
python3.10 -m venv xor_venv
source xor_venv/bin/activate
```

2. Clone the transformers repo and switch to the tested version:

```
git clone https://github.com/huggingface/transformers.git
cd transformers
git checkout d04ec99bec8a0b432fc03ed60cea9a1a20ebaf3c
pip install .
```

3. Install **exactly** these dependency versions:

```
pip install torch==1.13.1 accelerate==0.18.0 sentencepiece==0.1.98 protobuf==3.20.1
```

4. Check `pip freeze` output (your `transformers` entry will point at your own local clone path):

```
accelerate==0.18.0
certifi==2022.12.7
charset-normalizer==3.1.0
filelock==3.12.0
huggingface-hub==0.13.4
idna==3.4
numpy==1.24.2
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
packaging==23.1
protobuf==3.20.1
psutil==5.9.5
PyYAML==6.0
regex==2023.3.23
requests==2.28.2
sentencepiece==0.1.98
tokenizers==0.13.3
torch==1.13.1
tqdm==4.65.0
transformers @ file:///mnt/data/koepf/transformers
typing_extensions==4.5.0
urllib3==1.26.15
```
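
As a quick sanity check of the pins that matter most for reproducibility, here is a minimal sketch using only the standard library (the package list is taken from this README):

```
# check_pins.py - verify the critical dependency versions from this README
from importlib.metadata import version

EXPECTED = {
    "torch": "1.13.1",
    "accelerate": "0.18.0",
    "sentencepiece": "0.1.98",
    "protobuf": "3.20.1",
    "tokenizers": "0.13.3",
}

for package, expected in EXPECTED.items():
    installed = version(package)
    status = "OK" if installed == expected else f"MISMATCH (expected {expected})"
    print(f"{package}=={installed} {status}")
```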

5. While in the `transformers` repo root, run the HF LLaMA conversion script:

```
python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir <input_path_llama_base> --output_dir <output_path_llama30b_hf> --model_size 30B
```

6. Run `find . -type f -exec md5sum "{}" +` in the conversion target directory (`output_dir`). This should produce exactly the following checksums if your files are correct:

```
462a2d07f65776f27c0facfa2affb9f9 ./pytorch_model-00007-of-00007.bin
e1dc8c48a65279fb1fbccff14562e6a3 ./pytorch_model-00003-of-00007.bin
9cffb1aeba11b16da84b56abb773d099 ./pytorch_model-00001-of-00007.bin
aee09e21813368c49baaece120125ae3 ./generation_config.json
92754d6c6f291819ffc3dfcaf470f541 ./pytorch_model-00005-of-00007.bin
3eddc6fc02c0172d38727e5826181adb ./pytorch_model-00004-of-00007.bin
eeec4125e9c7560836b4873b6f8e3025 ./tokenizer.model
99762d59efa6b96599e863893cf2da02 ./pytorch_model-00006-of-00007.bin
598538f18fed1877b41f77de034c0c8a ./config.json
fdb311c39b8659a5d5c1991339bafc09 ./tokenizer.json
fecfda4fba7bfd911e187a85db5fa2ef ./pytorch_model.bin.index.json
edd1a5897748864768b1fab645b31491 ./tokenizer_config.json
6b2e0a735969660e720c27061ef3f3d3 ./special_tokens_map.json
5cfcb78b908ffa02e681cce69dbe4303 ./pytorch_model-00002-of-00007.bin
```
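
Note that the order of `find` output can vary between systems; what matters is the set of (checksum, filename) pairs. A small order-insensitive comparison sketch (the two arguments are whatever files you saved the expected and actual listings to):

```
# compare_checksums.py - order-insensitive diff of two md5sum listings,
# e.g. the expected list above vs. the output of `find . -type f -exec md5sum "{}" +`
import sys

def load(path):
    with open(path) as f:
        return {tuple(line.split()) for line in f if line.strip()}

expected, actual = load(sys.argv[1]), load(sys.argv[2])
print("missing:   ", expected - actual)
print("unexpected:", actual - expected)
```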

**Important: You should now have the correct LLaMA weights and be ready to apply the XORs. If the checksums above do not match yours, there is a problem.**

7. Once you have the LLaMA weights in the correct format, you can apply the XOR decoding:

```
python xor_codec.py oasst-sft-7-llama-30b/ oasst-sft-7-llama-30b-xor/oasst-sft-7-llama-30b-xor/ llama30b_hf/
```

(The repeated directory name is expected: the XOR files are distributed inside a subdirectory of the downloaded XOR repository with the same name.)

You should **expect to see one warning message** during execution:

`Exception when processing 'added_tokens.json'`

This is normal. **If similar messages appear for other files, something has gone wrong.**

8. Now run `find . -type f -exec md5sum "{}" +` in the output directory (here `oasst-sft-7-llama-30b`). Its output should show exactly these checksums:

```
8ae4537c64a1ef202d1d82eb0d356703 ./pytorch_model-00007-of-00007.bin
d84f99d23369e159e50cb0597b6c9673 ./pytorch_model-00003-of-00007.bin
f7de50a725d678eb65cc3dced727842f ./pytorch_model-00001-of-00007.bin
27b0dc092f99aa2efaf467b2d8026c3f ./added_tokens.json
aee09e21813368c49baaece120125ae3 ./generation_config.json
31a2b04b139f4af043ad04478f1497f5 ./pytorch_model-00005-of-00007.bin
a16a2dfacbde77a1659a7c9df7966d0a ./pytorch_model-00004-of-00007.bin
eeec4125e9c7560836b4873b6f8e3025 ./tokenizer.model
baa778a8679d47b085446faf97b72758 ./pytorch_model-00006-of-00007.bin
b2d64f2198ab7b53e3b8d12fbcadeb3c ./config.json
deb33dd4ffc3d2baddcce275a00b7c1b ./tokenizer.json
76d47e4f51a8df1d703c6f594981fcab ./pytorch_model.bin.index.json
ed59bfee4e87b9193fea5897d610ab24 ./tokenizer_config.json
704373f0c0d62be75e5f7d41d39a7e57 ./special_tokens_map.json
e836168cdbbb74db51d04f25ed6408ce ./pytorch_model-00002-of-00007.bin
```

If so, you have successfully decoded the weights and should be able to use the model with HuggingFace Transformers. **If your checksums do not match those above, there is a problem.**
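
As a smoke test, here is a hedged sketch of loading the decoded weights. The paths, dtype, and device choices are assumptions, and the prompt template is inferred from the OA special tokens in `added_tokens.json`; check the model card for the authoritative format:

```
# smoke_test.py - load the decoded model and generate a few tokens.
# Assumes the XOR output directory from step 7 and enough GPU memory for
# the 30B model in fp16 (device_map="auto" requires accelerate).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "oasst-sft-7-llama-30b/"

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir, torch_dtype=torch.float16, device_map="auto"
)

# Prompt format is an assumption based on OA's added special tokens.
prompt = "<|prompter|>What is a llama?</s><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```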

### Configuration

```
llama-30b-sft-7:
  dtype: fp16
  log_dir: "llama_log_30b"
  learning_rate: 1e-5
  model_name: /home/ubuntu/Open-Assistant/model/model_training/.saved/llama-30b-super-pretrain/checkpoint-3500
  #model_name: OpenAssistant/llama-30b-super-pretrain
  output_dir: llama_model_30b
  deepspeed_config: configs/zero3_config_sft.json
  weight_decay: 0.0
  residual_dropout: 0.0
  max_length: 2048
  use_flash_attention: true
  warmup_steps: 20
  gradient_checkpointing: true
  gradient_accumulation_steps: 12
  per_device_train_batch_size: 2
  per_device_eval_batch_size: 3
  eval_steps: 101
  save_steps: 485
  num_train_epochs: 4
  save_total_limit: 3
  use_custom_sampler: true
  sort_by_length: false
  #save_strategy: steps
  save_strategy: epoch
  datasets:
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
        input_file_path: 2023-04-12_oasst_release_ready_synth.jsonl.gz
        val_split: 0.05
    - vicuna:
        val_split: 0.05
        max_val_set: 800
        fraction: 1.0
    - dolly15k:
        val_split: 0.05
        max_val_set: 300
    - grade_school_math_instructions:
        val_split: 0.05
    - code_alpaca:
        val_split: 0.05
        max_val_set: 250
```
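
One derived quantity worth noting: the effective global batch size is `per_device_train_batch_size * gradient_accumulation_steps * number of GPUs`. The GPU count is not recorded in the config, so it is an assumption in the sketch below:

```
# effective_batch.py - effective global batch size implied by the config above
per_device_train_batch_size = 2
gradient_accumulation_steps = 12
num_gpus = 8  # assumption; the config does not record the GPU count

print(per_device_train_batch_size * gradient_accumulation_steps * num_gpus)  # 192
```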

- **OASST dataset paper:** https://arxiv.org/abs/2304.07327