---
datasets:
- cognitivecomputations/dolphin
- jondurbin/airoboros-2.2.1
- cognitivecomputations/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
language:
- en
license: bigcode-openrail-m
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of dolphincoder-starcoder2-15b

Using turboderp's ExLlamaV2 v0.0.15 preview for quantization.

Each branch contains a quantization at a different bits per weight. The `main` branch contains only the `measurement.json` (useful for further conversions); download one of the other branches for the model itself (see the table below).

Original model: https://huggingface.co/cognitivecomputations/dolphincoder-starcoder2-15b

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ------ | ---- | ------------ | --------- | ---------- | ---------- | ----------- |
| [8_0](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-exl2/tree/8_0) | 8.0 | 8.0 | 16.6 GB | 17.5 GB | 18.8 GB | Maximum quality that ExLlamaV2 can produce, near-unquantized performance. |
| [6_5](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-exl2/tree/6_5) | 6.5 | 8.0 | 13.9 GB | 14.9 GB | 16.2 GB | Near-unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-exl2/tree/5_0) | 5.0 | 6.0 | 11.2 GB | 12.2 GB | 13.5 GB | Slightly lower quality vs 6.5. |
| [4_25](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-exl2/tree/4_25) | 4.25 | 6.0 | 9.8 GB | 10.7 GB | 12.0 GB | GPTQ-equivalent bits per weight. |
| [3_5](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-exl2/tree/3_5) | 3.5 | 6.0 | 8.4 GB | 9.3 GB | 10.6 GB | Lower quality, not recommended. |

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the `measurement.json`) to a folder called `dolphincoder-starcoder2-15b-exl2`:

```shell
mkdir dolphincoder-starcoder2-15b-exl2
huggingface-cli download bartowski/dolphincoder-starcoder2-15b-exl2 --local-dir dolphincoder-starcoder2-15b-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter.

Linux:

```shell
mkdir dolphincoder-starcoder2-15b-exl2-6_5
huggingface-cli download bartowski/dolphincoder-starcoder2-15b-exl2 --revision 6_5 --local-dir dolphincoder-starcoder2-15b-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which sometimes doesn't handle `_` in folder names well):

```shell
mkdir dolphincoder-starcoder2-15b-exl2-6.5
huggingface-cli download bartowski/dolphincoder-starcoder2-15b-exl2 --revision 6_5 --local-dir dolphincoder-starcoder2-15b-exl2-6.5 --local-dir-use-symlinks False
```
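
If you'd rather script the download than use the CLI, `huggingface_hub` exposes the same functionality through `snapshot_download`. A minimal sketch: the `revision` and `local_dir_use_symlinks` arguments mirror the CLI flags above, and the target folder name is just a suggestion.

```python
from huggingface_hub import snapshot_download

# Download the 6_5 branch (equivalent to --revision 6_5 on the CLI)
# into a plain local folder with real files instead of symlinks.
snapshot_download(
    repo_id="bartowski/dolphincoder-starcoder2-15b-exl2",
    revision="6_5",
    local_dir="dolphincoder-starcoder2-15b-exl2-6_5",
    local_dir_use_symlinks=False,
)
```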
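
Once downloaded, the quantized weights can be loaded directly with the ExLlamaV2 Python library. The sketch below follows the ExLlamaV2 API as of roughly v0.0.15 (class and method names may differ in other versions); the model directory assumes the 6_5 download above, and the prompt is a placeholder.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at the directory containing the downloaded branch.
config = ExLlamaV2Config()
config.model_dir = "dolphincoder-starcoder2-15b-exl2-6_5"
config.prepare()

# Load the model, splitting it automatically across available GPUs.
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Basic sampling settings; tune to taste.
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

prompt = "Write a Python function that reverses a string."
output = generator.generate_simple(prompt, settings, num_tokens=256)
print(output)
```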