
This is the fully trained version (with the formatting fixed).

Dataset used: Gryphe/Sonnet3.5-SlimOrcaDedupCleaned, further filtered to remove prompts/examples longer than 4076 tokens (about 385 examples removed).
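The length filter described above can be sketched as follows. This is an illustration only: the whitespace tokenizer is a stand-in assumption (the actual filter would count tokens with the model's own tokenizer).

```python
# Sketch of the length-filtering step: drop examples whose prompt exceeds
# 4076 tokens. A whitespace split stands in for the real tokenizer here.

MAX_TOKENS = 4076

def token_count(text: str) -> int:
    # Placeholder tokenizer: one token per whitespace-separated word.
    return len(text.split())

def filter_long_examples(examples: list[str], max_tokens: int = MAX_TOKENS) -> list[str]:
    return [ex for ex in examples if token_count(ex) <= max_tokens]

dataset = ["short prompt", "word " * 5000]  # second example is 5000 "tokens"
kept = filter_long_examples(dataset)
print(len(kept))  # → 1, the 5000-word example is dropped
```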

Prompt format: ChatML
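For reference, a minimal sketch of ChatML formatting: each turn is wrapped in `<|im_start|>role` / `<|im_end|>` delimiters, and the final assistant turn is left open for the model to complete. The helper name and example messages are illustrative, not part of this repository.

```python
# Minimal ChatML prompt builder (illustrative sketch).

def to_chatml(messages: list[dict]) -> str:
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```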

LoRA: mpasila/Viking-SlimSonnet-v1-LoRA-7B

Trained with regular LoRA (not quantized/QLoRA), with a LoRA rank of 128 and alpha set to 32. Trained for 1 epoch on an A40 for about 23 hours.
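To illustrate what those hyperparameters mean (this is not the training code): LoRA adds a trainable low-rank product `B @ A` on top of a frozen weight, scaled by `alpha / r`. With the values above, the scaling factor is 32 / 128 = 0.25. A NumPy sketch with assumed toy dimensions:

```python
import numpy as np

# Illustrative LoRA forward pass: frozen weight W plus a low-rank update
# B @ A scaled by alpha / r. With r = 128 and alpha = 32 the scale is 0.25.
d_out, d_in, r, alpha = 256, 256, 128, 32
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

scaling = alpha / r  # 32 / 128 = 0.25

def lora_forward(x: np.ndarray) -> np.ndarray:
    return x @ W.T + (x @ A.T @ B.T) * scaling

x = rng.standard_normal((1, d_in))
# With B zero-initialized, the LoRA branch contributes nothing before training,
# so the adapted model starts out identical to the base model.
assert np.allclose(lora_forward(x), x @ W.T)
```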

Uploaded model

  • Developed by: mpasila
  • License: apache-2.0
  • Finetuned from model: LumiOpen/Viking-7B

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Model size: 7.55B params (BF16, Safetensors)
