
LLAMA 3 8B, capable of outputting Traditional Chinese

✨ Recommended: use LMStudio to run this model

I tried running it with Ollama, but the output became quite incoherent (delulu), so for now I'm sticking with LMStudio :)

The performance isn't actually that great, but it can answer some basic questions. Sometimes it just acts really dumb though :(
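If you'd rather script against the GGUF file directly instead of using LMStudio's UI, a minimal sketch with llama-cpp-python is below. The library choice, model path, and system prompt are assumptions (the card itself only mentions LMStudio and Ollama); the chat template is the standard Llama 3 instruct format, which a Llama 3 fine-tune is normally expected to follow.

```python
# Sketch: prompting the GGUF quant locally with llama-cpp-python
# (an assumption -- the card only discusses LMStudio/Ollama GUIs).

def build_llama3_prompt(user_message: str,
                        system: str = "請用繁體中文回答。") -> str:
    """Format a single-turn prompt using the Llama 3 chat template."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

if __name__ == "__main__":
    prompt = build_llama3_prompt("台灣最高的山是哪一座?")
    print(prompt)
    # Hypothetical inference call; needs the downloaded GGUF on disk:
    # from llama_cpp import Llama
    # llm = Llama(model_path="Meta-Llama-3-8B-CHT.Q4_K_M.gguf", n_ctx=4096)
    # out = llm(prompt, max_tokens=256, stop=["<|eot_id|>"])
    # print(out["choices"][0]["text"])
```

Ollama wraps the same llama.cpp runtime, so if it goes delulu there it may be a template mismatch; setting the prompt format explicitly, as above, is one thing worth checking.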

LLAMA 3.1 can actually output Traditional Chinese pretty well, so this repo can be ignored.

GGUF
- Model size: 8.03B params
- Architecture: llama
- Quantization: 4-bit

Model: suko/Meta-Llama-3-8B-CHT (quantized)