---
license: apache-2.0
datasets:
- andreabac3/Quora-Italian-Fauno-Baize
- andreabac3/StackOverflow-Italian-Fauno-Baize
- andreabac3/MedQuaAD-Italian-Fauno-Baize
language:
- it
- en
pipeline_tag: text-generation
---
# cerbero-7b Italian LLM 🚀

> 📢 **cerbero-7b** is the first **100% Free** and Open Source **Italian Large Language Model** (LLM) ready to be used for **research** or **commercial applications**.

<p align="center">
  <img width="300" height="300" src="./README.md.d/cerbero.png">
</p>

**cerbero-7b** is built on [**mistral-7b**](https://mistral.ai/news/announcing-mistral-7b/), which outperforms Llama 2 13B across all benchmarks and surpasses Llama 1 34B on many benchmarks.

**cerbero-7b** is specifically crafted to fill the void in Italy's AI landscape.

A **Cambrian explosion** of **Italian Language Models** is essential for building advanced AI architectures that can cater to the diverse needs of the population.

**cerbero-7b**, alongside companions like [**Camoscio**](https://github.com/teelinsan/camoscio) and [**Fauno**](https://github.com/RSTLess-research/Fauno-Italian-LLM), aims to help **kick-start** this **revolution** in Italy, ushering in an era where sophisticated **AI solutions** can seamlessly interact with and understand the intricacies of the **Italian language**. The goal is to empower **innovation** across **industries** and foster a deeper **connection** between **technology** and the **people** it serves.

**cerbero-7b** is released under the **permissive** Apache 2.0 **license**, allowing **unrestricted usage**, even **for commercial applications**.

## Why Cerbero? 🤔

The name "Cerbero," inspired by the three-headed dog that guards the gates of the Underworld in Greek mythology, encapsulates the essence of our model, drawing strength from three foundational pillars:

- **Base Model: mistral-7b** 🏗️
  cerbero-7b builds upon the formidable **mistral-7b** as its base model. This choice ensures a robust foundation, leveraging the power and capabilities of a cutting-edge language model.

- **Datasets: Fauno Dataset** 📚
  Utilizing the comprehensive **Fauno dataset**, cerbero-7b gains a diverse and rich understanding of the Italian language. The incorporation of varied data sources contributes to its versatility in handling a wide array of tasks.

- **Licensing: Apache 2.0** 🕊️
  Released under the **permissive Apache 2.0 license**, cerbero-7b promotes openness and collaboration. This licensing choice empowers developers with the freedom for unrestricted usage, fostering a community-driven approach to advancing AI in Italy and beyond.

## Training Details 🚀

cerbero-7b is **fully fine-tuned**, distinguishing itself from LoRA or QLoRA fine-tunes.
The model is trained on synthetic datasets generated through dynamic self-chat, using a large context window of **8192 tokens**.
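The generation pipeline itself is not included in this card, but the Fauno-Baize data it builds on follows Baize-style self-chat, where a single model plays both sides of a conversation starting from a seed question. The following is a minimal sketch under that assumption; the seed topic and `max_new_tokens` value are illustrative, not taken from the actual pipeline.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any instruction-capable causal LM could serve as the generator; the dialogue
# tags below mirror cerbero-7b's own conversation format.
model = AutoModelForCausalLM.from_pretrained("galatolo/cerbero-7b")
tokenizer = AutoTokenizer.from_pretrained("galatolo/cerbero-7b")

seed = "Come posso migliorare la qualità del mio sonno?"  # illustrative seed topic
prompt = (
    "Questa è una conversazione tra un umano ed un assistente AI.\n"
    f"[|Umano|] {seed}\n[|AI|]"
)

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    # A single long generation can continue the dialogue across several
    # [|Umano|]/[|AI|] turns; the transcript is then kept as training data.
    output_ids = model.generate(input_ids, max_new_tokens=1024)
transcript = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(transcript)
```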

### Dataset Composition 📊

We employed a **refined** version of the [Fauno training dataset](https://github.com/RSTLess-research/Fauno-Italian-LLM). The training data covers a broad spectrum, incorporating the following sources (a loading sketch follows the list):

- **Medical Data:** Capturing nuances in medical language. 🩺
- **Technical Content:** Extracted from Stack Overflow to enhance the model's understanding of technical discourse. 💻
- **Quora Discussions:** Providing valuable insights into common queries and language usage. ❓
- **Alpaca Data Translation:** Italian-translated content from Alpaca contributes to the model's language richness and contextual understanding. 🦙
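The three Fauno-Baize corpora are listed in the model card metadata and published on the Hugging Face Hub, so a minimal loading sketch with the 🤗 `datasets` library might look like the following. The `train` split name and the lack of preprocessing are assumptions here, and the refined Alpaca translation is not covered because its source is not named in the card.

```python
# Sketch: load and combine the three Fauno-Baize corpora named in the model
# card metadata. All three share the Baize dialogue format, so their features
# are assumed to be compatible for concatenation.
from datasets import concatenate_datasets, load_dataset

sources = [
    "andreabac3/Quora-Italian-Fauno-Baize",
    "andreabac3/StackOverflow-Italian-Fauno-Baize",
    "andreabac3/MedQuaAD-Italian-Fauno-Baize",
]

parts = [load_dataset(name, split="train") for name in sources]
corpus = concatenate_datasets(parts)
print(corpus)
```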

### Training Setup ⚙️

cerbero-7b is trained on an NVIDIA DGX H100:

- **Hardware:** Utilizing 8x H100 GPUs, each with 80 GB of VRAM. 🖥️
- **Parallelism:** DeepSpeed ZeRO Stage 1 parallelism for optimal training efficiency (see the config sketch below). ✨

The model has been trained for **3 epochs**, ensuring a convergence of knowledge and proficiency in handling diverse linguistic tasks.
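The exact training configuration is not published with this card. As a rough illustration, a DeepSpeed ZeRO Stage 1 run is typically driven by a config dictionary passed to the launcher; every value below, as well as the `mistralai/Mistral-7B-v0.1` base checkpoint ID, is an assumption for the sketch, not the actual cerbero-7b setup.

```python
# Illustrative DeepSpeed ZeRO Stage 1 setup; all numeric values are assumed.
import deepspeed
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

ds_config = {
    "train_micro_batch_size_per_gpu": 1,                      # assumed
    "gradient_accumulation_steps": 8,                         # assumed
    "bf16": {"enabled": True},                                # assumed precision
    "optimizer": {"type": "AdamW", "params": {"lr": 2e-5}},   # assumed
    "zero_optimization": {"stage": 1},  # ZeRO Stage 1 partitions optimizer states
}

model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```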

## Getting Started 🚀

You can load cerbero-7b using [🤗 transformers](https://huggingface.co/docs/transformers/index):


```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Download the model and tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("galatolo/cerbero-7b")
tokenizer = AutoTokenizer.from_pretrained("galatolo/cerbero-7b")

# cerbero-7b expects dialogues tagged with [|Umano|] (human) and [|AI|] turns.
prompt = """Questa è una conversazione tra un umano ed un assistente AI.
[|Umano|] Come posso distinguere un AI da un umano?
[|AI|]"""

input_ids = tokenizer(prompt, return_tensors='pt').input_ids
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=128)

generated_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(generated_text)
```
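For multi-turn conversations, the same `[|Umano|]`/`[|AI|]` tags can simply be extended turn by turn. A small hypothetical helper (not part of the model card) might look like this:

```python
# Hypothetical helper that assembles a multi-turn prompt in cerbero-7b's
# [|Umano|]/[|AI|] dialogue format, ending with an open [|AI|] turn.
def build_prompt(turns):
    """`turns` is a list of (speaker, text) pairs, speaker "Umano" or "AI"."""
    lines = ["Questa è una conversazione tra un umano ed un assistente AI."]
    for speaker, text in turns:
        lines.append(f"[|{speaker}|] {text}")
    lines.append("[|AI|]")  # cue the model to produce the next answer
    return "\n".join(lines)

prompt = build_prompt([
    ("Umano", "Come posso distinguere un AI da un umano?"),
    ("AI", "Un AI tende a rispondere in modo più strutturato e uniforme."),
    ("Umano", "Puoi farmi un esempio concreto?"),
])
```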