Attention graph features extracted from LMs fine-tuned on linguistic acceptability corpora
Irina Proskurina (iproskurina)
AI & ML interests: LLMs (quantization, pre-training)
Collections: 3
Models: 37
- iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g64 · Text Generation · 12
- iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g64 · Text Generation · 11
- iproskurina/Mistral-7B-v0.3-GPTQ-8bit-g128 · Text Generation · 12
- iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128 · Text Generation · 23
- iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g128 · Text Generation · 10
- iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g128 · Text Generation · 10
- iproskurina/Mistral-7B-v0.1-GPTQ-4bit-g128 · Text Generation · 10
- iproskurina/opt-13b-GPTQ-4bit-g128 · Text Generation · 23
- iproskurina/opt-2.7b-GPTQ-4bit-g128 · Text Generation · 101
- iproskurina/opt-6.7b-GPTQ-4bit-g128 · Text Generation · 111
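Each repository name above encodes its GPTQ configuration as a `<bits>bit-g<group>` suffix (e.g. `8bit-g64` for 8-bit weights with quantization group size 64). A minimal sketch of a parser for this naming convention; the helper name and return format are my own, not part of any library:

```python
import re

# Matches the "GPTQ-<bits>bit-g<group>" suffix used in the repo names above,
# e.g. "iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g64".
GPTQ_SUFFIX = re.compile(r"GPTQ-(\d+)bit-g(\d+)$")

def parse_gptq_config(repo_id: str) -> dict:
    """Return the bit width and group size encoded in a GPTQ repo name."""
    match = GPTQ_SUFFIX.search(repo_id)
    if match is None:
        raise ValueError(f"no GPTQ suffix in {repo_id!r}")
    return {"bits": int(match.group(1)), "group_size": int(match.group(2))}

print(parse_gptq_config("iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g64"))
# {'bits': 8, 'group_size': 64}
```

For example, `iproskurina/opt-13b-GPTQ-4bit-g128` parses to 4-bit weights with group size 128.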