
👥 TwinLlama-3.1-8B

TwinLlama-3.1-8B is a model created for the LLM Engineer's Handbook, trained on mlabonne/llmtwin.

It is designed to act as a digital twin of myself and my co-authors (Paul Iusztin and Alex Vesa), imitating our writing style and drawing on knowledge from our articles.


This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
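The model can be loaded like any other Llama-family checkpoint on the Hub. The sketch below is an assumption, not an official usage snippet from the handbook: it assumes `transformers` and `torch` are installed, enough memory for an 8B-parameter model in BF16, and that the checkpoint ships a chat template (the prompt text is illustrative).

```python
# Minimal sketch of loading TwinLlama-3.1-8B with Hugging Face transformers.
# Assumes transformers + torch are installed and hardware can hold 8B params in BF16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/TwinLlama-3.1-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 tensor type of the checkpoint
    device_map="auto",
)

# Illustrative prompt: ask for text in the authors' writing style.
messages = [{"role": "user", "content": "Write a short paragraph about fine-tuning LLMs."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```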

Model size: 8.03B params (Safetensors, BF16)
