# Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf
This model was converted to GGUF format from [`mistralai/Mixtral-8x7B-Instruct-v0.1`](https://huggingface.co./mistralai/Mixtral-8x7B-Instruct-v0.1) using llama.cpp.
Refer to the [original model card](https://huggingface.co./mistralai/Mixtral-8x7B-Instruct-v0.1) for more details on the model.
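For reference, a conversion like this can be reproduced with llama.cpp's own tooling. The sketch below is an assumption about the steps involved (paths, build layout, and the q4_k_m target are illustrative), not a record of the exact commands used for this repo:

```bash
# Hypothetical reproduction of the GGUF conversion; paths and quant type are examples.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# 1. Convert the original Hugging Face weights to a full-precision GGUF file.
python convert_hf_to_gguf.py /path/to/Mixtral-8x7B-Instruct-v0.1 \
  --outfile Mixtral-8x7B-Instruct-v0.1.f16.gguf

# 2. Quantize it to one of the listed variants (q4_k_m shown here).
#    llama-quantize must be built first, e.g. via cmake.
./llama-quantize Mixtral-8x7B-Instruct-v0.1.f16.gguf \
  Mixtral-8x7B-Instruct-v0.1.q4_k_m.gguf q4_k_m
```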
## Available Versions
- `Mixtral-8x7B-Instruct-v0.1.q4_0.gguf` (q4_0)
- `Mixtral-8x7B-Instruct-v0.1.q4_1.gguf` (q4_1)
- `Mixtral-8x7B-Instruct-v0.1.q5_0.gguf` (q5_0)
- `Mixtral-8x7B-Instruct-v0.1.q5_1.gguf` (q5_1)
- `Mixtral-8x7B-Instruct-v0.1.q8_0.gguf` (q8_0)
- `Mixtral-8x7B-Instruct-v0.1.q3_k_s.gguf` (q3_K_S)
- `Mixtral-8x7B-Instruct-v0.1.q3_k_m.gguf` (q3_K_M)
- `Mixtral-8x7B-Instruct-v0.1.q3_k_l.gguf` (q3_K_L)
- `Mixtral-8x7B-Instruct-v0.1.q4_k_s.gguf` (q4_K_S)
- `Mixtral-8x7B-Instruct-v0.1.q4_k_m.gguf` (q4_K_M)
- `Mixtral-8x7B-Instruct-v0.1.q5_k_s.gguf` (q5_K_S)
- `Mixtral-8x7B-Instruct-v0.1.q5_k_m.gguf` (q5_K_M)
- `Mixtral-8x7B-Instruct-v0.1.q6_k.gguf` (q6_K)
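If you prefer to download a single variant ahead of time instead of letting llama.cpp fetch it at run time (see the next section), a minimal sketch using `huggingface-cli` follows; the q4_k_m file is just an example, so substitute any filename from the list above:

```bash
# Download one quantized file from this repo (q4_k_m chosen as an example).
huggingface-cli download Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf \
  Mixtral-8x7B-Instruct-v0.1.q4_k_m.gguf --local-dir .
```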
## Use with llama.cpp

Replace `FILENAME` with one of the filenames listed above.

### CLI:

```bash
llama-cli --hf-repo Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf --hf-file FILENAME -p "Your prompt here"
```

### Server:

```bash
llama-server --hf-repo Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf --hf-file FILENAME -c 2048
```

An example request against the running server is sketched after the Model Details section below.

## Model Details

- **Original Model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co./mistralai/Mixtral-8x7B-Instruct-v0.1)
- **Format:** GGUF
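Once `llama-server` is running, you can query it over HTTP. This is a minimal sketch assuming a recent llama.cpp build, where the server listens on `localhost:8080` by default and exposes an OpenAI-compatible `/v1/chat/completions` endpoint; adjust the host, port, and payload to your setup.

```bash
# Minimal chat request against a locally running llama-server.
# Assumes the default host/port; change if you passed --host/--port.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Explain GGUF in one sentence."}
        ],
        "max_tokens": 128
      }'
```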
## Model Tree

- **Base model:** [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co./mistralai/Mixtral-8x7B-v0.1)
- **Fine-tuned from base:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co./mistralai/Mixtral-8x7B-Instruct-v0.1)