Gemma-3-27B Instruct Uncensored 6-bit MLX

An uncensored version of Google's Gemma 3 27B instruction-tuned model, quantized to 6-bit for Apple's MLX framework.

You can also try the newer uncensored variant: Amoral Gemma-3 27B 6-bit MLX.

Technical Details

Supports a context length of 128k tokens, with a maximum output of 8,192 tokens.

Multimodal: accepts image input, with images normalized to 896 x 896 resolution.
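
If you want to preview roughly what the model receives, here is a minimal Pillow sketch of the 896 x 896 normalization. It is illustrative only: the mlx-vlm processor normalizes images itself, so pre-resizing is not required, and "photo.jpg" is a placeholder path.

from PIL import Image

# Illustrative only: the mlx-vlm processor handles this normalization automatically.
img = Image.open("photo.jpg").convert("RGB")
img = img.resize((896, 896))  # match the model's 896 x 896 input resolution
img.save("photo_896.png")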

Refer to the original model card and the uncensored source model for more details.

Use with mlx

pip install -U mlx-vlm
python -m mlx_vlm.generate --model TheCluster/gemma-3-27b-it-uncensored-mlx-6bit --max-tokens 256 --temperature 0.4 --prompt "Describe this image." --image <path_to_image>
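
For programmatic use, here is a minimal sketch of the mlx-vlm Python API. It follows the examples in the mlx-vlm README for the 0.1.x releases; the imported functions exist in mlx-vlm, but argument order and supported keyword arguments can change between versions, so treat this as a starting point rather than a guaranteed interface.

from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "TheCluster/gemma-3-27b-it-uncensored-mlx-6bit"

# Load the 6-bit MLX weights with the matching processor and config.
model, processor = load(model_path)
config = load_config(model_path)

images = ["photo.jpg"]  # placeholder path; a URL also works
prompt = "Describe this image."

# Wrap the prompt in the model's chat template, declaring one image slot.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(images))

output = generate(model, processor, formatted_prompt, images, max_tokens=256, verbose=True)
print(output)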

Source

This model was converted to MLX format from nidum/Nidum-gemma-3-27B-it-Uncensored using mlx-vlm version 0.1.19.
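
To reproduce the conversion yourself, here is a hedged sketch of the mlx-vlm conversion CLI. The flag names follow the mlx_vlm.convert interface in the 0.1.x releases and may differ in other versions.

pip install "mlx-vlm==0.1.19"
python -m mlx_vlm.convert --hf-path nidum/Nidum-gemma-3-27B-it-Uncensored --mlx-path gemma-3-27b-it-uncensored-mlx-6bit -q --q-bits 6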
