IQ2_S quant made with an imatrix generated from a Q4_K quant, because I can't run anything higher on my potato PC. Use at your own risk.
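If you want to try it, here's a minimal sketch of loading the file with llama-cpp-python. The local filename, context size, and GPU offload count are assumptions, not values from this card; adjust them to whatever your hardware can handle.

```python
# Minimal sketch: run this IQ2_S GGUF with llama-cpp-python.
# The filename and settings below are assumptions, not from the card.
from llama_cpp import Llama

llm = Llama(
    model_path="THUDM-GLM-4-32B-0414-IQ2_S.gguf",  # hypothetical local path
    n_ctx=4096,       # context window; shrink if you run out of RAM
    n_gpu_layers=0,   # raise if you can offload some layers to a GPU
)

out = llm("Briefly explain what a 2-bit quant trades away.", max_tokens=128)
print(out["choices"][0]["text"])
```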

Format: GGUF
Model size: 32.6B params
Architecture: glm4
Quantization: 2-bit (IQ2_S)

Base model: THUDM/GLM-4-32B-0414