Ollama support

#7
by BB8-dev - opened

When will you support Ollama? It's very friendly for low-end hardware.

Ollama supports GGUFs, but this one fails:
https://huggingface.co./docs/hub/ollama

ollama run hf.co/microsoft/bitnet-b1.58-2B-4T-gguf
pulling manifest
pulling 13939ce50303... 100% ▕████████████████████████████████████████████████████████▏ 1.8 GB
pulling d3e74eb82b03... 100% ▕████████████████████████████████████████████████████████▏ 46 B
pulling 33628a28ae3a... 100% ▕████████████████████████████████████████████████████████▏ 19 B
pulling abe99eb73b8f... 100% ▕████████████████████████████████████████████████████████▏ 201 B
verifying sha256 digest
writing manifest
success
Error: unable to load model: C:\Users\xxxxx\.ollama\models\blobs\sha256-13939ce5030319a35db346e5dba7a3a3bd599dfc18b113a2a97446ff964714c5
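
For what it's worth, a quick way to check whether the downloaded blob is even a readable GGUF is the gguf Python package that ships with llama.cpp. This is only a diagnostic sketch under that assumption (pip install gguf); the blob path is the one from the error above, so adjust it for your machine.

```python
# Diagnostic sketch, assuming the gguf-py package from llama.cpp (pip install gguf).
# The blob path is the one Ollama reported above; adjust it for your machine.
from gguf import GGUFReader

blob = r"C:\Users\xxxxx\.ollama\models\blobs\sha256-13939ce5030319a35db346e5dba7a3a3bd599dfc18b113a2a97446ff964714c5"

try:
    reader = GGUFReader(blob)
    # List every tensor together with the quantization type gguf-py recognizes.
    for tensor in reader.tensors:
        print(tensor.name, tensor.tensor_type.name)
except ValueError as exc:
    # gguf-py maps each tensor's type id onto its GGMLQuantizationType enum; a
    # type id it does not know (e.g. a BitNet-specific quant) is expected to
    # fail here, mirroring the load failure Ollama reports.
    print("GGUF declares a quantization type this build does not recognize:", exc)
```

If the reader trips on a tensor type like that, it would point to a quantization format that stock ggml (and therefore Ollama's bundled runner) does not implement, rather than a corrupted download.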

I'm also getting issues:

docker exec ollama ollama run hf.co/microsoft/bitnet-b1.58-2B-4T-gguf
pulling manifest
pulling 13939ce50303... 100% ▕████████████████▏ 1.8 GB
pulling d3e74eb82b03... 100% ▕████████████████▏ 46 B
pulling 33628a28ae3a... 100% ▕████████████████▏ 19 B
pulling abe99eb73b8f... 100% ▕████████████████▏ 201 B
verifying sha256 digest
writing manifest
success
Error: llama runner process has terminated: GGML_ASSERT(0 <= info->type && info->type < GGML_TYPE_COUNT) failed
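
That GGML_ASSERT is the telling part: ggml's GGUF loader checks that every tensor's type id falls inside the GGML_TYPE enum it was built with, so the failure means this file declares a tensor type the ggml bundled with Ollama does not know, which is what you would expect if it uses a BitNet-specific quantization (e.g. i2_s). As a rough cross-check, under the same gguf-py assumption as above, you can list the type ids mainline gguf-py knows:

```python
# Rough cross-check (same gguf package assumption as above): print the
# quantization type ids mainline gguf-py/ggml knows about. A BitNet-specific
# type would be missing here, consistent with info->type failing the assert.
from gguf import GGMLQuantizationType

for qtype in sorted(GGMLQuantizationType, key=lambda t: t.value):
    print(qtype.value, qtype.name)
```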

Version 0.6.6:
[screenshot]

I'm also getting issues:
[screenshot, 2025-04-25]

[ERROR: unable to load model: /root/.ollama/models/blobs/sha256-4221b252fdd5fd25e15847adfeb5ee88886506ba50b8a34548374492884c2162]

[screenshot]
Got the same issue here.
