nomic-embed-text-v1.5 - GGUF

Original model: nomic-embed-text-v1.5

Usage

Embedding text with nomic-embed-text requires task instruction prefixes at the beginning of each string.

For example, the code below shows how to use the search_query prefix to embed user questions, e.g. in a RAG application.
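A minimal sketch of the prefixing convention (the `with_prefix` helper is illustrative, not part of llama.cpp or any library; the actual embedding is computed with the llama.cpp commands shown further down):

```python
# Illustrative helper (not a library function): prepend the task
# instruction prefix that nomic-embed-text expects at the start of
# every input string.
def with_prefix(texts, task="search_query"):
    return [f"{task}: {t}" for t in texts]

# User questions in a RAG application use the search_query prefix;
# the documents being searched would use search_document instead.
queries = with_prefix(["What is TSNE?", "Who is Laurens Van der Maaten?"])
docs = with_prefix(["t-SNE is a dimensionality reduction method."],
                   task="search_document")
print(queries[0])  # search_query: What is TSNE?
```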

To see the full set of available task instructions and how they are designed to be used, visit the model card for nomic-embed-text-v1.5.

Description

This repo contains llama.cpp-compatible files for nomic-embed-text-v1.5 in GGUF format.

llama.cpp defaults to a 2048-token context with these files. For the full 8192-token context length, you will need to choose a context extension method; the example commands below use YaRN. The 🤗 Transformers implementation uses Dynamic NTK-Aware RoPE scaling, but that method is not currently available in llama.cpp.

Example llama.cpp Command

Compute a single embedding:

./embedding -ngl 99 -m nomic-embed-text-v1.5.f16.gguf -c 8192 -b 8192 --rope-scaling yarn --rope-freq-scale .75 -p 'search_query: What is TSNE?'

You can also submit a batch of texts to embed, as long as the total number of tokens does not exceed the context length. Note that the embedding example prints only the first three embeddings.

texts.txt:

search_query: What is TSNE?
search_query: Who is Laurens Van der Maaten?

Compute multiple embeddings:

./embedding -ngl 99 -m nomic-embed-text-v1.5.f16.gguf -c 8192 -b 8192 --rope-scaling yarn --rope-freq-scale .75 -f texts.txt

Compatibility

These files are compatible with llama.cpp as of commit 4524290e8 (February 15, 2024).

Provided Files

The table below shows the mean squared error (MSE) of the embeddings produced by these quantizations of Nomic Embed, relative to the Sentence Transformers implementation.
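The MSE here is the standard elementwise mean squared error between embedding vectors; a minimal sketch of how such a comparison could be computed (assumed methodology, pure Python):

```python
# Assumed methodology: elementwise mean squared error between a
# quantized model's embedding and the reference (Sentence
# Transformers) embedding for the same input text.
def embedding_mse(quantized, reference):
    assert len(quantized) == len(reference)
    return sum((q - r) ** 2 for q, r in zip(quantized, reference)) / len(quantized)

# Identical vectors give zero error; quantization noise shows up
# as a small positive value.
print(embedding_mse([0.1, 0.2, 0.3], [0.1, 0.2, 0.3]))  # 0.0
```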

| Name | Quant | Size | MSE |
|------|-------|------|-----|
| nomic-embed-text-v1.5.Q2_K.gguf | Q2_K | 48 MiB | 2.33e-03 |
| nomic-embed-text-v1.5.Q3_K_S.gguf | Q3_K_S | 57 MiB | 1.19e-03 |
| nomic-embed-text-v1.5.Q3_K_M.gguf | Q3_K_M | 65 MiB | 8.26e-04 |
| nomic-embed-text-v1.5.Q3_K_L.gguf | Q3_K_L | 69 MiB | 7.93e-04 |
| nomic-embed-text-v1.5.Q4_0.gguf | Q4_0 | 75 MiB | 6.32e-04 |
| nomic-embed-text-v1.5.Q4_K_S.gguf | Q4_K_S | 75 MiB | 6.71e-04 |
| nomic-embed-text-v1.5.Q4_K_M.gguf | Q4_K_M | 81 MiB | 2.42e-04 |
| nomic-embed-text-v1.5.Q5_0.gguf | Q5_0 | 91 MiB | 2.35e-04 |
| nomic-embed-text-v1.5.Q5_K_S.gguf | Q5_K_S | 91 MiB | 2.00e-04 |
| nomic-embed-text-v1.5.Q5_K_M.gguf | Q5_K_M | 95 MiB | 6.55e-05 |
| nomic-embed-text-v1.5.Q6_K.gguf | Q6_K | 108 MiB | 5.58e-05 |
| nomic-embed-text-v1.5.Q8_0.gguf | Q8_0 | 140 MiB | 5.79e-06 |
| nomic-embed-text-v1.5.f16.gguf | F16 | 262 MiB | 4.21e-10 |
| nomic-embed-text-v1.5.f32.gguf | F32 | 262 MiB | 6.08e-11 |
Model size: 137M params
Architecture: nomic-bert