---
license: llama2
base_model: meta-llama/CodeLlama-7b-Python-hf
model_name: CodeLlama-7b-Python GGUF
model_type: llama
language:
- code
tags:
- LLM
- llama2
- llama-2
- CodeLlama
- CodeLlama-Python
- CodeLlama-7B-Python
- llama.cpp
- Python
- 7B
---


# Model Card: Meta CodeLlama-7b-Python GGUF

The original Meta model [CodeLlama-7b-Python](https://llama.meta.com/llama-downloads/), described in [Code Llama: a large language model for coding](https://ai.meta.com/blog/code-llama-large-language-model-coding/) and released at [codellama](https://github.com/meta-llama/codellama), converted into GGUF format with [llama.cpp](https://github.com/ggerganov/llama.cpp).

*License*: "Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved."

[Policy](https://llama.meta.com/use-policy/)


## Run model

```bash
./main -m ggml-model-f32-00001-of-00010.gguf -p "def fibonacci("
```


## Convert to gguf
```bash
python3 convert.py ../codellama/CodeLlama-7b-Python
```


## Split Model

The original Meta `CodeLlama-7b-Python` model was converted with [convert.py](https://github.com/ggerganov/llama.cpp) to `CodeLlama-7b-Python/ggml-model-f32.gguf` and then split with [gguf-split](https://github.com/ggerganov/llama.cpp) into smaller chunks of up to 32 tensors each (`--split-max-tensors 32`).

```bash
python3 convert.py ../codellama/CodeLlama-7b-Python
./gguf-split --split --split-max-tensors 32 ./models/CodeLlama-7b-Python/ggml-model-f32.gguf ./models/CodeLlama-7b-Python/ggml-model-f32
```
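The shard filenames shown in this card (e.g. `ggml-model-f32-00001-of-00010.gguf`) follow the zero-padded `-%05d-of-%05d.gguf` pattern that `gguf-split` appends to the output prefix. A minimal Python sketch of that naming scheme (the helper `shard_names` is hypothetical, written here only to illustrate the pattern):

```python
def shard_names(prefix: str, n_shards: int) -> list[str]:
    """Return the expected shard filenames for a split GGUF model,
    assuming the zero-padded naming used by gguf-split."""
    return [f"{prefix}-{i:05d}-of-{n_shards:05d}.gguf"
            for i in range(1, n_shards + 1)]

print(shard_names("ggml-model-f32", 10)[0])
# -> ggml-model-f32-00001-of-00010.gguf
```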


## Merge the model back

```bash
./gguf-split --merge ggml-model-f32-00001-of-00010.gguf ggml-model-f32.gguf
```
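After merging, a quick way to sanity-check the output is to read its first four bytes: GGUF files begin with the ASCII magic `GGUF`. A minimal sketch (the helper `is_gguf` is hypothetical, not part of llama.cpp):

```python
def is_gguf(path: str) -> bool:
    """Return True if the file starts with the 4-byte GGUF magic."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```

For example, `is_gguf("ggml-model-f32.gguf")` should return `True` for a correctly merged model.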