Update README.md
- **Language(s) (NLP):** English
- **License:** Apache 2.0

The goal of this model is to improve the quality and efficiency of code generation from natural language prompts, particularly for Python, since this is the programming language I use most often. Many LLMs produce code that is outdated, inefficient, or buggy. A custom LLM that produces efficient, high-quality code lets the user spend less time writing code and troubleshoot buggy code more quickly. Current models may inadvertently introduce vulnerabilities or generate code that does not adhere to current norms, because the code in their training data occasionally lacks safety considerations or output aligned with human coding preferences ([Jiang et al., 2024](https://arxiv.org/html/2406.00515v1)). Additionally, current models are frequently trained on large datasets that span a wide range of programming languages, giving the model roughly equal training time on each, which may hold back performance on the more popular languages ([Jiang et al., 2024](https://arxiv.org/html/2406.00515v1)).

To combat this, I selected a model with 7 billion parameters that performed relatively well at baseline on code tasks and trained it on a large code generation dataset (~136,000 rows) that was ~60% Python. I used a Program of Thought (PoT) prompting approach and the LoRA training method to create an updated model. Finally, I compared the MBPP, HumanEval, and MMLU benchmark performance of the updated model to that of the baseline model. The updated model showed little improvement over the base model: MBPP first-pass accuracy rose from 37.6% to 40.2%, HumanEval first-pass accuracy dropped from 0.6% to 0% (although the generated code appeared better formatted than the base model's), and MMLU stayed about the same, at 59.6% at baseline and 59.1% after training.
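A minimal sketch of the kind of LoRA fine-tuning setup described above, using the Hugging Face `transformers` and `peft` libraries, might look like the following. The model name, adapter targets, and hyperparameter values here are illustrative assumptions, not the exact configuration used for this model.

```python
# Illustrative LoRA fine-tuning sketch; values are placeholders, not the exact configuration used.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer
from peft import LoraConfig, get_peft_model

base_model_name = "mistralai/Mistral-7B-v0.1"  # base model referenced in this card

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name, device_map="auto")

# Attach low-rank adapters to the attention projections (rank and target modules are illustrative).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Basic training arguments; learning rate and epoch count are placeholders.
training_args = TrainingArguments(
    output_dir="./custom_coding_llm",
    learning_rate=1e-5,
    num_train_epochs=2,
    per_device_train_batch_size=4,
)

# trainer = Trainer(model=model, args=training_args, train_dataset=tokenized_train_dataset)
# trainer.train()
```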
### Model Sources

### Input Formats

Simply asking the model a question as a plain string will yield quality output. Formatting the prompt similarly to the training data, i.e., using the Program of Thought (PoT) technique, will yield a more detailed explanation of the generated code.
Question only prompt:
```python
prompt = "Write a Python function to check if a number is a palindrome."

input_ids = tokenizer(prompt, return_tensors="pt").to(model.device)

# ... generate and decode the model output into `generated_text` (full generation code omitted here) ...
print(generated_text)
```
PoT prompt:
```python
prompt = (
    "Instruct: Plan:\n"
    "1. Analyze the following question: \"Write a Python function to check if a number is a palindrome.\"\n"
    "2. Think step by step and plan a clear, efficient solution before writing code.\n"
    "3. Consider any necessary programming constructs or tools.\n"
    "4. Explain your approach, then write well-organized and well-documented code with in-line comments.\n\n"
    "Response:"
)

input_ids = tokenizer(prompt, return_tensors="pt").to(model.device)

# ... generation and decoding proceed as in the question-only example above ...
```
### Expected Outputs
With the question-only prompting approach, the output is expected to contain the generated code followed by an explanation of what the code does. With the PoT prompting approach, users should expect a brief acknowledgment that the model can perform the task, followed by the generated code and then an explanation of what the code does.
Question only output:
```text
...
# In the given example, the number 12321 is a palindrome, so the function returns True.
```
PoT output:
```text
Sure! Here's an example of a Python function that checks if a number is a palindrome:

def is_palindrome(num):
    str_num = str(num)
    if str_num == str_num[::-1]:
        return True
    else:
        return False

num = 12321
result = is_palindrome(num)
print(result)

The function `is_palindrome` takes a number as input and converts it into a string using the `str()` function. It then checks if the string is equal to its reversed version (`str_num[::-1]`). If they are equal, it means the number is a palindrome and the function returns `True`. Otherwise, it returns `False`.

In the example code, we test the function with the number `12321`. The function call `is_palindrome(num)` returns `True` because `12321` is a palindrome. Finally, the result is printed to the console.

I hope this helps! Let me know if you have any further questions.
```
## Training Details

### Training Data

<!-- This should link to a Dataset Card if possible. -->

The model was tested on three benchmarks: [Mostly Basic Python Problems (MBPP)](https://github.com/google-research/google-research/tree/master/mbpp), [HumanEval](https://github.com/openai/human-eval), and [Massive Multitask Language Understanding (MMLU)](https://huggingface.co/datasets/cais/mmlu). The first two benchmarks aim to assess the model's Python coding capability, and the third benchmark aims to assess the model's generalizability. The entirety of these datasets was used to test the model.
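As a rough sketch of how a pass@1-style check over MBPP can be wired up (this is an illustration, not the exact harness used for the scores below; `generate_solution` is a hypothetical helper wrapping the prompting code shown in the usage examples, and field names follow the Hugging Face `mbpp` dataset):

```python
# Illustrative pass@1-style scoring loop over MBPP (not the exact evaluation harness used).
from datasets import load_dataset

mbpp = load_dataset("mbpp", split="test")  # natural-language problems plus assert-based test cases

def passes_tests(generated_code: str, test_list: list[str]) -> bool:
    """Run the benchmark's assert statements against the generated code."""
    env: dict = {}
    try:
        exec(generated_code, env)   # define the candidate solution
        for test in test_list:
            exec(test, env)         # each test is an `assert ...` line
        return True
    except Exception:
        return False

# `generate_solution` stands in for prompting the model as shown earlier in this card.
# passed = sum(passes_tests(generate_solution(ex["text"]), ex["test_list"]) for ex in mbpp)
# print(f"MBPP pass@1: {passed / len(mbpp):.1%}")
```

Note that executing model-generated code like this should be sandboxed in practice.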
#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

The MBPP benchmark was chosen because it assesses the model's understanding of basic programming concepts in Python, its ability to solve simple, well-defined problems, and whether it produces correct code for fundamental programming tasks. Like this fine-tuned model, the MBPP benchmark takes a problem description in natural language as input, and the output is a Python code solution that is expected to handle the given test cases ([Austin et al., 2021](https://arxiv.org/abs/2108.07732)).
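For illustration only, an MBPP-style item pairs a short natural-language task with assert-based test cases; the example below is made up, not drawn from the dataset:

```python
# Illustrative MBPP-style item: a natural-language task plus assert-based checks.
task = "Write a function to return the sum of the squares of the first n natural numbers."

def sum_of_squares(n):
    # The expected kind of model output: a small, self-contained Python function.
    return sum(i * i for i in range(1, n + 1))

# The benchmark checks the generated code against test cases like these.
assert sum_of_squares(3) == 14
assert sum_of_squares(1) == 1
```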
The HumanEval benchmark was chosen to assess whether the model is overfit for code generation: it examines basic Python programming capabilities, problem-solving skills, and code correctness through test cases, and it helps ensure the model maintains fundamental coding abilities. The input is a function signature and a docstring describing what the function should do, and the output is a complete function implementation in Python ([Chen et al., 2021](https://arxiv.org/abs/2107.03374)). The problems in HumanEval are more complex than those in MBPP and therefore allow a more comprehensive view of the model's ability to solve Python coding problems.
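For illustration only, a HumanEval-style problem supplies a function signature and docstring and expects the model to produce the body; the example below is made up, not an actual benchmark item:

```python
# Illustrative HumanEval-style prompt: signature and docstring only; the model completes the body.
def is_palindrome(num: int) -> bool:
    """Return True if the integer reads the same forwards and backwards."""
    # A correct completion that the benchmark's unit tests would accept:
    return str(num) == str(num)[::-1]
```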
The MMLU benchmark is used to assess the general knowledge and reasoning ability of the model. While the model's generalizability is not the priority, I thought it would be helpful to know whether the model can be used for other types of tasks. The MMLU benchmark also shows whether there is catastrophic forgetting relative to the base model. The input is a multiple-choice question from one of 57 subjects, and the output is the model's selected answer.
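For illustration only, an MMLU item can be flattened into a single prompt string before being passed to the model; the question below is made up, not drawn from the dataset:

```python
# Illustrative formatting of an MMLU-style multiple-choice item into a prompt.
question = "Which data structure offers average O(1) lookup by key?"
choices = ["A. Linked list", "B. Hash table", "C. Binary search tree", "D. Stack"]

prompt = question + "\n" + "\n".join(choices) + "\nAnswer:"
print(prompt)  # the model's selected letter (here, B) is compared to the reference answer
```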
### Results

The current model was compared to the base [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) model for full disclosure of whether, and to what degree, the model improved. Additionally, the model was compared to the [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) and [OpenHands LM v0.1](https://huggingface.co/all-hands/openhands-lm-7b-v0.1) models, which each have a similar number of parameters; I was interested in how newer LLMs compare to relatively older models.
DeepSeek-R1 is a reasoning model, and I wanted to see how a generalized reasoning model would perform on a specific reasoning task. DeepSeek-R1 was introduced a little more than a year after the base Mistral model (January 2025 vs. September 2023). Note that DeepSeek-R1 was not considered for the base model due to concern that DeepSeek might be banned in the United States during the project.
OpenHands LM v0.1 is a specialized model built for a wide range of coding tasks and languages. I was interested in how a model trained for a specific language measures up to a general coding model. OpenHands LM v0.1 was introduced more recently than all of the other comparison models, in March 2025. Note that it could not serve as the base model because it did not exist at the start of the project.
| Benchmark | Custom-Coding LLM | Base Mistral-7B | DeepSeek R1-7B | OpenHands LM v0.1-7B |
|:---------------------:|:-----------------:|:---------------:|:--------------:|:--------------------:|
| MBPP (at 1-pass) | 40.2 | 37.6 | 47.8 | 61.8 |
| HumanEval (at 1-pass) | 0 | 0.6 | 11.6 | 80.5 |
| MMLU | 59.2 | 59.6 | 52.6 | 64.6 |
#### Summary

The model shows a slight overall performance improvement over the base model. While the HumanEval score at 1-pass dropped, the structure of the generated code is actually improved compared to the base model; despite the slightly lower score, the code appears more helpful for a user. There is no evidence of catastrophic forgetting, given the similar MMLU scores.
The model generalizes better than DeepSeek R1, but DeepSeek R1 appears superior at solving and reasoning through Python problems. The greatest difference between DeepSeek R1 and the current custom model is the HumanEval score at 1-pass, where DeepSeek R1 has a significant advantage.
OpenHands LM v0.1 outperforms every comparison model on all tested benchmarks, particularly HumanEval, where it far outpaces the competition, and it boasts a strong MBPP score as well. The current custom model did not perform as well as this model.
In conclusion, the model shows improvement on the task compared to the base model but does not perform as well as the newer LLMs. Implementing some of the techniques used in these newer models may yield further improvement in the current custom-coding LLM.
## Model Card Contact