Update README.md
## Model Summary
The goal of this model is to improve the quality and efficiency of code generated from natural language prompts, particularly for Python, since that is the programming language I use most often. Many LLMs produce code that is outdated, inefficient, or buggy. A custom LLM that produces efficient, high-quality code lets the user spend less time writing code and troubleshoot broken code more quickly. Current models may inadvertently introduce vulnerabilities or generate code that does not follow current conventions, because their training data sometimes lacks safety filtering or alignment with human coding preferences ([Jiang et al., 2024](https://arxiv.org/html/2406.00515v1)). In addition, current models are often trained on large datasets spanning many programming languages, giving each language roughly equal training time, which can hurt performance on the most widely used languages ([Jiang et al., 2024](https://arxiv.org/html/2406.00515v1)).

To address this, I selected a 7-billion-parameter model with relatively strong baseline performance on code tasks and fine-tuned it on a large code-generation dataset (~136,000 rows) that was roughly 60% Python. I used a Program of Thought (PoT) prompting approach and LoRA fine-tuning to create the updated model, then compared the updated and baseline models on the MBPP, HumanEval, and MMLU benchmarks. The updated model showed little improvement over the base model: MBPP first-pass accuracy rose from 37.6% to 40.2%, HumanEval first-pass accuracy dropped from 0.6% to 0% (although the generated code appeared better formatted than the base model's), and MMLU stayed about the same, at 59.6% before training and 59.1% after.
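The fine-tuning step described above used LoRA. A minimal sketch of what that setup might look like with the Hugging Face PEFT library is shown below; the base model name and the LoRA hyperparameters are illustrative placeholders, not the exact values used for this model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "<base-7b-code-model>"  # placeholder: the 7B baseline model
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed value)
    lora_alpha=32,                        # scaling factor (assumed value)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    task_type="CAUSAL_LM",
)

# Wrap the base model so only the small LoRA adapter weights are trained.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```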
### Model Description
- **Language(s) (NLP):** English
- **License:** Apache 2.0
### Model Sources
<!-- Provide the basic links for the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is designed to generate efficient, high-quality code in any programming language, but particularly in Python, from a natural language prompt. It can also troubleshoot buggy or broken code, explaining why the original code was faulty and how it was fixed and improved.
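A minimal generation sketch with the transformers library is shown below; the model id, prompt, and generation settings are assumptions for illustration rather than recommended defaults.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-model-repo-id>"  # placeholder for this model's Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Program of Thought style request: ask for a runnable program rather than
# free-form prose.
prompt = (
    "Write a Python function that returns the n-th Fibonacci number "
    "using an iterative approach. Return only the code."
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```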
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
This model is not specifically designed for any task other than code generation. However, it appears to retain roughly the same general capabilities as the base model. Users should consider the common limitations of language models when selecting use cases, and should evaluate and mitigate for accuracy, safety, and fairness before applying the model to a specific downstream use case, particularly in high-risk scenarios.
## Bias, Risks, and Limitations