aswain4 committed · verified · Commit 0676beb · Parent(s): 5189249

Update README.md

Files changed (1): README.md (+2 -4)
README.md CHANGED
@@ -14,15 +14,13 @@ license: apache-2.0
 <!-- Provide a quick summary of what the model is/does. -->
 
 
-
-## Model Details
+## Model Summary
+The goal of this model is to improve the quality and efficiency of code generation from natural language prompts, particularly for Python, since that is the programming language I use most often. Many LLMs produce code that is outdated, inefficient, or buggy. A custom LLM that produces efficient, high-quality code lets the user spend less time writing code and troubleshoot buggy code more quickly. Current models may inadvertently introduce vulnerabilities or generate code that does not adhere to current norms, because their training code data occasionally lacks safety alignment or output aligned with human coding preferences ([Jiang et al., 2024](https://arxiv.org/html/2406.00515v1)). Additionally, current models are frequently trained on large datasets spanning a wide range of programming languages, giving the model roughly equal training time on each language, which may limit performance on the more popular ones ([Jiang et al., 2024](https://arxiv.org/html/2406.00515v1)). To address this, I selected a 7-billion-parameter model with relatively strong baseline performance on code tasks and trained it on a large code generation dataset (~136,000 rows) that was ~60% Python. I used a Program of Thought (PoT) prompting approach and the LoRA training method to produce an updated model. Finally, I compared the updated model's MBPP, HumanEval, and MMLU benchmark performance against the baseline model's. The updated model showed little improvement over the base model: MBPP first-pass accuracy rose from 37.6% to 40.2%, and HumanEval first-pass accuracy dropped from 0.6% to 0%, although the generated code appeared better formatted than the base model's; MMLU stayed about the same, at 59.6% before training and 59.1% after.
 
 ### Model Description
 
 <!-- Provide a longer summary of what this model is. -->
 
-This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
 - **Developed by:** Alden Swain
 - **Model type:** Text Generation
 - **Language(s) (NLP):** English
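
The Model Summary added in this commit mentions LoRA fine-tuning and Program of Thought (PoT) prompting without showing the setup. As a rough illustration only, here is a minimal sketch of what such a pipeline could look like with the Hugging Face `transformers` and `peft` libraries; the base model name, target modules, and every hyperparameter below are assumptions for illustration, not the configuration actually used for this model.

```python
# Illustrative sketch only -- the base model, target modules, and all
# hyperparameters are placeholders, not the values behind this commit.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "example-org/example-7b-code-model"  # hypothetical 7B base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA: freeze the 7B base weights and train small low-rank adapter matrices,
# keeping the fine-tune cheap relative to full-parameter training.
lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update
    lora_alpha=32,                        # scaling applied to the update
    target_modules=["q_proj", "v_proj"],  # common choice: attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# Program of Thought (PoT) prompting: the model is asked to reason by
# emitting an executable program rather than free-form prose.
def pot_prompt(task: str) -> str:
    return (
        f"# Task: {task}\n"
        "# Solve the task by writing a Python program that computes the\n"
        "# answer step by step, then prints the final result.\n"
    )
```

From here, the adapter-equipped model would be fine-tuned on the code dataset with a standard causal-LM training loop (for example `transformers.Trainer` or `trl`'s `SFTTrainer`), and the MBPP/HumanEval/MMLU comparison reported above would be run against the resulting checkpoint.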