# CodeT5SmallCAPS Experiment Reproduction Package

- To run the pre-training objectives, use the following scripts:

  - Reproduce CodeT5SmallCAPS with all objectives:

    - Navigate to the `Pre-training` folder containing the `CodeT5SmallCAPS.py` file.
    - Then run `python CodeT5SmallCAPS.py --train-tt --train-cs --train-pd`.

  - The pretrained model is released on [Hugging Face](https://huggingface.co/CodeT5SmallCAPS/CAPS_pretrained), so it is loaded automatically.

  - To run the ablation studies:

    - Ablation 1: `python CodeT5SmallCAPS.py --train-tt`
    - Ablation 2: `python CodeT5SmallCAPS.py --train-tt --train-cs`
    - Ablation 3: `python CodeT5SmallCAPS.py --train-tt --train-cs --train-pd`

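As a minimal sketch, the released checkpoint can also be loaded directly with the `transformers` library (the model id comes from the link above; `T5ForConditionalGeneration` is an assumption based on CodeT5-small being a T5-family model):

```python
def load_caps(model_id="CodeT5SmallCAPS/CAPS_pretrained"):
    """Load the released CodeT5SmallCAPS checkpoint from the Hugging Face Hub.

    Sketch only: requires `pip install transformers`; the import is done
    lazily so the function can be defined without the library installed.
    """
    from transformers import AutoTokenizer, T5ForConditionalGeneration
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = T5ForConditionalGeneration.from_pretrained(model_id)
    return tokenizer, model
```
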
- To fine-tune CodeT5SmallCAPS on downstream tasks:

  - Navigate to the `Fine-tuning` folder and then to the `Downstream task` folder:

    - Code Clone Detection:
      - Follow the instructions in the `readme.md` file.

    - Code Translation:
      - Run the `setup.sh` file.
      - Navigate to `scripts/finetune` and run the `translate.sh` file.

- To extract the programming language features (i.e., `token type`, `code sememe`, and `code dependencies`):
  - We used open-source datasets to extract the language features. We released the extracted datasets on Hugging Face:
    - `LT_Java`: [CodeT5SmallCAPS/CAPS_Java](https://huggingface.co/datasets/CodeT5SmallCAPS/CAPS_Java)
    - `LT_Python`: [CodeT5SmallCAPS/CAPS_Python](https://huggingface.co/datasets/CodeT5SmallCAPS/CAPS_Python)
    - `LT_Java_Dependency`: [CodeT5SmallCAPS/CAPS_Java_Dependency](https://huggingface.co/datasets/CodeT5SmallCAPS/CAPS_Java_Dependency)

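The released datasets can be pulled from the Hub with the `datasets` library, e.g. (dataset ids from the list above; the split names are whatever the uploaded datasets define):

```python
def load_caps_dataset(name="CodeT5SmallCAPS/CAPS_Java"):
    """Fetch one of the released feature datasets from the Hugging Face Hub.

    Sketch only: requires `pip install datasets`; imported lazily so the
    function can be defined without the library installed.
    """
    from datasets import load_dataset
    return load_dataset(name)
```
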
  - Navigate to the `utils` directory:
    - Use either the `Java` or `Python` notebook to run over your dataset.
    - Run the cells for the features you want to extract.

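For Python sources, the standard-library `ast` module is enough to sketch the kind of syntactic feature extraction the notebooks perform. The helper below is a hypothetical stand-in for the `token type` feature, not the notebooks' actual code:

```python
import ast


def node_types(source: str) -> list[str]:
    """Collect AST node type names from a Python snippet.

    A rough stand-in for the `token type` feature described above; the
    notebooks in `utils` may extract richer or differently named features.
    """
    tree = ast.parse(source)
    return [type(node).__name__ for node in ast.walk(tree)]


# For "x = 1 + 2" the result includes 'Module', 'Assign', and 'BinOp'.
features = node_types("x = 1 + 2")
```
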
- Dependencies:
  - Feature extraction dependencies (note: `ast` ships with the Python standard library, so it needs no separate install):

    ```bash
    pip install ast-comments
    pip install javalang
    pip install tree-sitter
    ```

  - Model training dependencies:

    ```bash
    pip install transformers
    pip install datasets
    pip install pytorch_lightning
    pip install torch
    ```

  - Alternatively, install all required packages at once:

    ```bash
    pip install -r requirements.txt
    ```