---
title: Code Generation with CodeT5
emoji: 💻
colorFrom: yellow
colorTo: green
sdk: gradio
sdk_version: 5.27.0
app_file: app.py
pinned: false
license: mit
hf_oauth: true
hf_oauth_scopes:
- inference-api
short_description: 'Leverage CodeT5-base for code generation tasks.'
model_info:
  model_name: Salesforce/codet5-base
  model_type: Encoder-Decoder Transformer
  architecture: T5-based
  pretraining_tasks:
    - Denoising
    - Bimodal Dual Generation
  training_data:
    - CodeSearchNet
    - CodeXGLUE
  fine_tuning_tasks:
    - Code Summarization
    - Code Generation
    - Code Translation
  performance_benchmarks:
    - CodeXGLUE
  paper: 'CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation'
  publication_date: '2021-09-02'
  arxiv_url: 'https://arxiv.org/abs/2109.00859'
  github_url: 'https://github.com/salesforce/CodeT5'
  huggingface_url: 'https://huggingface.co/Salesforce/codet5-base'
---
# 🚀 Code Generation with CodeT5
Welcome to the **Code Generation with CodeT5** project! This repository demonstrates how to use the `Salesforce/codet5-base` model to generate Python code snippets from textual prompts. The app is built with Gradio and deployed on Hugging Face Spaces.
## 📂 Repository Contents
- **Model Configuration:**
Stored in `config.json`, this file defines the architecture and settings of the CodeT5 model.
- **Tokenizer Special Tokens:**
Located in `special_tokens_map.json`, it maps special tokens used during tokenization.
- **Training Hyperparameters:**
Found in `training_args.json`, this file contains parameters like learning rate, batch size, and number of epochs used during training.
- **Inference Code:**
The `app.py` script loads the model and provides an interface for code generation.
- **Dependencies:**
Listed in `requirements.txt`, these are the necessary packages for running the model.
- **Documentation:**
This `README.md` provides an overview and guide for setting up and using the repository.
## 🔧 Setup & Usage
### 1. Clone the Repository
Clone the repository to your local machine:
```bash
git clone https://github.com/your-username/codegen-model-repo.git
cd codegen-model-repo
```
### 2. Install Dependencies
Install the required packages using pip:
```bash
pip install -r requirements.txt
```
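For reference, a `requirements.txt` for this stack might look like the following (package pins are illustrative; the actual file in the repository is authoritative):

```
gradio
transformers
torch
```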
### 3. Run the Gradio App
Launch the Gradio app to start generating code:
```bash
python app.py
```
Open the app in your browser (Gradio serves at `http://localhost:7860` by default) to enter prompts and receive generated code snippets.
## 🚀 Deploying on Hugging Face Spaces
To deploy your Gradio app on Hugging Face Spaces:
1. **Create a New Space:**
- Visit [Hugging Face Spaces](https://huggingface.co./spaces) and create a new Space.
- Select Gradio as the SDK.
2. **Push Your Code:**
- Initialize a Git repository in your project directory.
- Commit your code and push it to the new Space's repository.
For a detailed walkthrough on deploying Gradio apps to Hugging Face Spaces, refer to this [tutorial](https://pyimagesearch.com/2024/12/30/deploy-gradio-apps-on-hugging-face-spaces/).
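As an alternative to pushing with git, the `huggingface_hub` client library can upload a project folder to a Space directly. A minimal sketch (the Space id is a placeholder, and uploading requires prior `huggingface-cli login`):

```python
# Upload the current folder to a Space via the Hugging Face Hub client library.
from huggingface_hub import HfApi

def push_to_space(space_id: str, folder: str = ".") -> None:
    """Upload `folder` to the Space `space_id` (requires authentication)."""
    api = HfApi()
    api.upload_folder(folder_path=folder, repo_id=space_id, repo_type="space")

# Example call with a placeholder Space id:
# push_to_space("your-username/codegen-model-repo")
```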
## 📄 License
This project is licensed under the MIT License.