separate summary and hf readme
- README.md +11 -88
- SUMMARY.md +88 -0
README.md
CHANGED
@@ -1,88 +1,11 @@
-# Qwen Multi-label Text Classifier
-
-## Overview
-A multi-label text classifier based on Qwen-1.5B, fine-tuned for coding exercise classification. Supports:
-- Local CPU/GPU inference
-- Hugging Face API deployment
-- Batch evaluation
-- REST API via FastAPI
-- Docker deployment
-
-## Features
-- **9 Label Classification**: Predicts multiple tags per text
-- **CLI Interface**: Run predictions/evaluations from terminal
-- **Dual Backend**: Choose between local or HF inference
-- **GPU Optimized**: CUDA support via Docker
-
-## Installation
-```bash
-git clone https://github.com/your-username/qwen-classifier
-cd qwen-classifier
-python3 -m venv .venv
-source .venv/bin/activate
-pip install -e .
-```
-
-## Usage
-
-### CLI Prediction
-```bash
-# Local inference
-qwen-clf predict "Your coding exercise text" --backend local
-
-# HF Space inference
-qwen-clf predict "Your text" --backend hf --hf-token YOUR_TOKEN
-```
-
-### Batch Evaluation
-```bash
-qwen-clf evaluate dataset.zip --backend local
-```
-
-### API Server
-```bash
-uvicorn app:app --host 0.0.0.0 --port 7860
-```
-
-#### API Endpoints
-| Endpoint | Method | Description |
-|----------|--------|-------------|
-| `/` | GET | Documentation |
-| `/predict` | POST | Single text prediction |
-| `/evaluate` | POST | Batch evaluation (ZIP) |
-| `/health` | GET | Service status |
-
-## Docker Deployment
-```bash
-# Build with GPU support
-docker build -t qwen-classifier .
-
-# Run container
-docker run -p 7860:7860 --gpus all qwen-classifier
-```
-
-## Project Structure
-```
-.
-├── app.py               # FastAPI entry point
-├── Dockerfile           # GPU-optimized container setup
-├── qwen_classifier/     # Core package
-│   ├── cli.py           # Command line interface
-│   ├── model.py         # Qwen classifier implementation
-│   ├── predict.py       # Inference logic
-│   └── evaluate.py      # Batch evaluation
-└── requirements.txt     # Python dependencies
-```
-
-## Configuration
-Edit `qwen_classifier/config.py` to set:
-- `TAG_NAMES`: List of 9 classification tags
-- `HF_REPO`: Default Hugging Face model repo
-- `DEVICE`: Auto-detected CUDA/CPU
-
-## Hugging Face Space
-Live demo:
-[KeivanR/qwen-classifier-demo](https://huggingface.co/spaces/KeivanR/qwen-classifier-demo)
-
-## License
-Apache 2.0 © Keivan Razban
+---
+title: Qwen Classifier Demo
+emoji: 🟢
+colorFrom: green
+colorTo: gray
+sdk: docker
+pinned: false
+short_description: Fine-tuned Qwen to classify coding exercises
+---
+
+Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
SUMMARY.md
ADDED
@@ -0,0 +1,88 @@
+# Qwen Multi-label Text Classifier
+
+## Overview
+A multi-label text classifier based on Qwen-1.5B, fine-tuned for coding exercise classification. Supports:
+- Local CPU/GPU inference
+- Hugging Face API deployment
+- Batch evaluation
+- REST API via FastAPI
+- Docker deployment
+
+## Features
+- **9 Label Classification**: Predicts multiple tags per text
+- **CLI Interface**: Run predictions/evaluations from terminal
+- **Dual Backend**: Choose between local or HF inference
+- **GPU Optimized**: CUDA support via Docker
+
+## Installation
+```bash
+git clone https://github.com/your-username/qwen-classifier
+cd qwen-classifier
+python3 -m venv .venv
+source .venv/bin/activate
+pip install -e .
+```
+
+## Usage
+
+### CLI Prediction
+```bash
+# Local inference
+qwen-clf predict "Your coding exercise text" --backend local
+
+# HF Space inference
+qwen-clf predict "Your text" --backend hf --hf-token YOUR_TOKEN
+```
+
+### Batch Evaluation
+```bash
+qwen-clf evaluate dataset.zip --backend local
+```
+
+### API Server
+```bash
+uvicorn app:app --host 0.0.0.0 --port 7860
+```
+
+#### API Endpoints
+| Endpoint | Method | Description |
+|----------|--------|-------------|
+| `/` | GET | Documentation |
+| `/predict` | POST | Single text prediction |
+| `/evaluate` | POST | Batch evaluation (ZIP) |
+| `/health` | GET | Service status |
+
+## Docker Deployment
+```bash
+# Build with GPU support
+docker build -t qwen-classifier .
+
+# Run container
+docker run -p 7860:7860 --gpus all qwen-classifier
+```
+
+## Project Structure
+```
+.
+├── app.py               # FastAPI entry point
+├── Dockerfile           # GPU-optimized container setup
+├── qwen_classifier/     # Core package
+│   ├── cli.py           # Command line interface
+│   ├── model.py         # Qwen classifier implementation
+│   ├── predict.py       # Inference logic
+│   └── evaluate.py      # Batch evaluation
+└── requirements.txt     # Python dependencies
+```
+
+## Configuration
+Edit `qwen_classifier/config.py` to set:
+- `TAG_NAMES`: List of 9 classification tags
+- `HF_REPO`: Default Hugging Face model repo
+- `DEVICE`: Auto-detected CUDA/CPU
+
+## Hugging Face Space
+Live demo:
+[KeivanR/qwen-classifier-demo](https://huggingface.co/spaces/KeivanR/qwen-classifier-demo)
+
+## License
+Apache 2.0 © Keivan Razban
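
The endpoint table in SUMMARY.md above lists the routes but not the request shape. The sketch below shows one way to call the service with Python's `requests` once `uvicorn app:app --host 0.0.0.0 --port 7860` is running; the `text` field name and the JSON response shape are assumptions for illustration, not taken from this repo.

```python
# Minimal client sketch for the FastAPI service described above.
# Assumptions: the service listens on localhost:7860, /predict accepts
# a JSON body with a "text" field, and both endpoints return JSON.
import requests

BASE_URL = "http://localhost:7860"

# Check that the service is up before sending work.
health = requests.get(f"{BASE_URL}/health", timeout=10)
print(health.json())

# Request tags for a single coding-exercise description.
resp = requests.post(
    f"{BASE_URL}/predict",
    json={"text": "Implement a function that reverses a linked list."},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # expected: predicted tags/scores for the 9 labels
```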
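
The Configuration section names three settings in `qwen_classifier/config.py`. A hedged sketch of such a module follows; only the variable names `TAG_NAMES`, `HF_REPO`, and `DEVICE` come from the README, while the tag list and repo id below are illustrative placeholders.

```python
# Hypothetical sketch of qwen_classifier/config.py.
# Only TAG_NAMES, HF_REPO, and DEVICE are named in the README;
# the concrete values here are placeholders.
import torch

# The 9 classification tags predicted per text (placeholder names).
TAG_NAMES = [
    "arrays", "strings", "dynamic programming", "graphs", "greedy",
    "math", "sorting", "trees", "implementation",
]

# Default Hugging Face model repository to load weights from (placeholder id).
HF_REPO = "KeivanR/qwen-classifier"

# Auto-detected device: CUDA when available, otherwise CPU.
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
```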