# Qwen Multi-label Text Classifier

## Overview
A multi-label text classifier based on Qwen-1.5B, fine-tuned for coding exercise classification. Supports:
- Local CPU/GPU inference
- Hugging Face API deployment
- Batch evaluation
- REST API via FastAPI
- Docker deployment

## Features
- **9-Label Classification**: Predicts multiple tags per text from a fixed set of nine labels
- **CLI Interface**: Run predictions and evaluations from the terminal
- **Dual Backend**: Choose between local and Hugging Face inference
- **GPU Optimized**: CUDA support via Docker
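Multi-label classification differs from single-label in that each tag gets an independent sigmoid score, and every tag above a threshold is returned rather than a single argmax. A minimal sketch of that decision step, using hypothetical tag names and a 0.5 threshold (the real tag list lives in `qwen_classifier/config.py`):

```python
import math

def predict_tags(logits, tag_names, threshold=0.5):
    """Apply a sigmoid to each logit independently; keep tags above threshold."""
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [tag for tag, p in zip(tag_names, probs) if p >= threshold]

# Hypothetical tags and logits, for illustration only
tags = ["arrays", "strings", "recursion", "dp", "graphs",
        "sorting", "math", "greedy", "trees"]
print(predict_tags([2.1, -0.3, 0.8, -1.5, 0.1, -2.0, 1.2, -0.7, 0.4], tags))
```

Because each score is thresholded independently, a single text can come back with zero, one, or several tags.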

## Installation
```bash
git clone https://github.com/your-username/qwen-classifier
cd qwen-classifier
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
```

## Usage

### CLI Prediction
```bash
# Local inference
qwen-clf predict "Your coding exercise text" --backend local

# HF Space inference 
qwen-clf predict "Your text" --backend hf --hf-token YOUR_TOKEN
```

### Batch Evaluation
```bash
qwen-clf evaluate dataset.zip --backend local
```
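The actual metrics are computed in `qwen_classifier/evaluate.py`; as a sketch of how a micro-averaged F1 score is typically derived for multi-label output, assuming the dataset expands to pairs of predicted and gold tag sets:

```python
def micro_f1(pred_sets, gold_sets):
    """Micro-averaged F1: pool true/false positives and negatives across samples."""
    tp = fp = fn = 0
    for pred, gold in zip(pred_sets, gold_sets):
        tp += len(pred & gold)   # tags predicted and correct
        fp += len(pred - gold)   # tags predicted but wrong
        fn += len(gold - pred)   # tags missed
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative predictions vs. gold labels
preds = [{"arrays", "dp"}, {"graphs"}]
golds = [{"arrays"}, {"graphs", "trees"}]
print(micro_f1(preds, golds))
```

Micro-averaging weights every tag decision equally, so frequent tags dominate; a macro average (mean of per-tag F1) is the usual complement when label frequencies are skewed.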

### API Server
```bash
uvicorn app:app --host 0.0.0.0 --port 7860
```

#### API Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/` | GET | Documentation |
| `/predict` | POST | Single text prediction |
| `/evaluate` | POST | Batch evaluation (ZIP) |
| `/health` | GET | Service status |
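A request to `/predict` might be built as follows; the JSON field name `text` is an assumption here, and the authoritative schema is the Pydantic model in `app.py`:

```python
import json

def build_predict_request(text):
    """Build the JSON body for POST /predict (field name assumed; see app.py)."""
    return json.dumps({"text": text})

body = build_predict_request("Implement binary search on a sorted array.")
print(body)
# Send it with e.g.:
#   curl -X POST http://localhost:7860/predict \
#        -H "Content-Type: application/json" \
#        -d "$BODY"
```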

## Docker Deployment
```bash
# Build with GPU support
docker build -t qwen-classifier .

# Run container
docker run -p 7860:7860 --gpus all qwen-classifier
```

## Project Structure
```
.
β”œβ”€β”€ app.py               # FastAPI entry point
β”œβ”€β”€ Dockerfile           # GPU-optimized container setup
β”œβ”€β”€ qwen_classifier/     # Core package
β”‚   β”œβ”€β”€ cli.py          # Command line interface
β”‚   β”œβ”€β”€ model.py        # Qwen classifier implementation  
β”‚   β”œβ”€β”€ predict.py      # Inference logic
β”‚   └── evaluate.py     # Batch evaluation
└── requirements.txt    # Python dependencies
```

## Configuration
Edit `qwen_classifier/config.py` to set:
- `TAG_NAMES`: List of 9 classification tags
- `HF_REPO`: Default Hugging Face model repo
- `DEVICE`: Auto-detected CUDA/CPU
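A hypothetical sketch of what `qwen_classifier/config.py` contains; the tag names and repo ID below are placeholders, and the device check assumes PyTorch is available:

```python
# Placeholder tag names -- the real nine live in qwen_classifier/config.py
TAG_NAMES = ["arrays", "strings", "recursion", "dp", "graphs",
             "sorting", "math", "greedy", "trees"]

# Placeholder repo ID for the default Hugging Face model
HF_REPO = "your-username/qwen-classifier"

# Prefer CUDA when PyTorch sees a GPU, otherwise fall back to CPU
try:
    import torch
    DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    DEVICE = "cpu"
```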

## Hugging Face Space
Live demo:  
[![HF Space](https://img.shields.io/badge/πŸ€—%20Hugging%20Face-Space-blue)](https://huggingface.co./spaces/KeivanR/qwen-classifier-demo)

## License
Apache 2.0 Β© Keivan Razban