---
title: Text Summarizer API
emoji: πŸ“
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: mit
app_port: 7860
---

# Text Summarizer API

A FastAPI-based text summarization service powered by Ollama and Hugging Face Transformers models.

## πŸš€ Features

- **Fast text summarization** using local LLM inference
- **RESTful API** with FastAPI
- **Health monitoring** and logging
- **Docker containerized** for easy deployment
- **Free deployment** on Hugging Face Spaces

## πŸ“‘ API Endpoints

### Health Check
```
GET /health
```

### V1 API (Ollama + Transformers Pipeline)
```
POST /api/v1/summarize
POST /api/v1/summarize/stream
POST /api/v1/summarize/pipeline/stream
```

### V2 API (HuggingFace Streaming)
```
POST /api/v2/summarize/stream
```

### V3 API (Web Scraping + Summarization)
```
POST /api/v3/scrape-and-summarize/stream
```

## 🌐 Live Deployment

**βœ… Successfully deployed and tested on Hugging Face Spaces!**

- **Live Space:** https://colin730-SummarizerApp.hf.space
- **API Documentation:** https://colin730-SummarizerApp.hf.space/docs
- **Health Check:** https://colin730-SummarizerApp.hf.space/health
- **V2 Streaming API:** https://colin730-SummarizerApp.hf.space/api/v2/summarize/stream

### Quick Test
```bash
# Test the live deployment
curl https://colin730-SummarizerApp.hf.space/health
curl -X POST https://colin730-SummarizerApp.hf.space/api/v2/summarize/stream \
  -H "Content-Type: application/json" \
  -d '{"text":"This is a test of the live API.","max_tokens":50}'
```

**Request Format (V1 and V2 compatible):**
```json
{
  "text": "Your long text to summarize here...",
  "max_tokens": 256,
  "prompt": "Summarize the following text concisely:"
}
```

### API Documentation
- **Swagger UI**: `/docs`
- **ReDoc**: `/redoc`

## πŸ”§ Configuration

The service uses the following environment variables:

### V1 Configuration (Ollama)
- `OLLAMA_MODEL`: Model to use (default: `llama3.2:1b`)
- `OLLAMA_HOST`: Ollama service host (default: `http://localhost:11434`)
- `OLLAMA_TIMEOUT`: Request timeout in seconds (default: `60`)
- `ENABLE_V1_WARMUP`: Enable V1 warmup (default: `false`)

### V2 Configuration (HuggingFace)
- `HF_MODEL_ID`: HuggingFace model ID (default: `sshleifer/distilbart-cnn-6-6`)
- `HF_DEVICE_MAP`: Device mapping (default: `auto`; uses GPU if available, otherwise CPU)
- `HF_TORCH_DTYPE`: Torch dtype (default: `auto`)
- `HF_HOME`: HuggingFace cache directory (default: `/tmp/huggingface`)
- `HF_MAX_NEW_TOKENS`: Max new tokens (default: `128`)
- `HF_TEMPERATURE`: Sampling temperature (default: `0.7`)
- `HF_TOP_P`: Nucleus sampling (default: `0.95`)
- `ENABLE_V2_WARMUP`: Enable V2 warmup (default: `true`)

### V3 Configuration (Web Scraping)
- `ENABLE_V3_SCRAPING`: Enable V3 API (default: `true`)
- `SCRAPING_TIMEOUT`: HTTP timeout for scraping (default: `10` seconds)
- `SCRAPING_MAX_TEXT_LENGTH`: Max text to extract (default: `50000` chars)
- `SCRAPING_CACHE_ENABLED`: Enable caching (default: `true`)
- `SCRAPING_CACHE_TTL`: Cache TTL (default: `3600` seconds / 1 hour)
- `SCRAPING_UA_ROTATION`: Enable user-agent rotation (default: `true`)
- `SCRAPING_RATE_LIMIT_PER_MINUTE`: Rate limit per IP (default: `10`)

### Server Configuration
- `SERVER_HOST`: Server host (default: `127.0.0.1`)
- `SERVER_PORT`: Server port (default: `8000`)
- `LOG_LEVEL`: Logging level (default: `INFO`)
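
For local development, any of these can be overridden as ordinary environment variables before starting the server. A minimal sketch, where the variable names come from the lists above but the specific values are illustrative only:

```bash
# Illustrative local overrides -- variable names are documented above,
# the values shown here are examples, not requirements
export OLLAMA_MODEL=llama3.2:1b
export OLLAMA_HOST=http://localhost:11434
export HF_MODEL_ID=sshleifer/distilbart-cnn-6-6
export HF_MAX_NEW_TOKENS=128
export LOG_LEVEL=DEBUG

uvicorn app.main:app --host 127.0.0.1 --port 8000
```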

## 🐳 Docker Deployment

### Local Development
```bash
# Build and run with docker-compose
docker-compose up --build

# Or run directly
docker build -f Dockerfile.hf -t summarizer-app .
docker run -p 7860:7860 summarizer-app
```

### Hugging Face Spaces
This app is optimized for deployment on Hugging Face Spaces using the Docker SDK.

**V2-Only Deployment on HF Spaces:**
- Uses the `sshleifer/distilbart-cnn-6-6` model (~300MB) for fast startup
- No Ollama dependency (saves memory and disk space)
- The model downloads during warmup, so the first request responds immediately
- Optimized for free-tier resource limits

**Environment Variables for HF Spaces:**
```bash
ENABLE_V1_WARMUP=false
ENABLE_V2_WARMUP=true
HF_MODEL_ID=sshleifer/distilbart-cnn-6-6
HF_HOME=/tmp/huggingface
```

## πŸ“Š Performance

### V1 (Ollama + Transformers Pipeline)
- **V1 Models**: llama3.2:1b (Ollama) + distilbart-cnn-6-6 (Transformers)
- **Memory usage**: ~2-4GB RAM (when V1 warmup enabled)
- **Inference speed**: ~2-5 seconds per request
- **Startup time**: ~30-60 seconds (when V1 warmup enabled)

### V2 (HuggingFace Streaming) - Primary on HF Spaces
- **V2 Model**: sshleifer/distilbart-cnn-6-6 (~300MB download)
- **Memory usage**: ~500MB RAM (when V2 warmup enabled)
- **Inference speed**: Real-time token streaming
- **Startup time**: ~30-60 seconds (includes model download when V2 warmup enabled)

### V3 (Web Scraping + Summarization)
- **Dependencies**: trafilatura, httpx, lxml (lightweight, no JavaScript rendering)
- **Memory usage**: ~550MB RAM (V2 + scraping: +10-50MB)
- **Scraping speed**: 200-500ms typical, <10ms on cache hit
- **Total latency**: 2-5 seconds (scrape + summarize)
- **Success rate**: 95%+ article extraction

### Memory Optimization
- **V1 warmup disabled by default** (`ENABLE_V1_WARMUP=false`)
- **V2 warmup enabled by default** (`ENABLE_V2_WARMUP=true`)
- **HuggingFace Spaces**: V2-only deployment (no Ollama)
- **Local development**: V1 endpoints work if Ollama is running externally (see the sketch after this list)
- **distilbart-cnn-6-6 model**: Fine-tuned on CNN/DailyMail and small enough for the HuggingFace Spaces free tier
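
A minimal sketch of that external Ollama setup, assuming the Ollama CLI is installed; the model name and port are the defaults from the configuration section above:

```bash
# Start a local Ollama server (listens on port 11434 by default)
ollama serve &

# Pull the default V1 model
ollama pull llama3.2:1b

# Point the API at the external Ollama instance and run the app
export OLLAMA_HOST=http://localhost:11434
uvicorn app.main:app --host 0.0.0.0 --port 7860
```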

## πŸ› οΈ Development

### Setup
```bash
# Install dependencies
pip install -r requirements.txt

# Run locally
uvicorn app.main:app --host 0.0.0.0 --port 7860
```

### Testing
```bash
# Run tests
pytest

# Run with coverage
pytest --cov=app
```

## πŸ“ Usage Examples

### V1 API (Ollama)
```python
import requests
import json

# V1 streaming summarization
response = requests.post(
    "https://colin730-SummarizerApp.hf.space/api/v1/summarize/stream",
    json={
        "text": "Your long article or text here...",
        "max_tokens": 256
    },
    stream=True
)

for line in response.iter_lines():
    if line.startswith(b'data: '):
        data = json.loads(line[6:])
        print(data["content"], end="")
        if data["done"]:
            break
```

### V2 API (HuggingFace Streaming) - Recommended
```python
import requests
import json

# V2 streaming summarization (same request format as V1)
response = requests.post(
    "https://colin730-SummarizerApp.hf.space/api/v2/summarize/stream",
    json={
        "text": "Your long article or text here...",
        "max_tokens": 128  # V2 uses max_new_tokens
    },
    stream=True
)

for line in response.iter_lines():
    if line.startswith(b'data: '):
        data = json.loads(line[6:])
        print(data["content"], end="")
        if data["done"]:
            break
```

### V3 API (Web Scraping + Summarization) - Android App Primary Use Case

**V3 supports two modes: URL scraping or direct text summarization**

#### Mode 1: URL Scraping (recommended for articles)
```python
import requests
import json

# V3 scrape article from URL and stream summarization
response = requests.post(
    "https://colin730-SummarizerApp.hf.space/api/v3/scrape-and-summarize/stream",
    json={
        "url": "https://example.com/article",
        "max_tokens": 256,
        "include_metadata": True,  # Get article title, author, etc.
        "use_cache": True  # Use cached content if available
    },
    stream=True
)

for line in response.iter_lines():
    if line.startswith(b'data: '):
        data = json.loads(line[6:])

        # First event: metadata
        if data.get("type") == "metadata":
            print(f"Input type: {data['data']['input_type']}")  # 'url'
            print(f"Title: {data['data']['title']}")
            print(f"Author: {data['data']['author']}")
            print(f"Scrape time: {data['data']['scrape_latency_ms']}ms\n")

        # Content events
        elif "content" in data:
            print(data["content"], end="")
            if data["done"]:
                print(f"\n\nTotal time: {data['latency_ms']}ms")
                break
```

#### Mode 2: Direct Text Summarization (fallback when scraping fails)
```python
import requests
import json

# V3 direct text summarization (no scraping)
response = requests.post(
    "https://colin730-SummarizerApp.hf.space/api/v3/scrape-and-summarize/stream",
    json={
        "text": "Your article text here... (minimum 50 characters)",
        "max_tokens": 256,
        "include_metadata": True
    },
    stream=True
)

for line in response.iter_lines():
    if line.startswith(b'data: '):
        data = json.loads(line[6:])

        # First event: metadata
        if data.get("type") == "metadata":
            print(f"Input type: {data['data']['input_type']}")  # 'text'
            print(f"Text length: {data['data']['text_length']} chars\n")

        # Content events
        elif "content" in data:
            print(data["content"], end="")
            if data["done"]:
                break
```

**Note:** Provide either `url` OR `text`, not both. Text mode is useful as a fallback when:
- Article is behind a paywall
- Website blocks scrapers
- User has already extracted the text manually
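
A hedged client-side sketch of that fallback. The failure check is illustrative: this README does not specify how scraping errors are signalled, so the sketch assumes a non-200 status code.

```python
import requests

API = "https://colin730-SummarizerApp.hf.space/api/v3/scrape-and-summarize/stream"

def summarize_stream(url, fallback_text=None, max_tokens=256):
    """Try URL scraping first; retry in text mode if the request fails.

    Assumes a non-200 status indicates a scraping failure -- adjust to the
    actual error signalling used by the service.
    """
    resp = requests.post(API, json={"url": url, "max_tokens": max_tokens}, stream=True)
    if resp.status_code != 200 and fallback_text:
        # Paywall, blocked scraper, etc. -- fall back to direct text summarization
        resp = requests.post(API, json={"text": fallback_text, "max_tokens": max_tokens}, stream=True)
    return resp
```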

### Android Client (SSE)
```kotlin
// Android SSE client example (OkHttp 3.x style APIs)
import com.google.gson.Gson
import okhttp3.*
import java.io.IOException

val client = OkHttpClient()
val request = Request.Builder()
    .url("https://colin730-SummarizerApp.hf.space/api/v2/summarize/stream")
    .post(RequestBody.create(
        MediaType.parse("application/json"),
        """{"text": "Your text...", "max_tokens": 128}"""
    ))
    .build()

client.newCall(request).enqueue(object : Callback {
    override fun onFailure(call: Call, e: IOException) {
        // Required override: surface network failures
        e.printStackTrace()
    }

    override fun onResponse(call: Call, response: Response) {
        val source = response.body()?.source()
        source?.use { bufferedSource ->
            while (true) {
                val line = bufferedSource.readUtf8Line() ?: break  // null = end of stream
                if (line.startsWith("data: ")) {
                    val json = line.substring(6)
                    val data = Gson().fromJson(json, Map::class.java)
                    // Update UI with data["content"]
                    if (data["done"] == true) break
                }
            }
        }
    }
})
```

### cURL Examples
```bash
# Test live deployment
curl https://colin730-SummarizerApp.hf.space/health

# V1 API (if Ollama is available)
curl -X POST "https://colin730-SummarizerApp.hf.space/api/v1/summarize/stream" \
  -H "Content-Type: application/json" \
  -d '{"text": "Your text...", "max_tokens": 256}'

# V2 API (HuggingFace streaming - recommended)
curl -X POST "https://colin730-SummarizerApp.hf.space/api/v2/summarize/stream" \
  -H "Content-Type: application/json" \
  -d '{"text": "Your text...", "max_tokens": 128}'

# V3 API - URL mode (web scraping + summarization)
curl -X POST "https://colin730-SummarizerApp.hf.space/api/v3/scrape-and-summarize/stream" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/article", "max_tokens": 256, "include_metadata": true}'

# V3 API - Text mode (direct summarization, no scraping)
curl -X POST "https://colin730-SummarizerApp.hf.space/api/v3/scrape-and-summarize/stream" \
  -H "Content-Type: application/json" \
  -d '{"text": "Your article text here (minimum 50 characters)...", "max_tokens": 256}'
```

### Test Script
```bash
# Use the included test script
./scripts/test_endpoints.sh https://colin730-SummarizerApp.hf.space
```

## πŸ”’ Security

- Non-root user execution
- Input validation and sanitization
- Rate limiting (configurable)
- API key authentication (optional)

## πŸ“ˆ Monitoring

The service includes:
- Health check endpoint
- Request logging
- Error tracking
- Performance metrics
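
A minimal liveness check against the health endpoint; the response payload format is not documented here, so only the status code is inspected:

```python
import requests

def is_healthy(base_url="https://colin730-SummarizerApp.hf.space"):
    """Return True if GET /health responds with HTTP 200."""
    try:
        return requests.get(f"{base_url}/health", timeout=5).status_code == 200
    except requests.RequestException:
        return False
```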

## πŸ†˜ Troubleshooting

### Common Issues

1. **Model not loading**: Check if Ollama is running and model is pulled
2. **Out of memory**: Ensure sufficient RAM for the models in use (~500MB for the V2-only deployment, 2-4GB when V1/Ollama warmup is enabled)
3. **Slow startup**: Normal on first run due to model download
4. **API errors**: Check the application logs and verify the request format against the `/docs` reference

### Logs
View application logs in the Hugging Face Spaces interface or check the health endpoint for service status.

## πŸ“„ License

MIT License - see LICENSE file for details.

## 🀝 Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Submit a pull request

---

## βœ… Deployment Status

**Successfully deployed and tested on Hugging Face Spaces!** πŸš€

- βœ… **Proxy-aware FastAPI** with `root_path` support
- βœ… **All endpoints working** (health, docs, V2 API)
- βœ… **Real-time streaming** summarization
- βœ… **No 404 errors** - all paths correctly configured
- βœ… **Test script included** for easy verification

### Recent Fixes Applied
- Added `root_path=os.getenv("HF_SPACE_ROOT_PATH", "")` for HF Spaces proxy awareness
- Ensured binding to `0.0.0.0:7860` as required by HF Spaces
- Verified V2 router paths (`/api/v2/summarize/stream`) with no double prefixes
- Created test script for external endpoint verification
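
A minimal sketch of the proxy-aware setup described above (the module layout and any other app options are assumptions, not the actual `app/main.py`):

```python
import os

import uvicorn
from fastapi import FastAPI

# Respect the HF Spaces proxy prefix when one is provided (empty string otherwise)
app = FastAPI(root_path=os.getenv("HF_SPACE_ROOT_PATH", ""))

if __name__ == "__main__":
    # HF Spaces requires the server to bind to 0.0.0.0:7860
    uvicorn.run(app, host="0.0.0.0", port=7860)
```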

**Live Space:** https://colin730-SummarizerApp.hf.space 🎯