# Hugging Face Spaces Deployment Guide

## Changes Made for Hugging Face Spaces Compatibility
### 1. Port Configuration

- **Updated `backend/app.py`**: the server now reads `PORT` from the environment (default: 7860)

  ```python
  port = int(os.environ.get("PORT", 7860))
  uvicorn.run(app, host="0.0.0.0", port=port)
  ```

- **Updated `Dockerfile`**: `CMD` uses `${PORT:-7860}` for dynamic port binding
### 2. Filesystem Permissions

- **Changed output directory**: `OUTPUT_DIR` now points to `/tmp/outputs` instead of `./outputs`
  - Hugging Face Spaces containers have a read-only `/app` directory
  - `/tmp` is writable and suitable for temporary files
- **Note**: files in `/tmp` are ephemeral and are lost on restart
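The directory setup can be sketched as follows. This is a minimal sketch, not the exact code in `backend/app.py`; the `OUTPUT_DIR` environment override is an assumption added for flexibility:

```python
import os

# /app is read-only on Spaces, so all generated artifacts go under /tmp.
# The env-var override is a convenience assumption, not the app's actual code.
OUTPUT_DIR = os.environ.get("OUTPUT_DIR", "/tmp/outputs")
os.makedirs(OUTPUT_DIR, exist_ok=True)  # safe to call on every startup

# Example: build a path for a generated report
report_path = os.path.join(OUTPUT_DIR, "report.html")
```

Calling `os.makedirs(..., exist_ok=True)` on startup is idempotent, so restarts recreate the (ephemeral) directory without errors.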
### 3. Static File Serving

- **Fixed sample image serving**: mounted the `/cyto`, `/colpo`, and `/histo` directories from `frontend/dist`
- **Added a catch-all route**: serves static files (logos, banners) from the `dist` root
- **Frontend dist path fallback**: checks both `./frontend/dist` (Docker) and `../frontend/dist` (local dev)
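The dist-path fallback logic can be sketched like this. The function name `resolve_dist_dir` is hypothetical; the actual mounting in `backend/app.py` uses FastAPI's `StaticFiles` on the resolved directory:

```python
import os

def resolve_dist_dir() -> str:
    """Return the first frontend/dist path that exists.

    Checks the Docker layout first (./frontend/dist), then the
    local-dev layout (../frontend/dist). Falls back to the Docker
    path so a missing build surfaces as an error at mount time.
    """
    for candidate in ("./frontend/dist", "../frontend/dist"):
        if os.path.isdir(candidate):
            return candidate
    return "./frontend/dist"

# In backend/app.py the resolved directory is then mounted, e.g.:
# app.mount("/", StaticFiles(directory=resolve_dist_dir(), html=True))
```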
### 4. Frontend Configuration

- **Frontend already configured**: uses `window.location.origin` in production, so API calls work on any domain
- **Vite build**: copies the contents of `public/` into `dist/` automatically

---
## Deployment Checklist

### Step 1: Create a Hugging Face Space

1. Go to https://huggingface.co/spaces
2. Click **"Create new Space"**
3. Choose:
   - **Space SDK**: Docker
   - **Hardware**: CPU Basic (free) or GPU (for faster inference)
   - **Visibility**: Public or Private
### Step 2: Set Up Git LFS (for large model files)

Your project contains large model files (`.pt`, `.pth`, `.keras`). Track them with Git LFS:

```bash
# Install Git LFS if not already installed
git lfs install

# Track model files
git lfs track "*.pt"
git lfs track "*.pth"
git lfs track "*.keras"
git lfs track "*.pkl"

# Commit .gitattributes
git add .gitattributes
git commit -m "Track model files with Git LFS"
```
### Step 3: Configure Secrets (Optional)

If you want AI-generated summaries using Mistral, add a secret:

1. Go to Space Settings → Variables and secrets
2. Add a new secret:
   - Name: `HF_TOKEN`
   - Value: your Hugging Face token (from https://huggingface.co/settings/tokens)
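Secrets configured in the Space settings are exposed to the container as environment variables. How `backend/app.py` actually consumes the token is not shown here; the helper below is a hedged sketch of the pattern, with a hypothetical `summaries_enabled` name:

```python
import os
from typing import Optional

# HF_TOKEN is optional: without it, the app should skip AI-generated
# summaries rather than fail at startup.
def summaries_enabled(token: Optional[str] = None) -> bool:
    if token is None:
        token = os.environ.get("HF_TOKEN", "")
    return bool(token.strip())
```

Treating a missing or blank token as "feature off" keeps the Space usable even when the secret is not configured.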
### Step 4: Push Code to the Space

```bash
# Add the Space as a remote
git remote add space https://huggingface.co/spaces/<YOUR_USERNAME>/<SPACE_NAME>

# Push to the Space
git push space main
```
### Step 5: Monitor the Build

- Hugging Face builds the Docker image (this may take 10-20 minutes)
- Watch progress in the Space's "Logs" tab
- Once built, the Space starts automatically

---
## Troubleshooting

### Build Issues

**Problem**: Docker build times out or fails

- **Solution**: reduce image size by pinning lighter dependencies in `requirements.txt`
- **Solution**: consider using pre-built wheels for TensorFlow/PyTorch

**Problem**: model files not found

- **Solution**: ensure Git LFS is configured and the model files are committed
- **Solution**: check that the model paths in `backend/app.py` match the actual filenames
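A small startup check can catch both causes early. This is a sketch, not the app's actual code; the filenames are taken from the file-structure section of this guide, so adjust the list to match your repo:

```python
from pathlib import Path

# Model weights expected next to app.py. A file that is missing (or
# only a few hundred bytes) usually means Git LFS pointers were
# pushed instead of the real binaries.
MODEL_FILES = [
    "best2.pt",
    "MWTclass2.pth",
    "yolo_colposcopy.pt",
    "histopathology_trained_model.keras",
]

def missing_models(base_dir: str = ".") -> list:
    base = Path(base_dir)
    return [name for name in MODEL_FILES if not (base / name).is_file()]

# Call at startup and fail loudly:
# missing = missing_models()
# if missing:
#     raise RuntimeError(f"Missing model files: {missing}")
```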
### Runtime Issues

**Problem**: 404 errors for sample images

- **Solution**: rebuild the frontend: `cd frontend && npm run build`
- **Solution**: verify that the contents of `frontend/public/` are copied into `dist/`

**Problem**: permission denied errors

- **Solution**: all writes should go to `/tmp/outputs` (already fixed)
- **Solution**: never write to the `/app` directory

**Problem**: port binding errors

- **Solution**: use the `$PORT` environment variable (already configured in the Dockerfile and `app.py`)
### Performance Issues

**Problem**: slow startup or inference

- **Solution**: models load at startup; consider lazy loading on first request
- **Solution**: upgrade to a GPU hardware tier for faster inference
- **Solution**: add caching for model weights
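Lazy loading with caching can be sketched as below. `load_yolo_model` is a stub standing in for the real constructors in `backend/app.py` (e.g. `YOLO("best2.pt")`); the point is the pattern, not the loader:

```python
import functools

def load_yolo_model(weights_path: str):
    # Stub: replace with the real loader used in backend/app.py.
    return {"weights": weights_path}

@functools.lru_cache(maxsize=None)
def get_model(name: str):
    # The first call for a given name loads the model; later calls
    # return the cached instance. Startup stays fast, and memory is
    # only spent on models that are actually requested.
    loaders = {
        "cytology": lambda: load_yolo_model("best2.pt"),
        "colposcopy": lambda: load_yolo_model("yolo_colposcopy.pt"),
        "histopathology": lambda: load_yolo_model("histopathology_trained_model.keras"),
    }
    return loaders[name]()
```

The trade-off: the first request per model pays the load cost, so pair this with a health check that reports readiness.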
---
## File Structure Expected in the Space

```
/app/
├── app.py                              # Main FastAPI app
├── model.py, model_histo.py, etc.      # Model definitions
├── augmentations.py                    # Image preprocessing
├── requirements.txt                    # Python dependencies
├── best2.pt                            # YOLO cytology model
├── MWTclass2.pth                       # MWT classifier
├── yolo_colposcopy.pt                  # YOLO colposcopy model
├── histopathology_trained_model.keras  # Histopathology model
├── logistic_regression_model.pkl       # CIN classifier (optional)
└── frontend/
    └── dist/                           # Built frontend
        ├── index.html
        ├── assets/                     # JS/CSS bundles
        ├── cyto/                       # Sample cytology images
        ├── colpo/                      # Sample colposcopy images
        ├── histo/                      # Sample histopathology images
        └── *.png, *.jpeg               # Logos, banners
```
---

## Access Your Space

Once deployed, your app will be available at:

```
https://huggingface.co/spaces/<YOUR_USERNAME>/<SPACE_NAME>
```

The frontend is served at `/`, and the API is accessible at:

- `POST /predict/` - run model inference
- `POST /reports/` - generate medical reports
- `GET /health` - health check
- `GET /models` - list available models
---

## Important Notes

### Ephemeral Storage

- Files in `/tmp/outputs` are **lost on restart**
- For persistent reports, consider:
  - downloading immediately after generation
  - uploading to external storage (S3, Hugging Face Datasets)
  - using Persistent Storage (requires a paid tier)
### Model Loading Time

- All models load at startup (~30-60 seconds)
- The first request after a restart may be slower
- Consider implementing a health check endpoint that waits for the models to load
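A readiness flag gives `/health` something to report beyond "the container is up". The names below (`MODEL_STATE`, `mark_models_loaded`) are a sketch, not the app's actual code:

```python
# Simple readiness flag the /health endpoint can report, so callers
# can distinguish "container started" from "models loaded".
MODEL_STATE = {"loaded": False}

def mark_models_loaded():
    # Call once, after all model loads complete at startup.
    MODEL_STATE["loaded"] = True

def health():
    # The frontend (or an external probe) can poll this until it
    # reads "ready" before sending inference traffic.
    return {"status": "ready" if MODEL_STATE["loaded"] else "loading"}
```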
### Resource Limits

- Free CPU tier: limited RAM and CPU
- The models are memory-intensive (TensorFlow + PyTorch + YOLO)
- You may need the **CPU Upgrade** or **GPU** tier for production use

### CORS

- The API currently allows all origins (`allow_origins=["*"]`)
- For production, restrict this to your Space domain
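Restricting origins amounts to replacing the `"*"` entry with an explicit allow-list. The sketch below shows the list plus the exact-match check that an explicit `allow_origins` list implies; the placeholder domain is an assumption you must substitute:

```python
# Restrict CORS to the Space's own domain instead of "*".
# The URL below is a placeholder; substitute your actual Space domain.
ALLOWED_ORIGINS = [
    "https://<YOUR_USERNAME>-<SPACE_NAME>.hf.space",
]

def origin_allowed(origin: str) -> bool:
    # Exact-match check, as performed for an explicit (non-wildcard)
    # allow_origins list.
    return origin in ALLOWED_ORIGINS

# In backend/app.py the list would replace the "*" entry, e.g.:
# app.add_middleware(CORSMiddleware, allow_origins=ALLOWED_ORIGINS, ...)
```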
---

## Next Steps After Deployment

1. **Test all three models**:
   - Upload a cytology sample → test YOLO detection
   - Upload a colposcopy sample → test CIN classification
   - Upload a histopathology sample → test breast cancer classification
2. **Generate a test report**:
   - Run an analysis
   - Fill out the patient metadata
   - Generate an HTML/PDF report
   - Verify that the download links work
3. **Monitor performance**:
   - Check inference times
   - Monitor memory usage in the Space logs
   - Consider upgrading hardware if needed
4. **Share your Space**:
   - Add a README with usage instructions
   - Include sample images in the repo
   - Add citations for the model papers
---

## Support

If you encounter issues:

1. Check the Space logs: Settings → Logs
2. Verify that all model files are present: Settings → Files
3. Test locally with Docker: `docker build -t pathora . && docker run -p 7860:7860 pathora`
4. Ask on the Hugging Face forums: https://discuss.huggingface.co/

---

**Deployment ready!**