# Hugging Face Spaces Deployment Guide
## Changes Made for Hugging Face Spaces Compatibility
### 1. Port Configuration
- **Updated `backend/app.py`**: the server now reads `PORT` from the environment (default: 7860):

  ```python
  port = int(os.environ.get("PORT", 7860))
  uvicorn.run(app, host="0.0.0.0", port=port)
  ```

- **Updated `Dockerfile`**: `CMD` uses `${PORT:-7860}` for dynamic port binding
### 2. Filesystem Permissions
- **Changed output directory**: `OUTPUT_DIR` now uses `/tmp/outputs` instead of `./outputs`
- Hugging Face Spaces containers have a read-only `/app` directory; `/tmp` is writable for temporary files
- **Note**: files in `/tmp` are ephemeral and lost on restart
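A minimal sketch of the pattern, assuming the output path is resolved once at startup (the `OUTPUT_DIR` environment override is an assumption for local testing, not something the app necessarily reads):

```python
import os

# /app is read-only on Spaces, so all writes go to /tmp.
# The OUTPUT_DIR env var here is a hypothetical override for local runs.
OUTPUT_DIR = os.environ.get("OUTPUT_DIR", "/tmp/outputs")
os.makedirs(OUTPUT_DIR, exist_ok=True)  # create it once; safe if it already exists
```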
### 3. Static File Serving
- **Fixed sample image serving**: mounted the `/cyto`, `/colpo`, and `/histo` directories from `frontend/dist`
- **Added catch-all route**: serves static files (logos, banners) from the dist root
- **Frontend dist path fallback**: checks both `./frontend/dist` (Docker) and `../frontend/dist` (local dev)
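The fallback can be a small helper run before mounting static files; this is a sketch (the function name `resolve_dist` is illustrative, not the app's actual code), and the resolved path would then be passed to FastAPI's `app.mount(...)`:

```python
from pathlib import Path

def resolve_dist(candidates=("./frontend/dist", "../frontend/dist")) -> Path:
    """Return the first existing dist directory; default to the first
    candidate (the Docker layout) if none exists yet."""
    for c in candidates:
        p = Path(c)
        if p.is_dir():
            return p
    return Path(candidates[0])
```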
### 4. Frontend Configuration
- **Frontend already configured**: uses `window.location.origin` in production, so API calls work on any domain
- **Vite build**: copies `public/` contents to `dist/` automatically
## Deployment Checklist
### Step 1: Create a Hugging Face Space
1. Go to https://huggingface.co/spaces
2. Click "Create new Space"
3. Choose:
   - **Space SDK**: Docker
   - **Hardware**: CPU Basic (free) or GPU (for faster inference)
   - **Visibility**: Public or Private
### Step 2: Set Up Git LFS (for large model files)
Your project has large model files (`.pt`, `.pth`, `.keras`). Track them with Git LFS:
```bash
# Install Git LFS if not already installed
git lfs install

# Track model files
git lfs track "*.pt"
git lfs track "*.pth"
git lfs track "*.keras"
git lfs track "*.pkl"

# Commit .gitattributes
git add .gitattributes
git commit -m "Track model files with Git LFS"
```
### Step 3: Configure Secrets (Optional)
If you want AI-generated summaries using Mistral, add a secret:
1. Go to **Space Settings → Variables and secrets**
2. Add a new secret:
   - **Name**: `HF_TOKEN`
   - **Value**: your Hugging Face token (from https://huggingface.co/settings/tokens)
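Spaces injects secrets as environment variables inside the container, so the backend can gate the Mistral summary feature on the token's presence. A sketch (the helper name `summaries_enabled` is illustrative):

```python
import os

def summaries_enabled() -> bool:
    """AI-generated summaries are only attempted when the HF_TOKEN
    secret is present in the environment (set via Space settings)."""
    return bool(os.environ.get("HF_TOKEN"))
```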
### Step 4: Push Code to the Space
```bash
# Add the Space as a remote
git remote add space https://huggingface.co/spaces/<YOUR_USERNAME>/<SPACE_NAME>

# Push to the Space
git push space main
```
### Step 5: Monitor the Build
- Hugging Face will build the Docker image (this may take 10-20 minutes)
- Watch logs in the Space's "Logs" tab
- Once built, the Space will automatically start
## Troubleshooting
### Build Issues
**Problem**: Docker build times out or fails

- **Solution**: reduce the image size by pinning lighter dependencies in `requirements.txt`
- **Solution**: consider using pre-built wheels for TensorFlow/PyTorch
**Problem**: Model files not found

- **Solution**: ensure Git LFS is configured and the model files are committed
- **Solution**: check that the model paths in `backend/app.py` match the actual filenames
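A common cause is that the files were committed without LFS being fetched: the checkout then contains small text pointer stubs instead of weights. The header string below is part of the Git LFS pointer file format, so a quick heuristic check is possible (the helper name is illustrative):

```python
def is_lfs_pointer(path: str) -> bool:
    """Heuristic: an unfetched Git LFS file is a tiny text stub whose
    first line is the LFS spec header, not binary model weights."""
    with open(path, "rb") as f:
        head = f.read(100)
    return head.startswith(b"version https://git-lfs.github.com/spec/v1")
```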
### Runtime Issues
**Problem**: 404 errors for sample images

- **Solution**: rebuild the frontend: `cd frontend && npm run build`
- **Solution**: verify that `frontend/public/` contents are copied to `dist/`
**Problem**: Permission denied errors

- **Solution**: all writes should go to `/tmp/outputs` (already fixed)
- **Solution**: never write to the `/app` directory
**Problem**: Port binding errors

- **Solution**: use the `$PORT` env var (already configured in the Dockerfile and app.py)
### Performance Issues
**Problem**: Slow startup or inference

- **Solution**: models load at startup; consider lazy loading on first request
- **Solution**: upgrade to a GPU hardware tier for faster inference
- **Solution**: add caching for model weights
## File Structure Expected in the Space
```
/app/
├── app.py                              # Main FastAPI app
├── model.py, model_histo.py, etc.      # Model definitions
├── augmentations.py                    # Image preprocessing
├── requirements.txt                    # Python dependencies
├── best2.pt                            # YOLO cytology model
├── MWTclass2.pth                       # MWT classifier
├── yolo_colposcopy.pt                  # YOLO colposcopy model
├── histopathology_trained_model.keras  # Histopathology model
├── logistic_regression_model.pkl       # CIN classifier (optional)
└── frontend/
    └── dist/                           # Built frontend
        ├── index.html
        ├── assets/                     # JS/CSS bundles
        ├── cyto/                       # Sample cytology images
        ├── colpo/                      # Sample colposcopy images
        ├── histo/                      # Sample histopathology images
        └── *.png, *.jpeg               # Logos, banners
```
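A startup sanity check along these lines can surface missing weights before the first request fails; the filenames are taken from the tree above, and the helper name is illustrative:

```python
from pathlib import Path

# Required model files, as listed in the expected layout above.
REQUIRED_MODELS = [
    "best2.pt",
    "MWTclass2.pth",
    "yolo_colposcopy.pt",
    "histopathology_trained_model.keras",
]

def missing_models(root: str = "/app") -> list:
    """Return the required model files absent from the app root."""
    return [name for name in REQUIRED_MODELS if not (Path(root) / name).exists()]
```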
## Access Your Space
Once deployed, your app will be available at:
https://huggingface.co/spaces/<YOUR_USERNAME>/<SPACE_NAME>
The frontend is served at `/`, and the API is accessible at:

- `POST /predict/` - Run model inference
- `POST /reports/` - Generate medical reports
- `GET /health` - Health check
- `GET /models` - List available models
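A small stdlib client is enough to exercise the deployed API; this is a sketch, assuming the direct app URL of a Docker Space (`https://<user>-<space>.hf.space`, distinct from the huggingface.co page URL) and that `/health` returns JSON:

```python
import json
import urllib.request

def api_url(base: str, path: str) -> str:
    """Join the Space base URL with an endpoint path."""
    return base.rstrip("/") + "/" + path.lstrip("/")

def check_health(base: str) -> dict:
    """GET /health and return the parsed JSON body."""
    with urllib.request.urlopen(api_url(base, "/health"), timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))
```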
## ⚠️ Important Notes
### Ephemeral Storage
- Files in `/tmp/outputs` are lost on restart
- For persistent reports, consider:
  - Downloading immediately after generation
  - Uploading to external storage (S3, Hugging Face Datasets)
  - Using Persistent Storage (requires a paid tier)
### Model Loading Time
- All models load at startup (~30-60 seconds)
- The first request after a restart may be slower
- Consider implementing a health check endpoint that waits for the models
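One way to make the health check model-aware is a readiness flag that the startup task flips once loading finishes; `/health` then reports "loading" until the models are usable. A framework-free sketch (the FastAPI handler would simply return this dict; the names are illustrative):

```python
import threading

# Flipped by the startup task once all models have finished loading.
models_ready = threading.Event()

def health_status() -> dict:
    """Body a /health handler could return: 'loading' until models are up."""
    return {"status": "ok" if models_ready.is_set() else "loading"}
```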
### Resource Limits
- Free CPU tier: Limited RAM and CPU
- Models are memory-intensive (TensorFlow + PyTorch + YOLO)
- May need CPU Upgrade or GPU tier for production use
### CORS
- Currently allows all origins (`allow_origins=["*"]`)
- For production, restrict to your Space domain
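The restriction can be derived at startup rather than hard-coded; this sketch assumes the `SPACE_HOST` environment variable that Spaces sets inside the container, and the resulting list would be passed to FastAPI's `CORSMiddleware` as `allow_origins`:

```python
import os

def allowed_origins() -> list:
    """Restrict CORS to the Space's own domain when running on Spaces.
    SPACE_HOST is set by Hugging Face inside the container; the "*"
    fallback keeps local development working."""
    host = os.environ.get("SPACE_HOST")
    return [f"https://{host}"] if host else ["*"]
```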
## Next Steps After Deployment
1. **Test all three models:**
   - Upload a cytology sample → test YOLO detection
   - Upload a colposcopy sample → test CIN classification
   - Upload a histopathology sample → test breast cancer classification
2. **Generate a test report:**
   - Run an analysis
   - Fill out the patient metadata
   - Generate an HTML/PDF report
   - Verify that the download links work
3. **Monitor performance:**
   - Check inference times
   - Monitor memory usage in the Space logs
   - Consider upgrading hardware if needed
4. **Share your Space:**
   - Add a README with usage instructions
   - Include sample images in the repo
   - Add citations for the model papers
## Support
If you encounter issues:

1. Check the Space logs: **Settings → Logs**
2. Verify all model files are present: **Settings → Files**
3. Test locally with Docker: `docker build -t pathora . && docker run -p 7860:7860 pathora`
4. Open a thread on the Hugging Face forum: https://discuss.huggingface.co/
Deployment ready!