---
title: Denoiser-Server
emoji: π
colorFrom: indigo
colorTo: purple
sdk: docker
app_port: 8080
app_file: Dockerfile
pinned: false
---
# Image Denoising using UNET and its Variants

A Deep Learning approach to remove noise from images.
## 1. Project Problem Statement: The Critical Need for Image Denoising

Digital images are indispensable data sources across numerous high-stakes industries, yet they are universally susceptible to noise corruption introduced during acquisition, transmission, or processing. This noise, whether Gaussian, Poisson, or Speckle, degrades images in two critical ways:

- Impairing Human Perception: Noise obscures subtle features and textures, significantly lowering the visual fidelity required for accurate human interpretation.
- Compromising Machine Reliability: Noise introduces spurious data points that confuse downstream Computer Vision tasks, drastically reducing the accuracy of algorithms used for analysis and automation.

The challenge is magnified across essential fields:

- In Medical Imaging (e.g., MRI, CT), noise threatens the ability to identify critical, life-saving diagnostic features.
- In Industrial Quality Control, noise leads to costly false positives or false negatives during automated inspection.
- In Remote Sensing and Astronomy, noise prevents the reliable extraction of scientific data from satellite and telescopic imagery.
The objective of this project is to develop and evaluate a robust image denoising solution capable of effectively suppressing varied noise types while preserving crucial structural details, thereby elevating the reliability and precision of visual data for both human experts and advanced Machine Learning systems.
## 2. Sample Results
## 3. Project Structure

```
├── .gitignore
├── .gitattributes
├── .github/workflows
│   └── sync_to_hf.yml
├── Dockerfile
├── api-test.py
├── handler.py
├── requirements.txt
├── images
└── README.md
```
- Dataset Preparation
- UNET_training
- Residual-UNET_training
- CBAM-Residual-UNET_training
- TorchScript_comparison - Model Archiving
## 4. Dataset Used

The models were trained on an augmented dataset of 32,000 clean/noisy patch pairs derived from the BSD500 dataset, using a 128×128 patch size with dynamic D4 geometric augmentation. To ensure robustness against real-world degradation, we employed a hybrid noise model incorporating four components (a sketch of this degradation pipeline follows the list below):

- Mixed Sensor Noise: a combination of Additive White Gaussian Noise (σ_std ∈ [0, 30]) and Signal-Dependent Poisson Noise (a ∈ [0, 0.05]).
- Impulse Noise: sparse Salt-and-Pepper noise (density ∈ [0.001, 0.005]).
- Structured Artifacts: JPEG compression with randomized quality (∈ [70, 95]).
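For concreteness, such a degradation pipeline could be implemented with NumPy and OpenCV as sketched below. This is an illustrative approximation, not the project's actual preparation code: the function name `degrade_patch`, the order in which the noise components are applied, and the exact parameter sampling are assumptions; only the parameter ranges come from the description above.

```python
import numpy as np
import cv2

def degrade_patch(clean, rng=None):
    """Hypothetical hybrid degradation of a clean patch in [0, 1].
    Parameter ranges follow the dataset description; everything else
    (names, application order) is illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    x = clean.astype(np.float32)

    # Signal-dependent Poisson (shot) noise, strength a in (0, 0.05]
    a = rng.uniform(1e-4, 0.05)
    x = rng.poisson(np.clip(x, 0.0, 1.0) / a).astype(np.float32) * a

    # Additive white Gaussian noise, sigma_std in [0, 30] on a 0-255 scale
    sigma = rng.uniform(0.0, 30.0) / 255.0
    x = x + rng.normal(0.0, sigma, size=x.shape).astype(np.float32)

    # Sparse salt-and-pepper noise, density in [0.001, 0.005]
    p = rng.uniform(0.001, 0.005)
    mask = rng.random(x.shape[:2]) < p
    salt = rng.random(x.shape[:2]) < 0.5
    x[mask & salt] = 1.0
    x[mask & ~salt] = 0.0

    # JPEG compression artifacts, quality in [70, 95]
    quality = int(rng.integers(70, 96))
    x8 = np.clip(x * 255.0, 0, 255).astype(np.uint8)
    _, buf = cv2.imencode(".jpg", x8, [cv2.IMWRITE_JPEG_QUALITY, quality])
    x = cv2.imdecode(buf, cv2.IMREAD_UNCHANGED).astype(np.float32) / 255.0

    return np.clip(x, 0.0, 1.0)
```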
Due to the complex, non-linear nature of this hybrid noise model, we quantified the overall degradation using the Effective Noise Level (σ_eff), defined as the standard deviation of the entire noise residual (y − x) across the validation set. The measured effective noise level for this challenging dataset was σ_eff = 79.32 (scaled to 0-255). All performance metrics (PSNR, SSIM) presented below are reported against this highly degraded baseline.
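The effective noise level can be estimated along these lines (a minimal sketch; the helper name and the assumption that patches are stored as float arrays in [0, 1] are illustrative):

```python
import numpy as np

def effective_noise_level(clean_patches, noisy_patches):
    """Std of the full noise residual (y - x) over clean/noisy pairs,
    reported on a 0-255 scale. Assumes float inputs in [0, 1]."""
    residual = np.concatenate(
        [(y - x).ravel() for x, y in zip(clean_patches, noisy_patches)]
    )
    return float(residual.std() * 255.0)
```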
## 5. Model Architectures
| Model | Description | Key Features |
|---|---|---|
| U-Net | Baseline architecture for image-to-image restoration | Encoder-decoder skip connections |
| Residual U-Net | Adds residual blocks to improve feature flow | Residual connections within U-Net blocks |
| Residual U-Net + CBAM | Incorporates the Convolutional Block Attention Module | Focuses noise removal on salient channels and spatial locations |
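As a reference point, below is a minimal PyTorch sketch of a CBAM block as it could be inserted into a U-Net stage. The reduction ratio (16) and 7×7 spatial kernel follow the defaults of the CBAM paper listed in the references; they are not necessarily the values used in this project's notebooks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAM(nn.Module):
    """Minimal CBAM block: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Shared MLP (1x1 convs) for the channel-attention branch
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Single conv over the [avg, max] maps for the spatial-attention branch
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        # Channel attention: re-weight each feature map
        ca = torch.sigmoid(self.mlp(F.adaptive_avg_pool2d(x, 1)) +
                           self.mlp(F.adaptive_max_pool2d(x, 1)))
        x = x * ca
        # Spatial attention: re-weight each spatial location
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(dim=1, keepdim=True),
             x.max(dim=1, keepdim=True).values], dim=1)))
        return x * sa
```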
## 6. Training Setup
| Platform | Purpose | Notes |
|---|---|---|
| Google Colab | Dataset prep + initial testing | Limited GPU runtime |
| Kaggle | Model training | Used for high-performance GPUs |
| Google Drive | Model & dataset storage | For cross-platform access |
## 7. Optimization

Inference-time speedup of TorchScript models over ordinary serialized (eager-mode) models:
| Model | Speedup |
|---|---|
| U-Net | 39.18 % |
| Residual U-Net | 43.77 % |
| Attention Residual U-Net | 30.72 % |
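The comparison can be reproduced along the following lines. This is an illustrative sketch: the timing helper, warm-up count, and input shape are assumptions, and the project's TorchScript_comparison notebook may measure latency differently.

```python
import time
import torch

def mean_latency_ms(model, inp, iters=50, warmup=5):
    """Average forward-pass wall-clock latency in milliseconds (CPU timing;
    GPU timing would additionally need torch.cuda.synchronize)."""
    with torch.no_grad():
        for _ in range(warmup):
            model(inp)
        start = time.perf_counter()
        for _ in range(iters):
            model(inp)
    return (time.perf_counter() - start) / iters * 1000.0

# Hypothetical usage, assuming `unet` is one of the trained models:
# unet.eval()
# scripted = torch.jit.script(unet)          # or torch.jit.trace(unet, x)
# x = torch.randn(1, 3, 128, 128)
# print(mean_latency_ms(unet, x), mean_latency_ms(scripted, x))
```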
## 8. Deployment (Backend)

- Backend Framework: TorchServe
- Containerization: Docker
- Deployment Platform: Hugging Face Spaces
- Hugging Face Space link: here
- Artifacts: .mar model files stored here
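Once the Space is running, the TorchServe inference API can be exercised with a request similar to the one in api-test.py. The sketch below is an assumption of how such a call might look: the model name `denoiser`, the local URL, and the expectation that handler.py returns raw image bytes are placeholders, not confirmed values.

```python
import requests

# TorchServe serves registered models at /predictions/<model_name> on its
# inference port (8080 here, matching app_port above).
URL = "http://localhost:8080/predictions/denoiser"

with open("noisy.png", "rb") as f:
    resp = requests.post(
        URL,
        data=f.read(),
        headers={"Content-Type": "application/octet-stream"},
    )
resp.raise_for_status()

# Assumes the handler returns the denoised image as raw bytes; adjust if it
# returns JSON instead.
with open("denoised.png", "wb") as out:
    out.write(resp.content)
```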
## 9. Frontend (Next.js)

- Repo: Frontend Repo Link
- Platform: Vercel
- Provides a simple web interface for uploading noisy images and visualizing denoised outputs.
- Open the web frontend here
## 10. Results
| Model | PSNR | SSIM | Notes |
|---|---|---|---|
| U-Net | 28.7583 | 0.8444 | Baseline |
| Residual U-Net | 28.7630 | 0.8415 | Better texture recovery |
| Residual U-Net + CBAM | 29.0086 | 0.8485 | Best performance |
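For reference, PSNR and SSIM can be computed per image pair with scikit-image as sketched below; this is illustrative and may differ from how the evaluation notebooks compute the reported averages.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(clean, denoised):
    """PSNR/SSIM for one clean/denoised pair of float images in [0, 1].
    Table values would be averages of these over the validation set."""
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=1.0)
    ssim = structural_similarity(clean, denoised, data_range=1.0,
                                 channel_axis=-1)  # drop for grayscale
    return psnr, ssim
```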
## 11. References

[1] U-Net: Convolutional Networks for Biomedical Image Segmentation
[2] Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation
[3] Layer Normalization
[4] CBAM: Convolutional Block Attention Module
[5] Attention-based UNet enabled Lightweight Image Semantic Communication System over Internet of Things
[6] Application of ResUNet-CBAM in Thin-Section Image Segmentation of Rocks
## 12. Author
Rajeev Ahirwar
