# Whisper Streaming Web: Real-time Speech-to-Text with Web UI & FastAPI WebSocket
This fork of Whisper Streaming adds a ready-to-use HTML interface, making it super easy to start transcribing audio directly from your browser. Just launch the local server, allow microphone access, and start streaming. Everything runs locally on your machine 🎙️✨
## What's New?

### 🌐 Web & API

- ✅ **Built-in Web UI** – No frontend setup needed; just open your browser and start transcribing.
- ✅ **FastAPI WebSocket Server** – Real-time STT processing with async FFmpeg streaming.
- ✅ **JavaScript Client** – A ready-to-use MediaRecorder implementation you can copy into your own client.
### ⚙️ Core Improvements

- ✅ **Buffering Preview** – Displays unvalidated transcription segments for better feedback.
- ✅ **Multi-User Support** – Handles multiple users simultaneously without conflicts.
- ✅ **MLX Whisper Backend** – Optimized for Apple Silicon for faster local processing.
- ✅ **Enhanced Sentence Segmentation** – Improved buffer trimming for better accuracy across languages.
- ✅ **Extended Logging** – More detailed logs to improve debugging and monitoring.
### 🔥 Advanced Features

- ✅ **Real-Time Diarization (Beta)** – Assigns speaker labels dynamically using Diart.
## Web UI

## Installation
### Clone the Repository

```bash
git clone https://github.com/QuentinFuxa/whisper_streaming_web
cd whisper_streaming_web
```
## How to Launch the Server
### Dependencies

Install the required dependencies:

```bash
# Whisper streaming required dependencies
pip install librosa soundfile

# Whisper streaming web required dependencies
pip install fastapi ffmpeg-python
```

Install at least one Whisper backend among:

```
whisper
whisper-timestamped
faster-whisper (faster backend on NVIDIA GPU)
mlx-whisper (faster backend on Apple Silicon)
```

Optional dependencies:

```
# If you want to use VAC (Voice Activity Controller). Useful for preventing hallucinations
torch

# If you choose sentences as the buffer trimming strategy
mosestokenizer
wtpsplit
tokenize_uk  # If you work with Ukrainian text

# If you want to run the server using uvicorn (recommended)
uvicorn

# If you want to use diarization
diart
```
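If you are unsure which backends are available in your environment, a small check can list what is importable. The module names below are the import names these packages typically expose; this is a convenience sketch, not part of the project:

```python
import importlib.util

def available_backends():
    """Return which optional Whisper backends are importable here."""
    # Import names assumed for each pip package; adjust if yours differ.
    candidates = ["whisper", "whisper_timestamped", "faster_whisper", "mlx_whisper"]
    return [name for name in candidates
            if importlib.util.find_spec(name) is not None]

print(available_backends())
```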
### Run the FastAPI Server

```bash
python whisper_fastapi_online_server.py --host 0.0.0.0 --port 8000
```

- `--host` and `--port` let you specify the server's IP and port.
- `--min-chunk-size` sets the minimum chunk size for audio processing. Make sure this value matches the chunk size selected in the frontend; if they differ, the system will still work but may unnecessarily over-process audio data.
- `--transcription` defaults to True. Set it to False if you want to run only diarization.
- `--diarization` defaults to False. It lets you choose whether to run diarization in parallel.
- For a full list of configurable options, run:

```bash
python whisper_fastapi_online_server.py -h
```

- For other parameters, see the whisper streaming README.
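As a rough sketch of how these flags fit together, an argparse setup might look like the following. The names mirror the options described above, but the defaults shown (e.g. `--min-chunk-size` of 1.0) are assumptions; the real server script defines its own, fuller parser:

```python
import argparse

def str2bool(value):
    # argparse does not parse "False" as a boolean, so convert manually.
    return str(value).lower() in ("true", "1", "yes")

def build_parser():
    # Hypothetical sketch mirroring the CLI flags described above.
    parser = argparse.ArgumentParser(description="Whisper streaming web server")
    parser.add_argument("--host", default="0.0.0.0")
    parser.add_argument("--port", type=int, default=8000)
    parser.add_argument("--min-chunk-size", type=float, default=1.0,
                        help="Should match the chunk size used by the frontend")
    parser.add_argument("--transcription", type=str2bool, default=True)
    parser.add_argument("--diarization", type=str2bool, default=False)
    return parser

args = build_parser().parse_args(["--port", "9000", "--diarization", "True"])
```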
### Open the Provided HTML

- By default, the server root endpoint `/` serves a simple `live_transcription.html` page.
- Open your browser at `http://localhost:8000` (or replace `localhost` and `8000` with whatever you specified).
- The page uses vanilla JavaScript and the WebSocket API to capture your microphone and stream audio to the server in real time.
## How the Live Interface Works

- Once you allow microphone access, the page records small chunks of audio using the MediaRecorder API in webm/opus format.
- These chunks are sent over a WebSocket to the FastAPI endpoint at `/asr`.
- The Python server decodes the `.webm` chunks on the fly using FFmpeg and streams them into the whisper streaming implementation for transcription.
- Partial transcription appears as soon as enough audio is processed. The "unvalidated" text is shown in a lighter or grey color (an "aperçu") to indicate it is still buffered partial output. Once Whisper finalizes a segment, it is displayed in normal text.
- You can watch the transcription update in near real time, ideal for demos, prototyping, or quick debugging.
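Server-side, the on-the-fly decode can be pictured as one FFmpeg process per connection, reading webm from stdin and emitting raw PCM on stdout. A minimal sketch of the command line follows; the exact arguments the server passes may differ, so treat these as illustrative assumptions:

```python
import subprocess

def ffmpeg_decode_cmd(sample_rate=16000):
    # Build an FFmpeg argv that turns incoming webm/opus chunks into
    # raw 16-bit mono PCM at the 16 kHz rate Whisper expects.
    return [
        "ffmpeg", "-loglevel", "quiet",
        "-i", "pipe:0",           # webm chunks arrive on stdin
        "-f", "s16le",            # raw signed 16-bit little-endian PCM
        "-ac", "1",               # downmix to mono
        "-ar", str(sample_rate),  # resample for Whisper
        "pipe:1",                 # PCM leaves on stdout
    ]

# A server would keep one such process per WebSocket connection, e.g.:
# proc = subprocess.Popen(ffmpeg_decode_cmd(),
#                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
```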
## Deploying to a Remote Server

If you want to deploy this setup:

- Host the FastAPI app behind a production-grade HTTP(S) server (such as Uvicorn behind Nginx, or Docker). If you serve over HTTPS, use `wss://` instead of `ws://` in the WebSocket URL.
- The HTML/JS page can be served by the same FastAPI app or by a separate static host.
- Users open the page in Chrome/Firefox (or any modern browser that supports MediaRecorder and WebSocket).

No additional front-end libraries or frameworks are required. The WebSocket logic in `live_transcription.html` is minimal enough to adapt for your own custom UI or to embed in other pages.
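When adapting the page for deployment, deriving the WebSocket scheme from how the page itself is served avoids mixed-content errors, since browsers block insecure `ws://` connections from HTTPS pages. A tiny helper (names are illustrative, not from the project) could be:

```python
def websocket_url(host, port, secure=False, path="/asr"):
    # Use wss:// when the page is served over HTTPS, ws:// otherwise.
    scheme = "wss" if secure else "ws"
    return f"{scheme}://{host}:{port}{path}"

print(websocket_url("localhost", 8000))  # ws://localhost:8000/asr
```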
## Acknowledgments
This project builds upon the foundational work of the Whisper Streaming project. We extend our gratitude to the original authors for their contributions.