---
title: Speech2Braille
emoji: π
colorFrom: yellow
colorTo: red
sdk: gradio
sdk_version: 3.11.0
app_file: app.py
pinned: false
license: mit
---
# Speech to Braille translator

## Overview
This project can be found [here](https://github.com/Azuremis/speech_to_braille_translator) on my GitHub.

The app currently translates English speech into Braille. The speech-to-text transcription is done using OpenAI's [Whisper](https://openai.com/blog/whisper/) [model](https://github.com/openai/whisper), and the text-to-Braille translation is done using the [PyBraille](https://pypi.org/project/pybraille/) library.
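At its core the pipeline is just those two library calls: Whisper transcribes the audio, and pybraille converts the transcript. The snippet below is a minimal sketch of that flow, not the actual `app.py`; the model size (`"base"`), the function name, and the use of pybraille's `convertText` helper are assumptions.

```python
import whisper                      # OpenAI speech-to-text
from pybraille import convertText   # text-to-Braille conversion

# Load a Whisper checkpoint (the "base" size is an assumption)
model = whisper.load_model("base")

def speech_to_braille(audio_path: str) -> tuple[str, str]:
    """Transcribe an audio file to English text, then convert it to Braille."""
    transcript = model.transcribe(audio_path)["text"]
    braille = convertText(transcript)
    return transcript, braille

if __name__ == "__main__":
    text, braille = speech_to_braille("sample.wav")
    print(text)
    print(braille)
```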
## How to use
1. Go to the [demo](https://huggingface.co/spaces/Azuremis/Speech2Braille) page.
2. Click the record button and accept the browser's request for microphone access.
3. Speak a few sentences, then click the stop button when done.
4. Click the submit button to see both the English transcript and the Braille translation.
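Behind that record/submit flow is a Gradio interface that wires a microphone input to the pipeline above. The sketch below shows how such an interface might look with Gradio 3.x; the exact labels and layout in `app.py` may differ, and the handler body repeats the assumptions from the previous snippet.

```python
import gradio as gr
import whisper
from pybraille import convertText

model = whisper.load_model("base")  # assumed model size

def transcribe_and_translate(audio_path):
    # Gradio passes the recorded clip as a file path when type="filepath"
    transcript = model.transcribe(audio_path)["text"]
    return transcript, convertText(transcript)

# Microphone input feeding two text outputs: the English transcript and its Braille translation
demo = gr.Interface(
    fn=transcribe_and_translate,
    inputs=gr.Audio(source="microphone", type="filepath"),
    outputs=[
        gr.Textbox(label="English transcript"),
        gr.Textbox(label="Braille translation"),
    ],
    title="Speech2Braille",
)

if __name__ == "__main__":
    demo.launch()
```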
## Disclaimer

This is a portfolio project, so I don't advise using it for any serious purpose!
## Credits

Thank you to [Dave](https://twitter.com/azezezaaa); I used his [RememberThis](https://github.com/azezezaaa/rememberthis_OpenAIWhisperHackathon) code to help me grok how to use OpenAI's Whisper. I also found LabLab's Whisper [guides](https://lablab.ai/tech/whisper) really helpful.
You can check out the Spaces configuration reference [here](https://huggingface.co/docs/hub/spaces-config-reference).