giyos1212 committed · verified · commit a2c2cbb · parent: 98b6d67

Update README.md

Files changed (1): README.md (+125 −115)
---
title: Help.me AI Operator
emoji: 🚑
colorFrom: blue
colorTo: purple
sdk: docker
app_port: 7860
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
# Help.me – AI-Powered Emergency Medical Assistance System

Help.me is an AI-based platform that automates and accelerates communication between patients who need urgent medical care and dispatch operators. The system receives voice messages from patients, analyzes their condition using artificial intelligence, and immediately forwards the information to a dispatcher dashboard for appropriate action.

## 🚀 Key Features

### For Patients (Voice-First Interface)

- **Voice Communication:** Patients can report their symptoms simply by speaking.
- **Multilingual System:** The AI can communicate with patients in Uzbek, Russian, and English.
- **Smart Recommendations:** If a patient's condition is assessed as "Green" (non-urgent), the system recommends public polyclinics or private clinics based on their symptoms.
- **Real-Time Response:** The AI analyzes the request instantly and replies with a voice response.
- **Simplified Interface:** The interface is deliberately minimal and voice-focused so it does not distract the patient in stressful situations.

### For Dispatchers (Monitoring Dashboard)

- **Real-Time Monitoring:** All incoming cases are displayed live on the dashboard.
- **Risk Triage:** The AI categorizes each case as "Red" (emergency), "Yellow" (uncertain), or "Green" (clinic referral).
- **Interactive Map:** The locations of all ambulance brigades and clinics are tracked on a map in real time.
- **Statistics & Analytics:** Statistical data on cases and brigades is visualized in charts.

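The Red/Yellow/Green triage above ultimately drives what the backend does with each case. A minimal sketch of that routing step (a hypothetical illustration only — the actual risk level is assigned by the LLM, and the names below are not from the repository):

```python
# Hypothetical dispatch logic around an AI-assigned risk level.
RISK_ACTIONS = {
    "red": "dispatch_ambulance",         # emergency: send the nearest brigade
    "yellow": "escalate_to_dispatcher",  # uncertain: a human decides
    "green": "recommend_clinic",         # non-urgent: suggest a polyclinic
}

def route_case(risk_level: str) -> str:
    """Map an AI-assigned risk level to the dashboard action."""
    try:
        return RISK_ACTIONS[risk_level.lower()]
    except KeyError:
        # Unknown labels are treated as uncertain and escalated to a human.
        return "escalate_to_dispatcher"
```

Treating any unrecognized label as "Yellow" keeps the failure mode safe: a human dispatcher reviews the case instead of the system guessing.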
## 🧠 AI Models Used

The system relies on three core AI models:

### Speech-to-Text (STT)

- **Model:** A custom model fine-tuned on top of OpenAI Whisper (medium).
- **Dataset:** The model was trained on several datasets tailored to the conditions of Uzbekistan: audio recordings in the Tashkent dialect, the standard literary language, and the Khorezm dialect. This gives high accuracy on speech from patients across different regions of the country.

### Logic and Response Generation (LLM)

- **Model:** Google Gemini Flash.
- **Task:** Analyze the transcribed patient complaint, determine the severity of the situation (risk level), and formulate the response text. The model is guided by a strict set of rules and action sequences provided via a SYSTEM_INSTRUCTION.

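Because the LLM both triages the case and writes the reply, the backend has to split its output into those two pieces. A minimal sketch of parsing such a reply, assuming a hypothetical `RISK:`/`RESPONSE:` line format (the exact format enforced by the project's SYSTEM_INSTRUCTION is not shown in this README):

```python
def parse_llm_reply(text: str) -> dict:
    """Split a structured LLM reply into a risk level and response text.

    Assumes a hypothetical two-field format such as:
        RISK: RED
        RESPONSE: An ambulance is on its way...
    """
    result = {"risk": "yellow", "response": ""}  # default: escalate to a human
    for line in text.splitlines():
        key, _, value = line.partition(":")
        if key.strip().upper() == "RISK":
            result["risk"] = value.strip().lower()
        elif key.strip().upper() == "RESPONSE":
            result["response"] = value.strip()
    return result
```

Defaulting to "yellow" when the reply cannot be parsed mirrors the triage rule above: anything uncertain goes to a human dispatcher.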
### Text-to-Speech (TTS)

- **Model:** Facebook MMS (Massively Multilingual Speech).
- **Task:** Synthesize the AI-generated response text into a natural-sounding human voice. The system uses separate TTS models for Uzbek and English.

## 🛠️ Technology Stack

- **Backend:** FastAPI (Python)
- **Real-time communication:** WebSockets
- **Database:** JSON-based flat files (for the MVP)
- **Frontend:** HTML, CSS, vanilla JavaScript
- **Map:** Leaflet.js · **Charts:** Chart.js

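The "incoming cases appear live" behaviour amounts to broadcasting each new case to every connected dashboard socket. A framework-agnostic sketch of that hub using only asyncio queues (hypothetical names — the project itself uses FastAPI's WebSocket support, which is not shown here):

```python
import asyncio
import json

class CaseHub:
    """Fan out new cases to every connected dashboard client."""

    def __init__(self) -> None:
        self._clients: set[asyncio.Queue] = set()

    def connect(self) -> asyncio.Queue:
        """Register a dashboard client; each gets its own message queue."""
        q: asyncio.Queue = asyncio.Queue()
        self._clients.add(q)
        return q

    def disconnect(self, q: asyncio.Queue) -> None:
        """Remove a client when its socket closes."""
        self._clients.discard(q)

    async def publish(self, case: dict) -> None:
        """Push a new case (serialized as JSON) to every connected client."""
        message = json.dumps(case)
        for q in self._clients:
            await q.put(message)
```

In a FastAPI WebSocket endpoint, each connection would call `connect()`, forward messages from its queue to the socket, and `disconnect()` on close.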
## ⚙️ Getting Started

### 1. Prerequisites

- Python 3.9+
- FFmpeg (must be installed on the system to process audio files)
- git

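FFmpeg is required because incoming browser audio (typically WebM/Opus) must be converted to 16 kHz mono WAV before transcription. A sketch of building that conversion command (a hypothetical helper — the exact flags the project uses are not shown in this README):

```python
def ffmpeg_to_wav_cmd(src: str, dst: str) -> list[str]:
    """Build an ffmpeg command converting any input to 16 kHz mono WAV,
    the format Whisper-style STT models expect."""
    return [
        "ffmpeg",
        "-y",            # overwrite the output file if it exists
        "-i", src,       # input: e.g. a recorded voice message
        "-ar", "16000",  # resample to 16 kHz
        "-ac", "1",      # downmix to mono
        dst,
    ]

# The command would be executed with, e.g.:
# subprocess.run(ffmpeg_to_wav_cmd("msg.webm", "msg.wav"), check=True)
```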
### 2. Set Up a Virtual Environment and Install Dependencies

```bash
# Create a virtual environment
python -m venv venv

# Activate it (Windows)
venv\Scripts\activate

# Activate it (macOS/Linux)
source venv/bin/activate

# Install the required libraries
pip install -r requirements.txt
```

### 3. ‼️ IMPORTANT: Download the AI Models

This repository DOES NOT include the large AI models in the local_models directory. To run the system, you must download them separately and place them in the project folder.

Note for the judges: due to their large size (several GB), it was not feasible to upload the models to GitHub; the repository contains only the project's source code.

### 4. Run the Application

```bash
uvicorn app.main:app --reload
```