RESQ AI – AI-Powered Emergency Intelligence System 🚨 Inspiration
India handles over 200 million emergency calls annually, yet most dispatch systems still rely on manual transcription, fragmented workflows, and rule-based prioritization. During research into emergency response infrastructure, we discovered that 3–5 minutes are often lost in incomplete information gathering and manual triage. In life-critical scenarios, even a 60-second delay can determine survival. We asked a simple question: What if emergency triage could think, extract, and prioritize in real time? That question led to RESQ AI. 🧠What RESQ AI Does RESQ AI is an AI-assisted emergency triage and dispatch coordination platform designed to: Convert live calls to text using Speech-to-Tex Extract critical information using a Local LLM Classify emergency priority using ML Geocode locations instantly Broadcast structured data to dispatch dashboards in real time Mathematically, priority is determined using a hybrid scoring model: PriorityScore=α(MLconfidence)+β(KeywordSeverity)+γ(CallerStressSignals) α,β,γ are weighted factors ML confidence is derived from a HuggingFace transformer model Keyword severity is rule-based fallback logic Stress signals include panic keywords and long pauses 🏗 How We Built It We designed RESQ AI as a modular, privacy-first architecture: Tech Stack: Twilio (Speech-to-Text) Ollama (Local LLM – Neural Chat 7B) HuggingFace Transformers (Priority Classification) OpenStreetMap / Nominatim (Geocoding) Node.js + Express (Backend) SQLite (Audit logs & storage) Socket.IO (Real-time dashboards) Workflow: Caller dials emergency number Audio → Speech-to-Text LLM extracts: emergency, location, name, phone ML classifies urgency Data validated & stored Real-time dashboard updates dispatcher queue The system processes calls in under 2 seconds with ~92% extraction accuracy. This hybrid approach ensures reliability even if ML fails. ⚙️ Challenges We Faced 1️⃣ Information Extraction Accuracy Emergency calls are chaotic. Background noise, panic, and unclear locations required: JSON schema validation Retry mechanisms Fallback rule-based extraction 2️⃣ Balancing ML & Reliability Pure ML can fail in edge cases. We implemented transparent fallback rules to ensure 100% operational continuity. 3️⃣ Real-Time Synchronization Maintaining live queue updates required efficient WebSocket broadcasting and priority-based ordering: QueueOrder=ORDER BY (Priority, Timestamp) 4️⃣ Privacy & Cost Constraints We avoided heavy cloud dependency by: Running LLM locally via Ollama Using SQLite for lightweight storage Keeping per-call cost < $0.05
Built With
- express.js
- huggingface
- llm
- machine-learning
- natural-language-processing
- neural-chat-7b
- node.js
- ollama
- socket.io
- sqlite
- twilio


Log in or sign up for Devpost to join the conversation.