RESQ AI – AI-Powered Emergency Intelligence System --Inspiration India handles over 200 million emergency calls annually, yet most dispatch systems still rely on manual transcription, fragmented workflows, and rule-based prioritization. During research into emergency response infrastructure, we discovered that 3–5 minutes are often lost in incomplete information gathering and manual triage. In life-critical scenarios, even a 60-second delay can determine survival. We asked a simple question: What if emergency triage could think, extract, and prioritize in real time? That question led to RESQ AI. --What RESQ AI Does: RESQ AI is an AI-assisted emergency triage and dispatch coordination platform designed to: Convert live calls to text using Speech-to-Text Extract critical information using a Local LLM Classify emergency priority using ML Geocode locations instantly Broadcast structured data to dispatch dashboards in real time Mathematically, priority is determined using a hybrid scoring model: PriorityScore=α(MLconfidence)+β(KeywordSeverity)+γ(CallerStressSignals) 𝛼,𝛽,𝛾 α,β,γ are weighted factors ML confidence is derived from a HuggingFace transformer model Keyword severity is rule-based fallback logic Stress signals include panic keywords and long pauses This hybrid approach ensures reliability even if ML fails.
--How we build We designed RESQ AI as a modular, privacy-first architecture: Tech Stack: Twilio (Speech-to-Text) Ollama (Local LLM – Neural Chat 7B) HuggingFace Transformers (Priority Classification) OpenStreetMap / Nominatim (Geocoding) Node.js + Express (Backend) SQLite (Audit logs & storage) Socket.IO (Real-time dashboards) Workflow: Caller dials emergency number Audio → Speech-to-Text LLM extracts: emergency, location, name, phone ML classifies urgency Data validated & stored Real-time dashboard updates dispatcher queue The system processes calls in under 2 seconds with ~92% extraction accuracy.
----Challenges We Faced 1) Information Extraction Accuracy Emergency calls are chaotic. Background noise, panic, and unclear locations required: JSON schema validation Retry mechanisms Fallback rule-based extraction
2) Balancing ML & Reliability Pure ML can fail in edge cases. We implemented transparent fallback rules to ensure 100% operational continuity.
3)Real-Time Synchronization Maintaining live queue updates required efficient WebSocket broadcasting and priority-based ordering: QueueOrder=ORDER BY (Priority, Timestamp)
4)Privacy & Cost Constraints We avoided heavy cloud dependency by: Running LLM locally via Ollama Using SQLite for lightweight storage Keeping per-call cost < $0.05
----Impact RESQ AI reduces: Dispatcher workload by ~60% Average response time by up to 40% Per-call operational cost to ~$0.02 It enables: Transparent audit logs Standardized triage Scalable deployment across 112 call centers We are not replacing dispatchers — we are empowering them with intelligence.
Built With
- css
- deepgramapi
- express.js
- html
- huggingface
- javascript
- llm
- machine-learning
- natural-language-processing
- neural-chat-7b
- node.js
- ollama
- openstreetmap
- socket.io
- sqlite
- twilio


Log in or sign up for Devpost to join the conversation.