🌍 Inspiration
When disaster strikes, time is critical, yet many victims face language barriers, poor connectivity, and a lack of coordination between emergency services. We wanted to build a system that bridges these gaps with voice, AI, and real-time reasoning. Inspired by the growing power of autonomous agents and large language models, we built SentraAI, a voice-activated disaster response system designed to route help faster, smarter, and more inclusively.
🤖 What it does
SentraAI takes a spoken emergency report in any supported language and instantly:
- Transcribes the voice input (via ElevenLabs)
- Translates it to English if needed (via DeepL)
- Uses Perplexity Sonar Pro to determine the correct response agent (Fire, Police, Medical, NGO)
- Dispatches structured data to the right agent handler
- Retrieves contextual info (like nearby hospitals via Apify)
- Displays the incident on a live map and shows the agent's response
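The steps above can be sketched as a single async pipeline, with each external service (ElevenLabs, DeepL, Perplexity) hidden behind a pluggable client so the flow can be exercised with stubs. All function and field names here are illustrative, not our exact code:

```javascript
// Hypothetical end-to-end flow: transcribe -> translate -> classify -> dispatch.
// Each client wraps one external API; stubs are used below for demonstration.
async function handleEmergencyReport(audio, clients) {
  const transcript = await clients.transcribe(audio);           // ElevenLabs
  const english = await clients.translateToEnglish(transcript); // DeepL
  const agentType = await clients.classifyAgent(english);       // Perplexity Sonar Pro
  return clients.dispatch({ agentType, report: english });      // routing engine
}

// Stub clients standing in for the real APIs:
const stubClients = {
  transcribe: async () => "Incendie près de la gare",
  translateToEnglish: async () => "Fire near the train station",
  classifyAgent: async () => "Fire",
  dispatch: async (incident) => ({ status: "dispatched", ...incident }),
};

handleEmergencyReport(null, stubClients).then((result) => {
  console.log(result.agentType); // "Fire"
});
```

Keeping the orchestration separate from the API clients made it easy to swap services and to test the routing logic without burning API quota.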
🛠️ How we built it
- Voice Input: ElevenLabs API for high-accuracy transcription
- Translation: DeepL API for multilingual input handling
- AI Reasoning: Perplexity Sonar Pro to determine the appropriate agent
- Routing Engine: A central dispatcher that formats data into MCP and routes it to modular agent handlers
- Location Intelligence: Apify API to fetch real-world context (e.g., nearest emergency facilities)
- Frontend: React + Tailwind CSS with Google Maps integration
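The routing engine described above can be sketched as a dispatcher that wraps each incident in a simplified MCP-style envelope and looks up the matching agent handler. The envelope fields and handler responses below are assumptions for illustration, not the full MCP schema:

```javascript
// Simplified dispatcher sketch: wraps an incident in an MCP-style
// envelope and routes it to a registered agent handler by type.
const agentHandlers = {
  Fire: (msg) => `Fire unit dispatched to ${msg.payload.location}`,
  Police: (msg) => `Police unit dispatched to ${msg.payload.location}`,
  Medical: (msg) => `Ambulance dispatched to ${msg.payload.location}`,
  NGO: (msg) => `NGO team notified for ${msg.payload.location}`,
};

function toEnvelope(agentType, incident) {
  // Illustrative envelope shape, not the actual MCP specification
  return {
    version: "1.0",
    target: agentType,
    timestamp: new Date().toISOString(),
    payload: incident,
  };
}

function dispatch(agentType, incident) {
  const handler = agentHandlers[agentType];
  if (!handler) throw new Error(`No handler registered for ${agentType}`);
  return handler(toEnvelope(agentType, incident));
}

console.log(dispatch("Medical", { location: "Main St 42", report: "injured cyclist" }));
// "Ambulance dispatched to Main St 42"
```

Registering handlers in a plain lookup table keeps the agent network modular: adding a new responder type is one entry, with no changes to the dispatcher itself.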
⚠️ Challenges we ran into
- Building a modular agent system that could interact via MCP format
- Extracting structured location data from unstructured voice input
- Orchestrating multiple asynchronous APIs in real time
- Creating a usable voice interface with minimal friction
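One of the challenges above, pulling structured location data out of free-form voice reports, came down to prompting the LLM for strict JSON and validating whatever came back before routing. The prompt wording and field names below are illustrative, not our production prompt:

```javascript
// Illustrative prompt asking the reasoning LLM for strict JSON output.
const EXTRACTION_PROMPT = `You are an emergency dispatcher. From the report below,
return ONLY a JSON object: {"agent": "Fire"|"Police"|"Medical"|"NGO", "location": string}.
Report: `;

// Validate the model's reply before dispatching; reject anything
// malformed rather than routing an emergency on a guess.
function parseAgentResponse(raw) {
  const VALID_AGENTS = ["Fire", "Police", "Medical", "NGO"];
  let data;
  try {
    data = JSON.parse(raw);
  } catch {
    throw new Error("LLM response was not valid JSON");
  }
  if (!VALID_AGENTS.includes(data.agent) || typeof data.location !== "string") {
    throw new Error("LLM response missing required fields");
  }
  return data;
}

console.log(parseAgentResponse('{"agent":"Fire","location":"5th Ave"}'));
// { agent: 'Fire', location: '5th Ave' }
```

Failing loudly on malformed output turned out to be safer than best-effort parsing: a rejected response can be retried, while a misrouted incident cannot.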
🏆 Accomplishments that we're proud of
- Built a fully working multi-agent disaster response system in under 6 hours
- Integrated 3 sponsor tools seamlessly (DeepL, Perplexity, and Apify)
- Enabled voice-based emergency reporting with real map visualization
- Designed a dispatcher architecture that mimics real-world emergency communication systems
📚 What we learned
- How to design and implement agent-to-agent AI workflows using MCP
- Best practices for real-time multimodal input processing
- How to combine AI reasoning, translation, voice, and external APIs into a production-ready pipeline
- The importance of clear, structured prompts when working with reasoning LLMs
🚀 What's next for SentraAI
- Deploying the system on mobile for use in low-connectivity environments
- Expanding the agent network (e.g., drones, logistics, shelters)
- Enhancing agent-to-agent memory and conversation history
- Partnering with emergency response organizations to pilot the platform in vulnerable regions
Built With
- apify
- deepl
- elevenlabs
- express.js
- node.js
- perplexity-sonar-pro