Inspiration

Every second counts. Emergencies don't wait for professionals to arrive. In schools, clinics, and shelters alike, the first person on the scene is almost never medically trained. In the critical moments before responders arrive, whether someone knows what to do can be the difference between life and death, or between permanent damage and full recovery. We can turn a bystander into someone capable of saving a life.

We were inspired by the idea that expert guidance should be instantly accessible to anyone, at any time, regardless of training. CRIS.ai was created to bridge the gap between crisis and professional care by giving individuals real-time AI guidance that empowers them to act with confidence when it matters most.

What it does

CRIS.ai is a spatial AI assistant designed to guide everyday people through emergency situations in real time.

When a crisis occurs, the user can activate CRIS.ai on any device. The system listens and visually scans the environment in real time as the situation unfolds, using live audio and visual input to understand what is happening. It then delivers calm, step-by-step spoken instructions while overlaying simple visual guidance directly in the user's field of view.

CRIS.ai continuously adapts: it asks clarifying questions, adjusts urgency, and updates instructions as conditions change. Whether the situation involves a medical emergency, mental health crisis, or safety incident, the system provides structured, live support during the most critical moments before professional help arrives.
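The adapt-and-instruct loop described above can be sketched roughly as follows. This is an illustrative Python sketch, not the actual CRIS.ai implementation; the class, keyword triggers, and urgency scale are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an adapt-and-instruct loop: each new audio/visual
# observation updates the crisis state, which drives the next spoken steps.
@dataclass
class CrisisState:
    situation: str
    urgency: int = 1                      # illustrative scale: 1 (low) .. 5 (critical)
    steps: list = field(default_factory=list)

def update_state(state: CrisisState, observation: str) -> CrisisState:
    """Adjust urgency and next instructions based on a new observation."""
    if "not breathing" in observation:
        state.urgency = 5
        state.steps = ["Call emergency services", "Begin chest compressions"]
    elif "bleeding" in observation:
        state.urgency = max(state.urgency, 3)
        state.steps = ["Apply firm pressure to the wound"]
    else:
        # No escalation detected: keep gathering information.
        state.steps = ["Keep the person calm", "Describe what you see"]
    return state
```

In the real system the branching would come from the LLM's reasoning rather than keyword checks; the sketch only shows the shape of the loop (observe, re-assess urgency, re-issue steps).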

How we built it

CRIS.ai combines a native Apple app with a real-time AI backend to provide emergency guidance through voice and visual interactions.

The frontend was built in Swift using SwiftUI to create a clean interface across iOS, macOS, and visionOS. We used AVFoundation to capture live audio and camera input, Apple's Vision framework to detect body movement in real time, and RealityKit to display immersive spatial guidance.

The backend runs on a Python FastAPI server, which handles all of the AI processing, speech recognition, and report generation. For AI reasoning, we used Llama 4 Scout through the Groq API to analyze each situation and generate step-by-step guidance. Speech-to-text is powered by ElevenLabs Scribe with a Whisper fallback, and responses are spoken back to the user using ElevenLabs text-to-speech.
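The primary/fallback speech-to-text routing mentioned above can be sketched like this. The two `transcribe_with_*` functions are placeholders standing in for the real ElevenLabs Scribe and Whisper calls, which are not shown; only the fallback pattern itself is the point.

```python
# Sketch of primary/fallback speech-to-text routing (assumed structure).
# The placeholder functions simulate the real ElevenLabs / Whisper calls.

def transcribe_with_scribe(audio: bytes) -> str:
    # Placeholder: simulate the primary service being unavailable.
    raise ConnectionError("simulated ElevenLabs outage")

def transcribe_with_whisper(audio: bytes) -> str:
    # Placeholder for the local/hosted Whisper fallback.
    return "[whisper transcript]"

def transcribe(audio: bytes) -> str:
    """Try ElevenLabs Scribe first; fall back to Whisper on any failure."""
    try:
        return transcribe_with_scribe(audio)
    except Exception:
        return transcribe_with_whisper(audio)
```

Keeping the fallback behind a single `transcribe()` entry point means the rest of the pipeline never needs to know which engine produced the text.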

Challenges we ran into

One of the main challenges we faced was integrating multiple development environments and platforms, including Xcode, visionOS, and macOS, which required significant troubleshooting and coordination between the frontend application and backend AI services.

Because CRIS.ai relies on several AI tools, integrating speech input, AI reasoning, and generated responses was initially inconsistent. We encountered mismatched inputs and outputs, API key limits during testing, and situations where visual overlays were placed incorrectly on the body because of pose detection errors. At times, the system also struggled to adapt appropriately to changing scenarios.
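One common mitigation for misplaced overlays from noisy pose detection is to reject low-confidence keypoints and smooth the rest over time. The sketch below shows that idea in Python; the threshold and smoothing values are illustrative assumptions, not what CRIS.ai actually ships.

```python
def smooth_keypoint(prev, new, confidence, alpha=0.3, min_conf=0.5):
    """Stabilize an overlay anchor point against pose-detection jitter.

    prev / new are (x, y) tuples; confidence is the detector's score for `new`.
    Low-confidence detections are ignored, and trusted ones are blended with
    the previous position via an exponential moving average.
    """
    if confidence < min_conf:
        return prev                        # ignore unreliable detection; keep last anchor
    if prev is None:
        return new                         # first trusted detection
    return ((1 - alpha) * prev[0] + alpha * new[0],
            (1 - alpha) * prev[1] + alpha * new[1])
```

A small `alpha` trades responsiveness for stability, which is usually the right trade when an overlay must stay pinned to a body part.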

Through continual testing, prompt refinement, and system tuning, we improved reliability and synchronization across components. These challenges ultimately strengthened the architecture and taught us how to build AI systems that function effectively in live, real-time environments.

Accomplishments that we're proud of

We're proud to have built CRIS.ai, an application that demonstrates how advanced AI technologies can move beyond passive chat interfaces and become active real-world assistants. We built a fully functional prototype that integrates spatial computing, conversational AI, computer vision, and automated documentation into a single system. Within a short development period, we created an application capable of guiding users through real-life emergencies while simultaneously preparing professional-grade reports for first responders.

What we learned

Through building CRIS.ai, we learned how important usability and communication design are when bringing AI into high-stakes environments such as emergencies. Technical capability alone is not enough in a crisis; systems must deliver clear, calm, and intuitive guidance. We also learned the importance of balancing real-time performance with accuracy: integrating voice interaction, visual overlays, and AI reasoning required careful coordination to ensure the system responded quickly and reliably.

Most importantly, building CRIS.ai showed us how AI can empower people rather than replace them, acting as a supporting assistant that helps individuals make confident decisions during critical moments.

What's next for CRIS.ai

CRIS.ai is currently a proof of concept, but our next goal is to expand it into a fully immersive 3D spatial experience. We plan to develop the platform further on AR devices like the Apple Vision Pro or Ray-Ban Meta glasses, allowing emergency guidance to appear directly within a user's physical environment through spatial computing.

While current hardware remains expensive and not widely accessible, exploring these platforms helps demonstrate how CRIS.ai could function as wearable technology becomes more affordable and mainstream. This next stage would focus on improving interaction, realism, and hands-free guidance, moving us closer to a future where real-time AI assistance is seamlessly integrated into everyday environments.
