The Problem

When someone starts choking, their chance of survival drops dramatically with every minute that passes without help. The biggest problem isn't a lack of trained people; it's the "panic gap." Under extreme stress, even skilled individuals freeze, forget their training, and struggle to act. Whether it's a coworker in an office or a student on a campus, the critical window for life-saving help is often lost to confusion and fear. The knowledge exists, but the calm, real-time guidance to execute it under pressure does not.

Our Solution

AURA is an AI co-pilot for medical emergencies, built for the Meta Quest platform. It uses augmented reality and multi-modal AI to turn any bystander into a confident first responder.
Our system provides hands-free, step-by-step guidance:

  • It Sees: Computer vision analyzes the scene to assess the emergency.
  • It Listens: Natural language processing understands your voice commands, even when panicked.
  • It Guides: AR shows you exactly what to do, like projecting where to position your hands for abdominal thrusts or showing an arrow for the correct motion.
  • It Communicates: The AI can automatically call 911 and deliver a clear, structured report to dispatchers, so your hands never leave the person in need.

AURA doesn't just give information; it manages the entire response, adapting its guidance to the situation and the user's actions.
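The see-listen-guide-communicate loop above can be sketched as a simple decision step that runs on every scene assessment. This is an illustrative Python sketch, not AURA's actual code (the real app runs in Unity); every class, method, and guidance string here is hypothetical:

```python
from dataclasses import dataclass

# Illustrative sketch of AURA's sense -> decide -> act loop.
# All names and thresholds here are hypothetical, not the real API.

@dataclass
class SceneAssessment:
    emergency: str     # e.g. "choking" or "none" (from the vision model)
    confidence: float  # model confidence in [0, 1]

@dataclass
class Dialer:
    call_placed: bool = False

    def call_911(self, report: SceneAssessment) -> None:
        # In the real system this would deliver a structured report
        # to dispatchers; here we only record that a call was placed.
        self.call_placed = True
        self.last_report = report

def respond(scene: SceneAssessment, dialer: Dialer) -> list[str]:
    """Decide which guidance actions to issue for one assessed frame."""
    steps: list[str] = []
    if scene.emergency == "choking" and scene.confidence >= 0.8:
        steps.append("overlay:hand_position_abdominal_thrusts")  # "It Guides"
        steps.append("voice:calm_step_by_step_coaching")
        if not dialer.call_placed:
            dialer.call_911(scene)                               # "It Communicates"
            steps.append("status:911_called")
    return steps

# Example: a high-confidence choking detection triggers guidance
# and places the 911 call exactly once.
dialer = Dialer()
steps = respond(SceneAssessment("choking", 0.93), dialer)
```

The point of the structure is that the user never has to operate anything: detection drives guidance, and the call to dispatch happens as a side effect of the same loop.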

How We Built It

We built AURA in Unity (developed in VS Code), integrating a multi-modal AI stack:

  • The Platform: We used the Unity engine to develop the core application for the Meta Quest 3 headset.
  • The Eyes (Vision): We built and trained a custom computer vision model to reliably recognize critical postures, such as a person clutching their throat in the universal choking sign.
  • The Brain (Language): We integrated Google's Gemini API for natural language understanding, allowing users to interact with AURA using their voice.
  • The Voice (Audio): We used a premium text-to-speech engine to generate the calm, synthetic voice that provides audio coaching and communicates with 911.
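To give a feel for the kind of signal the vision component looks for, a keypoint-based heuristic can flag the universal choking sign (both hands clutched at the throat). The sketch below is a simplified stand-in using made-up pose-estimator keypoints, not our trained model or its real features:

```python
import math

# Simplified stand-in for the trained vision model: flags the universal
# choking sign when both wrists are close to the neck. Keypoints are
# (x, y) image coordinates from any pose estimator; values are illustrative.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def looks_like_choking(keypoints, threshold=0.15):
    """True if both wrists are within `threshold` of the neck,
    measured as a fraction of torso (neck-to-hip) height so the
    check is invariant to how far the person is from the camera."""
    neck = keypoints["neck"]
    torso = dist(neck, keypoints["hip"]) or 1.0  # avoid divide-by-zero
    return all(
        dist(keypoints[w], neck) / torso < threshold
        for w in ("left_wrist", "right_wrist")
    )

# Hands at the throat -> flagged; hands at the sides -> not flagged.
clutching = {"neck": (0.5, 0.3), "hip": (0.5, 0.7),
             "left_wrist": (0.48, 0.32), "right_wrist": (0.52, 0.31)}
relaxed = {"neck": (0.5, 0.3), "hip": (0.5, 0.7),
           "left_wrist": (0.3, 0.6), "right_wrist": (0.7, 0.6)}
```

A learned model replaces this heuristic in practice, but normalizing distances by body scale is the same trick that keeps detection stable as the bystander moves around the scene.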

Challenges We Ran Into

  • Getting the computer vision model to be reliable across different real-world environments (lighting, clothing, obstacles) was a major hurdle.
  • Creating a seamless loop between voice input, AI processing, and AR visual output was critical. Any delay breaks the user's trust.
  • Designing an interface that is intuitive enough to be used under extreme stress required many iterations. We had to strip away all complexity.
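Because any perceptible delay in the voice-to-AI-to-AR loop breaks trust, per-stage timing was the first thing we needed to see. A minimal sketch of that instrumentation, with hypothetical stage names, placeholder workloads, and an assumed interaction budget:

```python
import time
from contextlib import contextmanager

# Illustrative per-stage latency instrumentation for the
# voice -> AI -> AR loop. Stage names, sleeps, and the 250 ms
# budget are all assumptions for the sketch.

timings: dict[str, float] = {}

@contextmanager
def stage(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

with stage("speech_to_text"):
    time.sleep(0.01)    # placeholder for real transcription work
with stage("llm_reasoning"):
    time.sleep(0.02)    # placeholder for the language-model call
with stage("ar_render"):
    time.sleep(0.005)   # placeholder for updating the AR overlay

total = sum(timings.values())
over_budget = total > 0.250  # flag the loop if it blows the budget
```

Breaking the loop into named stages makes it obvious which hop (usually the model call) is eating the budget, which is where caching and shorter prompts pay off.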

What We Learned

  • The human factor is as important as the tech. The interface must be designed for panic, not just for usability.
  • You don't need a giant AI model for effective, life-saving guidance; a well-architected, specialized system can be more effective than raw scale.
  • Integrating with the existing emergency response ecosystem (like 911 protocols) is essential for real-world impact.

Accomplishments That We're Proud Of

  • We built a functioning prototype that can guide a panicked user through a complex, multi-step procedure such as abdominal thrusts (the Heimlich maneuver) for choking.
  • Our core innovation is a self-correcting system: it sees what's happening, makes a decision, and takes action in real time.

What's Next for AURA

  • Expanded Protocols: Teaching AURA to guide users through more emergencies, such as severe bleeding and cardiac arrest.
  • Real-World Pilots: Deploying AURA with pilot partners in corporate offices, universities, and gyms to test and refine the system in real environments.
  • Platform Evolution: Exploring integrations with public safety infrastructure to make AURA a standard layer of community health and safety.

Additional Info

Our main repo ended up in a broken state. Here's the repo where we salvaged our working code: https://github.com/tejasri106/working-choking-detector

Built With

Unity · Meta Quest 3 · Google Gemini API
