Inspiration
Medical emergencies are chaotic, especially for people without formal training. We were inspired by the critical gap between the moment someone calls 911 and the moment first responders arrive. That wait averages around 8 minutes — and during those minutes, immediate action can save lives. We wanted to explore how AI could provide calm, accessible, real-time guidance that empowers everyday people to help instead of panic.
What it does
Vive is a mobile-first emergency response platform powered by the Google Gemini Live API. The app uses live camera input, audio, and voice interaction to guide users through medical emergencies in real time while waiting for professional help to arrive.
Vive provides:
- Live step-by-step voice guidance
- Camera-based contextual understanding
- Phone-down hands-free mode
- CPR pacing assistance with audio rhythm cues
- AED location support
- Session summaries with timestamps and exportable logs for EMTs
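The CPR pacing cues come down to simple timing math: the recommended adult compression rate is 100–120 per minute, so the metronome interval is the period of that rate. A minimal sketch (`compressionIntervalMs` is an illustrative helper, not the actual Vive code):

```typescript
// Recommended adult CPR rate is 100–120 compressions per minute;
// we pace at 110 by default. The audio cue fires once per period.
const CPR_RATE_BPM = 110;

function compressionIntervalMs(ratePerMinute: number): number {
  if (ratePerMinute <= 0) throw new Error("rate must be positive");
  // Period in milliseconds = 60,000 ms / rate
  return Math.round(60_000 / ratePerMinute);
}

// compressionIntervalMs(110) → 545 ms between rhythm cues
```

In the app, a repeating timer at this interval drives the audible beat the user compresses along with.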
How we built it
We built Vive as a mobile-first progressive web application focused on accessibility and speed during high-stress situations.
Our stack included:
- Google Gemini Live API for real-time multimodal interaction
- Live camera and microphone streaming
- Voice synthesis and speech recognition
- A responsive mobile UI optimized for emergency readability
- State management for switching between active camera mode and phone-down mode
We designed the interface to minimize cognitive overload by using large, high-contrast instructions, simple controls, and voice-first interaction.
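The switch between active camera mode and phone-down mode can be sketched as a small state machine. Names here are illustrative, not our actual implementation:

```typescript
// The two UI modes and the events that move between them.
type Mode = "camera" | "phone-down";
type AppEvent = "PLACE_PHONE_DOWN" | "PICK_PHONE_UP";

function nextMode(current: Mode, event: AppEvent): Mode {
  switch (current) {
    case "camera":
      // User lays the phone beside the patient: switch to
      // hands-free, voice-only guidance.
      return event === "PLACE_PHONE_DOWN" ? "phone-down" : current;
    case "phone-down":
      // User picks the phone back up: resume camera-based context.
      return event === "PICK_PHONE_UP" ? "camera" : current;
  }
}
```

Keeping the transition logic this explicit made it easier to mute or resume the camera stream and adjust the UI whenever the mode changed.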
Challenges we ran into
One of our biggest challenges was designing an experience for users under extreme stress. We had to constantly simplify the interface and reduce unnecessary interactions while still providing enough information to be useful.
Another challenge was handling real-time multimodal AI interactions. Managing live camera context, audio input, voice responses, and session continuity simultaneously required careful coordination between frontend state management and Gemini’s live capabilities.
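One way to keep those parallel inputs coherent is to serialize everything into a single timestamped timeline, which later doubles as the exportable summary for EMTs. A sketch under that assumption (the types and `SessionLog` class are hypothetical, not the actual Vive code):

```typescript
// Serialize multimodal events (camera context, user speech, model
// responses) into one timestamped timeline per session.
type EventSource = "camera" | "user-audio" | "assistant";

interface SessionEvent {
  at: number;          // ms since session start
  source: EventSource; // which stream produced the event
  note: string;        // short description of what happened
}

class SessionLog {
  private events: SessionEvent[] = [];
  private readonly startedAt = Date.now();

  record(source: EventSource, note: string): void {
    this.events.push({ at: Date.now() - this.startedAt, source, note });
  }

  // Plain-text export, one line per event, oldest first.
  export(): string {
    return this.events
      .map((e) => `[+${(e.at / 1000).toFixed(1)}s] ${e.source}: ${e.note}`)
      .join("\n");
  }
}
```

Because every stream writes to the same ordered log, session continuity and the EMT handoff summary fall out of one data structure instead of three.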
We also had to think critically about safety, liability, and how to clearly position the app as guidance — not a replacement for emergency responders or medical professionals.
Accomplishments that we're proud of
We’re proud that Vive transforms advanced AI into something immediately human and practical. Instead of building a generic chatbot, we created a system focused on real-world emergency intervention.
Some accomplishments we’re especially proud of include:
- Creating a seamless real-time guidance experience
- Implementing phone-down CPR support
- Designing a calm, accessible emergency UI
- Integrating multimodal AI interaction into a high-pressure use case
- Building a concept that could genuinely help save lives
What we learned
Through building Vive, we gained experience working with multimodal AI systems and learned how difficult real-time context handling can be when combining video, audio, and conversational interaction.
Most importantly, we learned how AI can be used not just for productivity, but for meaningful human-centered assistance.
What's next for Vive
In the future, we want to expand Vive beyond our initial emergency categories and improve the intelligence and reliability of the system.
Future goals include:
- Additional emergency scenarios such as severe bleeding, allergic reactions, and overdoses
- Better AED and hospital integration
- Multilingual support
- Offline fallback guidance
- Wearable and smartwatch integration
- Live Activity / Picture-in-Picture support during emergencies
- Collaboration with medical professionals for validation and safety testing
Our long-term vision is for Vive to become a trusted emergency companion that helps bridge the gap between panic and professional care.