Inspiration
The heart of Geni comes from a place of love and worry. Many of us have grandparents who cherish their independence but shouldn't be left alone. We’ve lived through the anxiety of knowing a loved one has fallen in the past or fearing they’ve skipped the very medication that keeps them stable. We wanted to build a "Guardian Intelligence" that acts like a caring family member who is always in the room. Instead of invasive surveillance, Geni uses advanced AI to understand movement and safety, providing a 24/7 safety net that protects our elders' health while respecting the privacy of their home.
What it does
Geni is an intelligent sensing layer that functions as both an emergency responder and a proactive healthcare companion:

- Life-Saving Detection: Using YOLOv8 and the Gemini API, Geni identifies dangerous falls or faints. If a user goes down, the system doesn't just watch; it acts, triggering an ElevenLabs AI voice check-in to ask if they are okay.
- Emergency Escalation: If the user is unresponsive, Geni immediately dispatches a Twilio WhatsApp alert to caretakers.
- Proactive Healthcare: Geni uses Gemini's vision capabilities to actually "see" and understand the environment. It verbally reminds users to take their medicine and analyzes the scene to confirm they are handling the correct pill bottles, notifying family instantly if a dose is missed.
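The escalation flow above (detect, check in, then alert only if unresponsive) can be sketched as a small decision function. This is a minimal illustration, not Geni's actual code; the function and state names are hypothetical:

```python
from enum import Enum, auto

class Action(Enum):
    MONITOR = auto()           # keep watching the camera feed
    VOICE_CHECK_IN = auto()    # play the ElevenLabs "are you okay?" prompt
    ALERT_CARETAKERS = auto()  # send the Twilio WhatsApp alert

def next_action(fall_detected: bool, check_in_sent: bool, user_responded: bool) -> Action:
    """Decide the next step in the escalation pipeline (hypothetical helper).

    fall_detected  -- the pose model flagged a fall or faint
    check_in_sent  -- the AI voice check-in has already been played
    user_responded -- the user answered the check-in
    """
    if not fall_detected:
        return Action.MONITOR          # nothing unusual, keep watching
    if not check_in_sent:
        return Action.VOICE_CHECK_IN   # first ask whether they are okay
    if user_responded:
        return Action.MONITOR          # false alarm, stand down
    return Action.ALERT_CARETAKERS     # unresponsive: escalate via WhatsApp
```

Keeping this as a pure function makes the "watch, ask, then escalate" policy easy to unit-test separately from the camera and messaging services.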
How we built it
Geni is a sophisticated integration of vision and communication. We built the caretaker dashboard using React and TypeScript for a seamless user experience. The "eyes" of the system run on YOLOv8 for pose estimation, while the "brain" is powered by the Gemini API and Claude AI, which analyzes the visual context to understand pill types and user distress. The backend logic is written in Python, orchestrating ElevenLabs for natural vocal interactions and Twilio for the emergency messaging pipeline.
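To give a feel for how pose estimation feeds fall detection, here is a deliberately simple heuristic on 2D keypoints: a fallen person tends to have a near-horizontal torso, so the shoulder-to-hip horizontal span exceeds the vertical span. This is an illustrative sketch only (the real system combines YOLOv8 pose output with Gemini's scene analysis), and the keypoint names are assumed:

```python
def looks_fallen(keypoints: dict[str, tuple[float, float]]) -> bool:
    """Rough fall heuristic on 2D pose keypoints in image coordinates
    (y grows downward). Hypothetical keypoint names: "shoulder_mid"
    and "hip_mid" are midpoints of the left/right joint pairs.
    """
    sx, sy = keypoints["shoulder_mid"]
    hx, hy = keypoints["hip_mid"]
    dx, dy = abs(hx - sx), abs(hy - sy)
    # Upright torso: vertical span dominates. Lying down: horizontal dominates.
    return dx > dy
```

In practice a single frame is too noisy for a heuristic like this, which is why a vision model that understands context (was the person sitting down, or did they collapse?) sits on top of the raw geometry.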
Challenges we ran into
Our biggest hurdle was the AI's "mental load": early on, Geni struggled to run fall monitoring and complex medication analysis at the same time without lagging. We also faced a technical nightmare syncing Twilio with the AI voice: Windows kept launching an external media player for every voice clip, which interrupted our emergency triggers. We eventually overcame this by moving playback into background subprocesses, allowing the AI to speak and text simultaneously without skipping a beat.
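The background-subprocess fix can be sketched as follows: instead of handing the clip to the OS default handler (which opens a GUI player and blocks the pipeline), spawn a command-line player that returns control immediately. This is a minimal illustration of the approach, assuming a command-line player such as ffplay is installed; the helper names are ours:

```python
import subprocess
import sys

def build_play_cmd(path: str, platform: str) -> list[str]:
    """Pick a headless command-line audio player per platform (illustrative)."""
    if platform == "win32":
        # ffplay (ships with ffmpeg) plays without a window and exits when done
        return ["ffplay", "-nodisp", "-autoexit", "-loglevel", "quiet", path]
    if platform == "darwin":
        return ["afplay", path]
    return ["aplay", path]  # Linux; aplay expects WAV clips

def play_clip_async(path: str) -> subprocess.Popen:
    """Start playback in the background and return immediately, so the
    alerting pipeline (e.g. the Twilio send) is never blocked by audio."""
    return subprocess.Popen(build_play_cmd(path, sys.platform))
```

The key design point is that `Popen` does not wait for the player to finish, so the voice check-in and the WhatsApp alert can run concurrently.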
Accomplishments that we're proud of
We are incredibly proud of building a system that actually understands the context of a room. Successfully using the Gemini API to distinguish between a user just sitting down and a medical emergency was a huge win. Seeing the system successfully bridge the gap between a camera feed and a life-saving WhatsApp message in under 4 seconds made all the late-night debugging worth it.
What we learned
We gained deep experience in real-time computer vision and the complexities of human pose estimation. We learned how to handle asynchronous API calls under pressure and how to work around operating system limitations for seamless audio playback. Most importantly, we learned how to design tech specifically for accessibility, ensuring that our interface is natural and low-stress for seniors.
What's next for Geni
- Geni Home Hub: We plan to transition Geni into a dedicated, portable hardware device—much like an Amazon Alexa—that integrates directly with existing home security cameras.
- HealthSync: Integrating heart-rate wearables to provide a 360-degree view of the user's vitals, even when they are in another room.
- Doctor Integration: Expanding Gemini's analysis to automatically generate weekly health adherence reports that can be sent directly to a user's primary care physician.