Inspiration

Clarity was inspired by a simple but painful reality: dementia can make everyday life feel unpredictable, confusing, and unsafe. Families want to help, but they cannot be physically present every second of the day. We wanted to build something that could give patients a greater sense of independence while also giving caregivers more peace of mind.

We were especially drawn to the idea of making assistance feel calm and human instead of clinical. Rather than building a system that just sends alerts, we imagined a companion that could recognize familiar faces, verify medications, respond to speech, and gently guide someone through moments of uncertainty. That vision became Clarity.

What it does

Clarity is a dementia-assistance system with three connected parts: a patient-facing iPhone app, a local FastAPI backend for AI inference and scheduling, and a caregiver dashboard built with Next.js. The phone acts as a wearable assistant, while the caregiver dashboard gives family members a live window into the patient's safety and daily routine.

For the patient, Clarity provides a live camera view with detection overlays, a voice assistant that can answer questions aloud, family face recognition, medication verification, and GPS geofencing. For example, when a medication reminder goes off, the patient can hold up a bottle and Clarity confirms whether it is correct. If the patient leaves a safe zone, the app speaks a calm alert and logs the event for caregivers.
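To give a feel for the geofencing piece, here is a minimal sketch of the kind of check a backend like ours can run, assuming circular safe zones stored as a center coordinate plus radius. The `SafeZone` shape and function names are illustrative, not our exact code:

```python
import math
from dataclasses import dataclass

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

@dataclass
class SafeZone:
    name: str
    lat: float
    lon: float
    radius_m: float  # zone radius in meters

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two GPS points, in meters."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_any_zone(lat: float, lon: float, zones: list[SafeZone]) -> bool:
    """True if the patient's current position falls inside at least one safe zone."""
    return any(haversine_m(lat, lon, z.lat, z.lon) <= z.radius_m for z in zones)
```

When `inside_any_zone` flips to false, the app can speak the calm alert and write an event for caregivers, as described above.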

For the caregiver, the dashboard includes family photo uploads, safe-zone management on a map, reminder creation, a room view, and a live event log that updates in real time. This makes it easier to monitor routines and respond quickly when something unusual happens.
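Under the hood, the live event log is just rows in a shared table. Here is a hedged sketch of how the backend can write events that the dashboard then picks up over Supabase realtime; the `events` table name and its columns are assumptions for illustration:

```python
import os
from datetime import datetime, timezone

from supabase import create_client  # supabase-py client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

def log_event(event_type: str, detail: str) -> None:
    """Insert one event row; the Next.js dashboard subscribes to this
    table via Supabase realtime and renders new rows as they arrive."""
    supabase.table("events").insert({
        "type": event_type,            # e.g. "geofence_exit", "medication_verified"
        "detail": detail,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }).execute()
```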

How we built it

We built Clarity as a full-stack system designed to run privately over Tailscale rather than relying on public cloud deployment. The patient app was built in Expo React Native, the backend in FastAPI, and the caregiver dashboard in Next.js. Supabase handled shared data and realtime event updates.
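As a rough shape of that backend, here is a minimal FastAPI sketch of the kind of endpoint the Expo app posts camera frames to. The route name and response shape are illustrative, not our production API:

```python
from fastapi import FastAPI, File, UploadFile

app = FastAPI(title="Clarity backend")

@app.post("/analyze-frame")
async def analyze_frame(frame: UploadFile = File(...)) -> dict:
    """Accept a camera frame from the patient app and return detections.
    The real pipeline runs the vision model here (see the model sketch below)."""
    data = await frame.read()
    # Placeholder response; inference is shown in the next sketch.
    return {"bytes_received": len(data), "detections": []}
```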

On the AI side, we used a custom YOLOv11 model trained on the specific family members and medication bottles used in the demo. That let us support both face recognition and medication verification without depending on expensive third-party vision APIs. We paired that with Gemini for conversational responses and ElevenLabs for speech features, creating an experience that feels interactive rather than just reactive.
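For a sense of how that fits together, here is a simplified sketch of inference with a custom Ultralytics model; the weight filename and confidence threshold are placeholders rather than our exact training setup:

```python
from ultralytics import YOLO  # Ultralytics runtime; loads our custom weights

# Assumed weight file from the demo training run.
model = YOLO("clarity_yolo11.pt")

def detect(image_path: str, min_conf: float = 0.6) -> list[dict]:
    """Run the custom detector and return labeled boxes above a confidence
    threshold. Class names cover both family faces and medication bottles."""
    results = model(image_path)[0]
    detections = []
    for box in results.boxes:
        label = results.names[int(box.cls)]
        conf = float(box.conf)
        if conf >= min_conf:
            detections.append({"label": label, "confidence": round(conf, 3)})
    return detections
```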

We also designed the system so that the iPhone could communicate directly with the backend over Tailscale private networking. That choice made the architecture more secure and more demo-friendly, since the whole experience could run locally on a MacBook and phone without needing public infrastructure.
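In practice this meant the server simply binds locally and Tailscale handles reachability. A minimal sketch, assuming MagicDNS is enabled on the tailnet; the hostname in the comment is illustrative:

```python
import uvicorn

if __name__ == "__main__":
    # Bind on the MacBook; Tailscale makes the server reachable from the
    # phone at its tailnet hostname (MagicDNS), e.g.
    # http://macbook.tailnet-name.ts.net:8000, with no public exposure.
    uvicorn.run("main:app", host="0.0.0.0", port=8000)
```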

Challenges we ran into

One of our biggest challenges was integrating many moving parts into a single smooth experience. Clarity is not just a mobile app or just an AI model. It combines computer vision, voice interaction, scheduling, realtime logging, geofencing, and a caregiver dashboard, so reliability across the full pipeline mattered just as much as any one feature.

Another challenge was making the project practical within hackathon constraints. We needed accurate enough detection for specific faces and bottles, low-latency communication between the phone and backend, and a setup simple enough to demo live. Training and wiring a custom YOLO model into the backend while also making the frontend feel polished was a major balancing act.

We also had to think carefully about privacy and deployment. Since this is a sensitive use case, we did not want to depend on public cloud infrastructure for the core demo. Setting up the system to run over Tailscale and keeping the dashboard locally accessible added complexity, but it made the project feel much more aligned with the real-world problem we were trying to solve.

What's next for Clarity

Next, we want to make Clarity more robust and more personalized. That includes improving detection accuracy across changes in lighting and environment, expanding medication recognition beyond a small demo set, and making voice interactions more context-aware.

We also want to add authentication and production-ready security to the caregiver dashboard, support more scalable deployment options, and improve the safe-zone and room-monitoring experiences. Over time, we could see Clarity evolving from a hackathon prototype into a real assistive platform for families navigating memory-related conditions.

At its core, our goal stays the same: help patients preserve independence, help caregivers stay informed, and make moments of confusion feel a little more manageable.
