Inspiration

Alzheimer's disease affects over 55 million people worldwide, and one of its most heartbreaking symptoms is the loss of facial recognition — the moment a patient looks at their own child and sees a stranger. We wanted to do something about that. The idea came from thinking about how powerful modern AI has become at recognizing faces, and asking ourselves: what if we put that directly in front of someone's eyes? Meta smart glasses gave us the perfect canvas.

What it does

Nazr is a wearable memory companion built into Meta smart glasses. When an Alzheimer's patient looks at a familiar face, the glasses instantly recognize who that person is and discreetly deliver three key pieces of context through audio: their name, their relationship to the patient, and a summary of their last conversation together. No screens, no buttons, no caregiver needed, just a quiet, dignified reminder delivered right when it's needed most. Nazr also notices whether the wearer has had water or eaten within a set interval, and updates a caregiver dashboard in the background.
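
To make those three pieces of context concrete, here is a minimal sketch of the kind of record the system keeps per known person. The field names and the wellness timestamps are our illustration, not a fixed schema:

```python
from datetime import datetime

# Hypothetical per-person record: name, relation, and a last-conversation
# summary are the three pieces of context read out to the wearer.
profile = {
    "name": "Sarah",
    "relationship": "daughter",
    "last_conversation": "her new job, and a visit planned for Sunday",
}

# Hypothetical wellness log backing the hydration/meal reminders and the
# caregiver dashboard.
wellness = {
    "last_drank_water": datetime(2025, 1, 18, 9, 30),
    "last_ate": datetime(2025, 1, 18, 8, 0),
}
```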

How we built it

We used Meta smart glasses as the hardware platform, leveraging the built-in camera to capture the wearer's field of view in real time. That footage is sent to the Gemini API, which handles facial recognition and generates a natural language response with the relevant context. Recognized faces are matched against a stored profile database containing each person's name, relation, and conversation history. The response is then delivered back to the wearer through the glasses' built-in speakers.
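
A minimal sketch of that loop, assuming the Python Gemini SDK (google-generativeai), OpenCV standing in for the glasses' camera feed, and a PROFILES store like the record above. The model choice, prompt wording, and the final speak-through-speakers step are our assumptions, not the exact production wiring:

```python
import os

import cv2
import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model choice

# Profile store keyed by name (see the record sketch above).
PROFILES = {
    "Sarah": {
        "relationship": "daughter",
        "last_conversation": "her new job, and a visit planned for Sunday",
    },
}

def identify(frame_bgr):
    """Ask Gemini which known person, if any, is in the current frame."""
    image = Image.fromarray(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    prompt = (
        "Which of these people, if any, appears in this photo: "
        + ", ".join(PROFILES)
        + "? Reply with just the name, or 'none'."
    )
    return model.generate_content([prompt, image]).text.strip()

def remind(name):
    """Compose the whispered reminder: name, relation, last conversation."""
    person = PROFILES.get(name)
    if person is None:
        return None
    return (
        f"This is {name}, your {person['relationship']}. "
        f"Last time you talked about {person['last_conversation']}."
    )

cap = cv2.VideoCapture(0)  # stand-in for the glasses' camera
ok, frame = cap.read()
cap.release()
if ok:
    reminder = remind(identify(frame))
    if reminder:
        print(reminder)  # in the real build, routed to the glasses' speakers
```

Prompting with the list of known names is the simplest possible wiring; a production build would more likely match face embeddings against the profile database first and only then ask the model to phrase the reminder.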

Challenges we ran into

- Latency: getting the recognition pipeline fast enough to feel natural was one of our biggest hurdles. A few seconds of delay can completely break the moment (one mitigation is sketched below).
- Real-world accuracy: facial recognition under uncontrolled lighting, off-angle views, and motion is significantly harder than it looks in demos.
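
We don't detail the fix here, but the usual first lever on the latency point is shrinking what you send. A sketch, assuming OpenCV, of downscaling and JPEG-compressing each frame before upload; the width and quality targets are ours, chosen to illustrate the speed-versus-accuracy trade-off:

```python
import cv2

def compress_frame(frame_bgr, max_width=640, quality=70):
    """Downscale and JPEG-encode a frame to cut upload time per request."""
    h, w = frame_bgr.shape[:2]
    if w > max_width:
        scale = max_width / w
        frame_bgr = cv2.resize(frame_bgr, (max_width, int(h * scale)))
    ok, jpeg = cv2.imencode(".jpg", frame_bgr,
                            [cv2.IMWRITE_JPEG_QUALITY, quality])
    return jpeg.tobytes() if ok else None
```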

Accomplishments that we're proud of

- Built a fully working end-to-end demo, from camera feed to recognized face to audio output, within the hackathon timeframe
- Successfully integrated the Gemini API for real-time vision and language tasks
- Built something with genuine real-world impact for one of the most underserved populations in tech

What we learned

- How to work with the Gemini API for real-time vision tasks under time pressure
- That the hardest part of assistive tech isn't the AI, it's the UX: making it feel natural, fast, and trustworthy

What's next for Nazr

- Expanding the profile system: letting caregivers and family members easily add and update people through a companion app (a hypothetical endpoint is sketched below)
- Reducing latency further through optimized on-device processing
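
Nothing of the companion app exists yet; purely as an illustration, a caregiver-facing profile update could be as small as this hypothetical Flask endpoint:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
PROFILES = {}  # in-memory stand-in for the real profile database

@app.post("/profiles/<name>")
def upsert_profile(name):
    """Hypothetical endpoint: a caregiver adds or updates one person's profile."""
    data = request.get_json(force=True)
    PROFILES[name] = {
        "relationship": data.get("relationship"),
        "last_conversation": data.get("last_conversation"),
    }
    return jsonify({"name": name, **PROFILES[name]})

if __name__ == "__main__":
    app.run(port=5000)
```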

Built With

Gemini API, Meta smart glasses