Inspiration
Dementia and short-term memory loss are isolating. People often forget their train of thought or whether they took their medication. We built a safety net: an empathetic companion that sits quietly in the background and helps only when needed, replacing manual journaling.
What it does
Memory Anchor acts as your short-term memory. It runs quietly in your browser, buffering the last five minutes of webcam video and audio. Feeling confused? Click "What was I doing?" to hear a comforting AI summary of your recent actions. We also built an "AI Observed Todo List" that watches your footage and automatically checks off tasks like "Take medicine" as you do them.
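The rolling five-minute buffer can be sketched with a small class like the one below. This is a minimal illustration, not our actual code: `RollingBuffer`, the timestamped-chunk shape, and the window length are assumptions. In the browser, chunks would arrive from `MediaRecorder`'s `ondataavailable` event.

```javascript
// Sketch: keep only the last `windowMs` of media chunks in memory.
// Each chunk is stored with the timestamp it arrived at; anything
// older than the window is discarded on every push.
class RollingBuffer {
  constructor(windowMs = 5 * 60 * 1000) {
    this.windowMs = windowMs;
    this.chunks = []; // [{ t: timestampMs, blob: data }]
  }
  push(blob, t = Date.now()) {
    this.chunks.push({ t, blob });
    const cutoff = t - this.windowMs;
    // Chunks arrive in time order, so evict from the front.
    while (this.chunks.length && this.chunks[0].t < cutoff) this.chunks.shift();
  }
  snapshot() {
    // Everything currently inside the window, oldest first.
    return this.chunks.map((c) => c.blob);
  }
}
```

Wiring it to a recorder would look like `recorder.ondataavailable = (e) => buffer.push(e.data)` with a one-second timeslice, so a "What was I doing?" click just calls `snapshot()`.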
The Curb Cut Effect
Built for people with dementia, Memory Anchor perfectly illustrates the Curb Cut Effect: a disability solution that benefits everyone. Whether you have ADHD, frequently forget why you entered a room, or hate manual habit tracking, this passive life logger helps. By automatically checking off daily goals, it lifts a mental load for anyone and keeps you grounded in the present moment.
How we built it
We used React, Vite, and Tailwind CSS for the frontend. The brain is Google Gemini 2.5 Flash, which processes a single massive prompt of base64 images, audio chunks, and to-do lists. We used ElevenLabs for warm text-to-speech and wrote a custom script to manage browser memory by saving only the most visually significant frames.
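One plausible way to keep only "visually significant" frames is a frame-difference heuristic: compare each candidate frame's pixel data against the last frame we kept, and keep it only if the mean absolute difference crosses a threshold. The function names, the grayscale pixel arrays, and the threshold value below are illustrative assumptions, not our exact script.

```javascript
// Sketch: given an ordered list of frames (each a flat array of pixel
// values, e.g. grayscale samples from canvas getImageData), keep a frame
// only when it differs enough from the last kept frame.
function meanAbsDiff(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += Math.abs(a[i] - b[i]);
  return sum / a.length;
}

function selectSignificantFrames(frames, threshold = 10) {
  const kept = [];
  let last = null;
  for (const frame of frames) {
    if (last === null || meanAbsDiff(frame, last) > threshold) {
      kept.push(frame);
      last = frame; // compare future frames against the last *kept* one
    }
  }
  return kept;
}
```

Comparing against the last kept frame (rather than the immediately previous one) means slow, gradual changes still eventually register, while near-duplicate frames never pile up in memory.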
Challenges we ran into
Deployment was our biggest hurdle. The app worked locally, but pushing to Vercel caused serverless timeouts, Vite proxy issues, and 502 Bad Gateway errors due to our massive image and audio payloads. Securing environment variables without breaking the frontend was also tricky. On the AI side, getting Gemini to actually reason over our to-do list took prompt engineering: we had to inject the active tasks into the system prompt dynamically before each call.
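The dynamic injection step boils down to rebuilding the system prompt from the current task state before every request. The prompt wording, task shape, and function name below are hypothetical stand-ins for illustration:

```javascript
// Sketch: rebuild the system prompt from the live to-do state so Gemini
// only ever sees tasks that are still open. Task objects are assumed to
// look like { text: string, done: boolean }.
function buildSystemPrompt(tasks) {
  const active = tasks.filter((t) => !t.done);
  const list = active.map((t, i) => `${i + 1}. ${t.text}`).join('\n');
  return [
    'You are a gentle memory companion.',
    'Watch the provided frames and audio. If the user completes one of',
    'these active tasks, report its number so it can be checked off:',
    list || '(no active tasks)',
  ].join('\n');
}
```

Because the prompt is regenerated per call, a task checked off in one response disappears from the next request's prompt automatically.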
Accomplishments that we're proud of
We are incredibly proud of the "AI Observed Todo List." Getting an AI to autonomously cross tasks off a checklist by watching a video feed feels like a huge leap for accessibility. We are also proud of overcoming deployment nightmares to get a multimodal AI app running smoothly for our demo.
What we learned
We learned "it works on my machine" is the scariest phrase in a hackathon. We got a crash course in managing Vite configs, Vercel routing, and API keys. We also saw firsthand how fast and capable Gemini 2.5 Flash is at analyzing multimodal timelines.