Remember Me
Inspiration
Over 55 million people worldwide live with some stage of dementia, a number projected to nearly triple by 2050. Watching family members and people around us face these struggles every day, we were inspired by the emotional and practical challenges that both patients and caregivers carry. Moments of confusion can create fear, stress, and a loss of independence, and we wanted to build something that could help restore confidence and connection.
Remember Me was inspired by the idea that technology can serve as an external layer of memory support. Instead of replacing human care, we wanted to create a tool that strengthens it by helping patients better understand who and what is around them and go about their day with as much normalcy and comfort as possible.
What it does
Most existing memory-retention solutions are basic reminder apps. Remember Me instead addresses something more fundamental: the erosion of relationships and identity in patients' daily lives, which often makes it difficult to recognize loved ones and keep track of important daily tasks. Remember Me is an assistive memory-support tool designed for individuals with dementia. It helps users recognize familiar people and recall important tasks in real time.
The system has two main parts: the vision system and the dashboard.
The vision system is the patient-facing interface. It uses a camera feed, ideally from AR glasses but currently from a webcam, to detect and recognize familiar faces. When the system identifies someone it knows, it retrieves stored memories about that person and displays their name and relationship, such as daughter or friend. It can also use the ElevenLabs API to generate real-time audio that explains who the person is in more detail by summarizing stored memories. This is ideally triggered by voice input, such as when the patient quietly asks, “Who is this?”
The vision system also supports reminders. Patients can add tasks and scheduled events through the dashboard, and those reminders appear visually in the interface and are spoken aloud 5 minutes before the event.
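The five-minute lead time described above can be sketched as a small scheduling check. This is an illustrative snippet, not our production scheduler; the event tuples and function name are hypothetical.

```python
from datetime import datetime, timedelta

# Spoken reminders fire in the 5 minutes before an event starts.
SPEAK_LEAD = timedelta(minutes=5)

def due_reminders(events, now):
    """Return events whose spoken-reminder window has arrived.

    `events` is a list of (title, start_time) tuples; an event is due
    when `now` falls within the 5 minutes before its start.
    """
    return [
        (title, start)
        for title, start in events
        if start - SPEAK_LEAD <= now < start
    ]

events = [
    ("Take medication", datetime(2025, 1, 1, 9, 0)),
    ("Lunch with Sarah", datetime(2025, 1, 1, 12, 30)),
]
# At 8:56, only the 9:00 medication reminder is inside its window.
print(due_reminders(events, datetime(2025, 1, 1, 8, 56)))
```

A loop like this can run on a short timer in the vision interface, handing any due titles to the text-to-speech layer.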
In addition, the system records conversation data, summarizes it with an LLM, and stores it in the memory database. That way, future interactions become part of the patient’s memory support system.
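The summarize-and-store step can be sketched roughly as follows. The prompt wording, `sarah-01` identifier, and in-memory `db` are illustrative placeholders; the real system sends the prompt to an LLM and writes to the SQLite memory store.

```python
# Hypothetical sketch of the conversation-to-memory step.

def build_summary_prompt(person_name, transcript):
    """Compact a raw conversation transcript into a memory-summary request."""
    return (
        f"Summarize this conversation between the patient and {person_name} "
        "in one or two sentences, keeping names, places, and plans:\n\n"
        + transcript
    )

def store_memory(db, person_id, summary):
    """Attach a summary to the recognized person in the memory store."""
    db.setdefault(person_id, []).append(summary)

db = {}
prompt = build_summary_prompt(
    "Sarah", "We talked about visiting the park on Sunday.")
# An LLM response to `prompt` might look like this:
summary = "Sarah and the patient planned a park visit on Sunday."
store_memory(db, "sarah-01", summary)
```

Keeping summaries to one or two sentences is deliberate: short memories are easier to read aloud and to display on the vision interface.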
The dashboard is used by both patients and caretakers. Patients can manage their own information, memories, and schedules. Caretakers can access assigned patients, view and edit their memory tree, and help manage reminders and important information.
How we built it
We built Remember Me as a full-stack application with two frontend experiences and one backend service.
Frontend
We used React, Vite, and TypeScript to build both the visual interface and the dashboard experience.
Backend
We used FastAPI for the backend and SQLite for the database. The database stores familiar faces, related memories, user information, and scheduled reminders.
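A minimal sketch of those four tables is below. The column names are illustrative, not our exact schema, and the embedding is stored as an opaque blob.

```python
import sqlite3

# Minimal sketch of the tables described above (illustrative names).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    role TEXT CHECK (role IN ('patient', 'caretaker'))
);
CREATE TABLE faces (
    id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),  -- the patient this face belongs to
    person_name TEXT NOT NULL,             -- e.g. "Sarah"
    relationship TEXT,                     -- e.g. "daughter"
    embedding BLOB                         -- serialized face embedding
);
CREATE TABLE memories (
    id INTEGER PRIMARY KEY,
    face_id INTEGER REFERENCES faces(id),
    summary TEXT NOT NULL                  -- LLM-compacted conversation note
);
CREATE TABLE reminders (
    id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    title TEXT NOT NULL,
    scheduled_at TEXT NOT NULL             -- ISO-8601 timestamp
);
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # → ['faces', 'memories', 'reminders', 'users']
```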
AI and Integrations
We used MediaPipe on the client side for face detection and InsightFace / ArcFace on the server side for face recognition. This allowed us to separate lightweight face detection from more advanced recognition logic.
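The server-side matching step boils down to comparing a query embedding against stored embeddings with cosine similarity. The sketch below uses random 512-dimensional vectors in place of real ArcFace embeddings, and the 0.5 threshold is an illustrative value, not a tuned one.

```python
import numpy as np

MATCH_THRESHOLD = 0.5  # illustrative cutoff, not a tuned value

def best_match(query, known):
    """Return (name, similarity) for the closest stored embedding, or None
    if nothing clears the threshold."""
    best_name, best_sim = None, -1.0
    for name, emb in known.items():
        sim = float(np.dot(query, emb) /
                    (np.linalg.norm(query) * np.linalg.norm(emb)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    if best_sim >= MATCH_THRESHOLD:
        return best_name, best_sim
    return None

# Stand-ins for 512-d ArcFace embeddings.
rng = np.random.default_rng(0)
sarah = rng.normal(size=512)
known = {"Sarah": sarah, "Tom": rng.normal(size=512)}
# A slightly noisy view of Sarah's face should still match her embedding.
print(best_match(sarah + 0.1 * rng.normal(size=512), known))
```

Returning `None` below the threshold matters for the intrusiveness concern discussed later: showing nothing is safer than showing a wrong name.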
For audio output, we integrated the ElevenLabs text-to-speech API so the system can speak reminders and memory summaries aloud. We also explored speech-to-text for voice-triggered interactions.
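A hedged sketch of that text-to-speech call is below. The endpoint path, `xi-api-key` header, and `model_id` field follow the public ElevenLabs API shape at the time of writing, but the voice ID, API key, and model name here are placeholders; check the current API reference before relying on this.

```python
import json
import urllib.request

API_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"

def build_tts_request(text, voice_id, api_key):
    """Assemble the URL, headers, and JSON body for one speech request."""
    return {
        "url": API_URL.format(voice_id=voice_id),
        "headers": {"xi-api-key": api_key, "Content-Type": "application/json"},
        "body": {"text": text, "model_id": "eleven_multilingual_v2"},
    }

def speak(text, voice_id, api_key):
    """POST the request and return raw audio bytes (network call, not run here)."""
    req = build_tts_request(text, voice_id, api_key)
    http_req = urllib.request.Request(
        req["url"],
        data=json.dumps(req["body"]).encode("utf-8"),
        headers=req["headers"],
        method="POST",
    )
    with urllib.request.urlopen(http_req) as resp:
        return resp.read()  # audio bytes to play back for the patient

req = build_tts_request("This is Sarah, your daughter.", "VOICE_ID", "API_KEY")
print(req["url"])
```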
To make the system more adaptive over time, we used an LLM to process recorded conversations, compact them into short summaries, and store those summaries as memories connected to recognized people.
Challenges we ran into
One of our biggest challenges was balancing real-time performance with limited hardware. Our ideal setup involved AR glasses, but due to equipment constraints, we had to rely on webcams, which meant we needed to adapt the experience while still keeping it practical and responsive.
Another challenge was building a facial recognition flow that felt helpful rather than intrusive or unreliable. In a tool designed for dementia support, accuracy matters a lot. Incorrect recognition could confuse users rather than help them, so we had to think carefully about when and how information is presented.
We also ran into challenges with summarizing conversations into useful memories. It is easy for an LLM to generate text, but much harder to make those summaries concise, relevant, and helpful in a memory-support context.
Designing for accessibility was another major challenge. Because this product is intended for individuals with cognitive difficulties, every interaction had to be simple, clear, and low-friction. That influenced both the dashboard design and the authentication flow.
Finally, handling the many-to-many relationship between caretakers and patients added complexity to the data model and access control logic.
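The standard way to model that relationship is a join table keyed by both IDs, which is also where the access check lives. Table and column names below are illustrative, not our actual schema.

```python
import sqlite3

# Caretakers <-> patients as a many-to-many join table (illustrative names).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, role TEXT);
CREATE TABLE caretaker_patients (
    caretaker_id INTEGER REFERENCES users(id),
    patient_id   INTEGER REFERENCES users(id),
    PRIMARY KEY (caretaker_id, patient_id)
);
""")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)", [
    (1, "Alice", "caretaker"),
    (2, "Bob", "patient"),
    (3, "Carol", "patient"),
])
# Alice is assigned to Bob only.
conn.execute("INSERT INTO caretaker_patients VALUES (1, 2)")

def can_access(conn, caretaker_id, patient_id):
    """Access-control check: is this caretaker assigned to this patient?"""
    row = conn.execute(
        "SELECT 1 FROM caretaker_patients "
        "WHERE caretaker_id = ? AND patient_id = ?",
        (caretaker_id, patient_id)).fetchone()
    return row is not None

print(can_access(conn, 1, 2), can_access(conn, 1, 3))  # True False
```

Every caretaker-facing endpoint can then gate memory-tree and reminder edits behind a check like `can_access` before touching patient data.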
Accomplishments that we're proud of
We are proud that we built a system that combines computer vision, voice technology, authentication, memory storage, and LLM summarization into one cohesive experience.
We are especially proud that Remember Me is not just a technical demo. It is a thoughtful assistive tool built around a real human need. The face recognition pipeline, the audio memory recall, the reminder system, and the caretaker dashboard all work together toward a meaningful purpose.
We are also proud of creating a passkey-focused authentication flow that reduces the burden on patients, and of building a dashboard that supports both independent use and caregiver collaboration.
What we learned
We learned that building for accessibility requires more than simplifying a user interface. It requires rethinking the entire system around the user’s lived experience.
We also learned how challenging it is to combine multiple AI systems into a real-time product. Face detection, face recognition, text summarization, speech synthesis, and scheduling all have to work together smoothly for the experience to feel natural.
Most importantly, we learned that technology can do more than automate tasks. It can support dignity, independence, and connection for people facing memory loss.
What's next for Remember Me
Our next step is to move closer to the original vision of using AR glasses instead of webcams, creating a more seamless and wearable experience for patients.
We also want to improve the quality and reliability of face recognition, make memory summaries more personalized and context-aware, and strengthen reminder features so they adapt better to the patient’s daily routines.
On the dashboard side, we want to expand the caretaker experience with better patient insights, easier memory editing, and improved schedule management.
Long term, we want to continue refining Remember Me into a practical assistive tool that can genuinely improve quality of life for dementia patients and the people who care for them.
Built With
- arcface
- auth0
- elevenlabs-api
- fastapi
- insightface
- mediapipe
- react
- sqlite
- typescript
- vite
