Inspiration
While human empowerment is often discussed in the context of minorities or underrepresented groups, we believe people living with Alzheimer’s are often left out of that conversation. So we asked ourselves: What if people with memory loss had a gentle, intelligent companion to guide, support, and reassure them every day? That’s how REMI was born. Short for Reminiscence, it's an AI-powered memory companion built to promote emotional well-being, routine, and daily cognitive practice — for both patients and the doctors who care for them.
What it does
REMI empowers patients by turning memory into a personalized daily practice: it starts with simple questions and gradually increases difficulty based on performance. This adaptive challenge keeps patients engaged while giving doctors visual insight into each patient's performance.
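To give a sense of how the adaptive part works, here is a minimal sketch of the game loop; the question bank, class name, and thresholds are illustrative placeholders rather than our exact code.

```python
# Illustrative sketch of REMI's adaptive memory game (names and questions are placeholders).
import random

QUESTIONS = {
    1: ["What day of the week is it today?", "What is your first name?"],
    2: ["What did you have for breakfast this morning?", "What month is it?"],
    3: ["What was the name of the street you grew up on?", "Who visited you last weekend?"],
}

class MemorySession:
    def __init__(self, level: int = 1):
        self.level = level   # current difficulty, 1 (easy) to 3 (hard)
        self.correct = 0     # correct answers this session
        self.total = 0       # questions asked this session

    def next_question(self) -> str:
        return random.choice(QUESTIONS[self.level])

    def record_answer(self, was_correct: bool) -> None:
        self.total += 1
        if was_correct:
            self.correct += 1
            # Step the difficulty up after every third correct answer.
            if self.correct % 3 == 0 and self.level < 3:
                self.level += 1
        elif self.level > 1:
            # Step gently back down after a miss so the game stays encouraging.
            self.level -= 1
```

Stepping down a single level after a miss, instead of resetting, keeps the practice supportive rather than punishing; the per-session counts are also what feed the doctor-facing progress view.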
How we built it
- Python Backend: Handled API calls, core logic, and memory features. Integrated Whisper for speech-to-text, Gemini for conversational AI, and ElevenLabs for voice generation.
- JavaScript Frontend: Displayed reminders, visual prompts, and chat history in a simple, accessible interface using HTML, CSS, and JS.
- Voice Pipeline: Audio input was captured in the browser, sent to the backend for processing, and returned as a natural voice response (a simplified sketch of this pipeline follows the list below).

APIs used:
- Whisper – real-time speech-to-text
- Gemini – intelligent, context-aware conversation
- ElevenLabs – realistic and friendly voice output

Tools: VS Code, Replit, and GitHub for collaboration and rapid prototyping
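As a rough illustration of how the three services fit together, here is a simplified version of the backend pipeline. Model names, the ElevenLabs endpoint, and environment variables are assumptions for the sketch; the real integration details vary with SDK versions.

```python
# Simplified voice pipeline: Whisper (speech-to-text) -> Gemini (reply) -> ElevenLabs (voice).
# Model names, endpoint, and env vars below are assumptions, not our exact configuration.
import os
import requests
import whisper
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
stt_model = whisper.load_model("base")                  # local Whisper model
chat_model = genai.GenerativeModel("gemini-1.5-flash")  # conversational model

def handle_audio(audio_path: str) -> bytes:
    # 1. Speech-to-text: transcribe the clip recorded in the browser.
    user_text = stt_model.transcribe(audio_path)["text"]

    # 2. Conversation: ask Gemini for a calm, supportive reply.
    prompt = (
        "You are REMI, a gentle memory companion for someone living with "
        f"Alzheimer's. Respond warmly and simply to: {user_text}"
    )
    reply_text = chat_model.generate_content(prompt).text

    # 3. Text-to-speech: the ElevenLabs REST endpoint returns raw audio bytes.
    voice_id = os.environ["ELEVENLABS_VOICE_ID"]
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
        json={"text": reply_text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.content  # audio bytes for the frontend to play back
```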
Challenges we ran into
One of our main challenges was designing a simple, intuitive interface that could be easily used by individuals with cognitive decline. We focused on making the UI feel calm, clear, and ergonomic, avoiding anything that could overwhelm or confuse users. We also spent time researching what people with Alzheimer’s typically forget, how memory recall works, and how to frame questions in a way that feels familiar and supportive. Getting the front end and back end to link up and pass audio between them was also extremely challenging, but with help from the mentors we got it working.
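To make that linking concrete: the bridge between the browser and the backend can be as small as a single upload endpoint. The sketch below assumes Flask and reuses the `handle_audio` helper from the pipeline sketch above, so treat it as an outline rather than our exact code.

```python
# Minimal Flask endpoint for the browser <-> backend audio exchange (Flask is an assumption).
import io
import tempfile
from flask import Flask, request, send_file

from voice_pipeline import handle_audio  # hypothetical module holding the pipeline sketch above

app = Flask(__name__)

@app.route("/talk", methods=["POST"])
def talk():
    # The frontend posts the recorded clip as multipart form data under the "audio" field.
    clip = request.files["audio"]
    with tempfile.NamedTemporaryFile(suffix=".webm", delete=False) as tmp:
        clip.save(tmp.name)
        reply_audio = handle_audio(tmp.name)
    # Return the generated voice so the frontend can play it back immediately.
    return send_file(io.BytesIO(reply_audio), mimetype="audio/mpeg")
```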
Accomplishments that we're proud of
We built a fully functional prototype of REMI that can initiate a conversation, listen to the user’s spoken input, and generate personalized vocal responses using generative AI.
We also successfully developed the adaptive memory game, which adjusts question difficulty based on user performance and tracks correct answers across sessions.
On top of that, we designed a doctor interface in Figma to present results and visualize patient progress, giving doctors a clear view of cognitive performance over time.
What we learned
- Designing for accessibility means empathy must come first — we learned how to make inclusive, user-centered tech.
- Voice interfaces aren’t just technical challenges — they are emotional bridges between people and machines.
- Collaboration and timeboxing are key: we had to prioritize features that offered maximum impact within our constraints.
- AI tools can be incredibly powerful when used intentionally — aligning tech with human empowerment makes all the difference.
What's next for Remi — A Friend That Remembers
In the future, we want Remi to grow into a more complete and personal memory companion. It already collects and learns from past conversations with the user, and we plan to build on that by adding visual memory support like familiar photos and faces. Over time, Remi will be able to recall meaningful personal details like names, places, and life events, helping patients stay connected to their memories and the people around them in a way that feels natural, supportive, and familiar.
Built With
- api
- elevenlabs
- figma
- gemini
- github
- javascript
- node.js
- openai
- python
- vscode
- whisper
