Inspiration
We created Echo to address the growing loneliness, confusion, and loss of independence experienced by people with Alzheimer’s.
What it does
Echo uses real-time voice input and contextual AI processing to provide personalized cognitive support for people with Alzheimer’s and dementia. Through speech-to-text and conversational AI, it understands conversations, routines, reminders, and behavioral patterns, and it generates spoken responses that assist with memory recall, daily navigation, medication schedules, emotional reassurance, and caregiver communication. By continuously analyzing interaction data, Echo acts as an adaptive cognitive companion that promotes independence, safety, and long-term cognitive monitoring.
How we built it
We built Echo using React.js, Node.js, and Express.js, with React Router, Tailwind CSS, and Axios on the frontend. We integrated the OpenAI API with Whisper speech-to-text and TTS voice synthesis to create a real-time AI cognitive companion for Alzheimer’s and dementia patients. Echo supports natural voice conversations, memory recall assistance, medication and routine reminders, health tracking, caregiver alerts, personalized responses, conversation memory, and cognitive monitoring through real-time AI processing and an MCP architecture.
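The server side of that voice loop can be sketched as below. Our real backend uses Express and the OpenAI SDK; this standalone version calls OpenAI's REST endpoints (`/v1/audio/transcriptions` for Whisper and `/v1/chat/completions`) directly with Node's built-in `fetch`, and the model name is an illustrative assumption. The `buildMessages` helper shows how conversation memory is kept small between turns.

```javascript
const API = "https://api.openai.com/v1";
const KEY = process.env.OPENAI_API_KEY; // required for the two network calls below

// Conversation memory: keep only the most recent turns so the prompt stays
// small while Echo still remembers the ongoing exchange.
function buildMessages(systemPrompt, history, userText, maxTurns = 6) {
  return [
    { role: "system", content: systemPrompt },
    ...history.slice(-maxTurns * 2), // each turn = one user + one assistant message
    { role: "user", content: userText },
  ];
}

// Whisper STT: raw audio in, transcript text out.
async function transcribe(audioBuffer) {
  const form = new FormData();
  form.append("file", new Blob([audioBuffer]), "speech.webm");
  form.append("model", "whisper-1");
  const res = await fetch(`${API}/audio/transcriptions`, {
    method: "POST",
    headers: { Authorization: `Bearer ${KEY}` },
    body: form,
  });
  return (await res.json()).text;
}

// Chat completion: transcript plus memory in, reply text out.
// "gpt-4o-mini" is only a placeholder model name for this sketch.
async function reply(messages) {
  const res = await fetch(`${API}/chat/completions`, {
    method: "POST",
    headers: { Authorization: `Bearer ${KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ model: "gpt-4o-mini", messages }),
  });
  return (await res.json()).choices[0].message.content;
}
```

The reply text is then sent to TTS and streamed back to the browser as audio.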
Challenges we ran into
One of the biggest challenges during development was implementing reliable voice interaction. Since Echo is designed as a conversational cognitive companion, producing natural spoken responses required integrating external TTS systems and managing API credit limits for continuous voice generation.
Accomplishments that we're proud of
We are proud of building Echo into a multilingual AI cognitive support platform capable of understanding and responding in multiple languages, making it accessible to a wider range of Alzheimer’s and dementia patients and their families. We also developed a clean, intuitive user interface designed for elderly users, prioritizing simplicity, readability, and ease of navigation.
What we learned
We learned how to connect AI APIs to a web app: user input travels from the React frontend to a Node.js backend, which securely calls services such as the OpenAI API. The backend then processes the AI output (text or speech) and returns it to the frontend to display or play as voice in real time.
What's next for Echo
We plan to expand Echo into a B2B healthcare solution by partnering with memory care facilities, hospitals, and caregivers to integrate the AI companion into daily patient-support workflows and improve quality of life for people living with Alzheimer’s and dementia.
Built With
- axios
- react.js
- express.js
- health-tracking
- javascript
- mcp-architecture
- node.js
- openai-api
- react-router
- reminders
- tailwind-css
- tts-voice-synthesis
- whisper-stt