Inspiration

In traditional healthcare systems, post-operative monitoring often fails elderly patients due to infrequent check-ins, complex interfaces, and delayed symptom reporting. We saw elderly patients struggling with digital health apps and hospitals overwhelmed by manual follow-ups. EchoCare was born from the urgent need to make recovery monitoring natural, accessible, and proactive—using the power of voice AI to bridge the gap in geriatric care.

What It Does

EchoCare is a voice-first healthcare companion that conducts daily check-ins with post-operative patients through natural conversation. Patients simply speak about their symptoms, pain levels, and medication adherence, and EchoCare responds empathetically while flagging critical health issues in real time. The system automatically detects "health flags" like severe pain or missed medication, logging them for healthcare providers while giving patients immediate guidance—all through an elderly-friendly, high-contrast interface with adjustable text sizes.

How We Built It

We built EchoCare using Next.js 15 with TypeScript for a robust, scalable web app, paired with Tailwind CSS and shadcn/ui for an accessible UI. The brain of the system runs on Nexos.ai, which routes requests to Claude 3.5 Sonnet as the primary LLM, with automatic fallback to GPT-4o for high availability. For the empathetic nurse voice, we integrated ElevenLabs' TTS with a professional voice model that delivers low-latency, natural responses. The Web Speech API handles speech-to-text, closing a voice loop that feels like talking to a real nurse.
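The primary/fallback routing can be sketched as a small wrapper around the gateway call. This is an illustrative sketch only: the real Nexos.ai gateway API differs, and `callModel` here is a hypothetical stand-in.

```typescript
// Hypothetical sketch of primary/fallback model routing.
// `callModel` stands in for the Nexos.ai gateway call; names are illustrative.

type ModelId = "claude-3.5-sonnet" | "gpt-4o";

async function callModel(model: ModelId, prompt: string): Promise<string> {
  // Placeholder: in the real app this would hit the Nexos.ai gateway.
  if (model === "claude-3.5-sonnet") {
    throw new Error("primary model unavailable"); // simulate an outage
  }
  return `[${model}] reply to: ${prompt}`;
}

async function routeWithFallback(prompt: string): Promise<string> {
  try {
    // Try the primary model first.
    return await callModel("claude-3.5-sonnet", prompt);
  } catch {
    // Primary failed: fall back to GPT-4o so the check-in continues.
    return await callModel("gpt-4o", prompt);
  }
}
```

The key design point is that the fallback is invisible to the patient: the conversation simply continues, whichever model answered.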

Challenges We Overcame

One major challenge was latency in voice interactions: patients expect immediate responses, but LLM processing can take seconds. We addressed this by keeping AI responses under 40 words and using ElevenLabs' streaming TTS. Microphone permissions also proved tricky on mobile devices; we added graceful error handling with retry buttons and browser-specific guidance. Finally, meeting WCAG AA/AAA contrast requirements while maintaining visual appeal required extensive color-contrast testing and iteration on user feedback.
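The microphone-permission handling boils down to mapping the error names browsers raise to plain-language guidance plus a retry decision. The mapping and wording below are a sketch, not EchoCare's actual copy:

```typescript
// Sketch: map browser microphone error names (e.g. from getUserMedia or the
// Web Speech API) to elderly-friendly guidance. Wording is illustrative.

interface MicGuidance {
  message: string;
  canRetry: boolean;
}

function micErrorGuidance(errorName: string): MicGuidance {
  switch (errorName) {
    case "NotAllowedError":
      // The user (or a browser policy) denied microphone access.
      return {
        message: "Please allow microphone access in your browser settings, then tap Retry.",
        canRetry: true,
      };
    case "NotFoundError":
      // No microphone is attached to the device.
      return {
        message: "No microphone was found. Please plug one in or use another device.",
        canRetry: false,
      };
    default:
      // Transient or unknown failure: offer a retry.
      return {
        message: "Something went wrong with the microphone. Tap Retry to try again.",
        canRetry: true,
      };
  }
}
```

Centralizing this mapping keeps the UI simple: the check-in screen only needs to show `message` and conditionally render the retry button.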

Accomplishments We're Proud Of

Our model fallback system is a standout achievement: using Nexos.ai's gateway, EchoCare seamlessly switches from Claude 3.5 Sonnet to GPT-4o if the primary model fails, keeping conversations uninterrupted. That resilience is crucial for healthcare reliability. We're also proud of our health flag detection, which automatically highlights keywords like "pain" or "severe" in conversation logs, enabling proactive interventions. The accessibility features, including a text-size toggle and a pulsating mic animation, make EchoCare truly inclusive for elderly users.
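At its core, the keyword-based flagging can be sketched as a simple scan over the transcript. The keyword list and matching rules here are illustrative; EchoCare's actual detection may be richer.

```typescript
// Illustrative sketch of keyword-based health flag detection.
// The real keyword list and severity rules may differ.

const FLAG_KEYWORDS = ["severe", "pain", "dizzy", "missed medication"] as const;

function detectHealthFlags(transcript: string): string[] {
  const lower = transcript.toLowerCase();
  // Return every flag keyword that appears in the patient's utterance.
  return FLAG_KEYWORDS.filter((kw) => lower.includes(kw));
}
```

Any non-empty result would be attached to the conversation log entry, so providers can scan for flagged check-ins at a glance.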

What We Learned

This project taught us the critical importance of accessibility in MedTech—elderly users have unique needs that standard apps ignore. We learned that AI voice interfaces must prioritize empathy and brevity to build trust. Nexos.ai showed us the power of intelligent model routing for production AI systems, while ElevenLabs demonstrated how voice quality can transform user experience from functional to emotionally supportive.

What's Next for EchoCare

We're excited to expand EchoCare with vocal biomarker analysis to detect stress or pain through voice patterns, and integrate with hospital EMR systems for seamless data sharing. Future versions will include multi-language support and integration with wearable devices for comprehensive recovery tracking, ultimately scaling to become the standard for remote post-operative care worldwide.

Built With

Next.js 15, TypeScript, Tailwind CSS, shadcn/ui, Nexos.ai, Claude 3.5 Sonnet, GPT-4o, ElevenLabs, Web Speech API
