What if getting medical help were as simple as making a phone call?

Our team kept coming back to the same problem: people often delay care because it is confusing, inconvenient, or inaccessible. Many don't have access to apps or stable internet, and many more lack the knowledge to interpret their own medical reports. We wanted to build something that worked for everyone, regardless of tech literacy or device access.

That's how DoctorVoice was born: an AI-powered healthcare assistant that you can simply call. You speak, it listens; you describe your symptoms, it interprets them; you need a doctor, it schedules one. No apps, no forms, no waiting. Just a voice that understands and acts.

At first, we thought it would be easy: connect speech to text, feed it to a model, and return some health insights. But what we built became much more.

We taught DoctorVoice to read complex medical reports, understand lab results, and even flag long-term health risks. Using Gemini AI, VAPI, and ElevenLabs, we created a voice that could hold a natural, medically accurate conversation. The MCP server on Dedalous enabled real-world actions such as scheduling appointments and sending confirmation texts over Photon SMS.
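To make the "real-world actions" part concrete, here is a minimal sketch of what an MCP tool server for the scheduling and SMS steps could look like. This is not the team's actual code: the tool names, payloads, and stubbed confirmation logic are hypothetical, and a real deployment on Dedalous with Photon SMS would call those services instead of returning placeholder strings.

```python
# Hypothetical sketch of an MCP tool server exposing the two actions
# described above. Names and payloads are illustrative, not the real
# DoctorVoice implementation or the actual Dedalous/Photon APIs.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("doctorvoice-actions")

@mcp.tool()
def schedule_appointment(patient_name: str, phone: str, preferred_time: str) -> str:
    """Book an appointment slot and return a human-readable confirmation."""
    # A real deployment would call a clinic scheduling backend here;
    # this stub fabricates a confirmation reference for illustration only.
    confirmation_id = f"APPT-{abs(hash((patient_name, preferred_time))) % 10000:04d}"
    return f"Booked {preferred_time} for {patient_name} (ref {confirmation_id})."

@mcp.tool()
def send_confirmation_sms(phone: str, message: str) -> str:
    """Send a confirmation text to the patient (stubbed SMS gateway call)."""
    # A real implementation would hand this off to an SMS provider;
    # the stub only reports what would be sent.
    return f"Would send to {phone}: {message}"

if __name__ == "__main__":
    mcp.run()  # serve the tools over stdio so a voice agent can invoke them
```

The voice agent then only needs to decide when to call these tools; everything provider-specific stays behind the MCP boundary.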

There were tough moments: voice delays, unpredictable AI outputs, and parsing messy PDFs that looked like they were scanned in 2005. But every time we fixed one problem, we got closer to something that felt truly alive. The first time we called our system and heard it respond clearly, book an appointment, and send us a confirmation text, we realized we'd built more than just a hackathon project.

We'd built the future of accessible healthcare. DoctorVoice gives people control over their health using something they already have: their voice. No barriers, no apps. Just a conversation that may change a life.
