Inspiration

Access to basic healthcare shouldn’t depend on where you live, what language you speak, or whether you can read and type. Yet for over a billion people in underserved communities, those are exactly the barriers that prevent them from getting even the most basic medical guidance.

Most digital health tools assume users are literate, English-speaking, and have stable internet. That excludes millions of people who rely primarily on voice and local languages to communicate.

MediVoice AI was inspired by a simple question: What if anyone could simply speak their symptoms and get immediate, understandable guidance, with no typing and no barriers?

What it does

MediVoice AI is a voice-first, multilingual health triage web app that allows users to describe their symptoms naturally and receive instant AI-powered guidance.

Users:

- Select their language (Yoruba, Hindi, Swahili, Portuguese, or English)
- Tap a microphone button and speak their symptoms
- Receive a structured response including:
  - Severity level (low, moderate, urgent, emergency)
  - Likely condition
  - 3–4 recommended next steps

The response is displayed visually and can also be spoken back to the user in their own language.
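The structured response described above can be sketched as a small TypeScript type plus a validation step before rendering. The field names here are illustrative, not the app's actual schema:

```typescript
// Hypothetical shape of the structured triage response
// (field names are illustrative, not the app's actual schema).
interface TriageResult {
  severity: "low" | "moderate" | "urgent" | "emergency";
  likelyCondition: string;
  nextSteps: string[]; // 3–4 recommended actions
}

// Parse and minimally validate the model's JSON output before rendering it.
function parseTriage(raw: string): TriageResult {
  const data = JSON.parse(raw);
  const severities = ["low", "moderate", "urgent", "emergency"];
  if (!severities.includes(data.severity)) {
    throw new Error(`Unexpected severity: ${data.severity}`);
  }
  return data as TriageResult;
}
```

Validating the severity field up front means a malformed model reply fails loudly instead of rendering a confusing screen to the user.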

The entire experience is designed to work on low-end devices, slow networks, and without requiring sign-up or installation.

How we built it

MediVoice AI combines voice processing, real-time AI inference, and a lightweight web interface:

- Frontend: React + Vite + Tailwind CSS
- Voice Input: Web Speech API for real-time transcription
- AI Processing: Llama 3.3 70B running on Groq
- Prompt Engineering: Structured prompts to ensure consistent JSON output
- Voice Output: the Web Speech API's SpeechSynthesis interface for audio playback
- Deployment: Vercel

The system converts spoken input into text, sends it to the AI model, receives structured triage data, and renders it into a clear, visual interface.
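The core of that pipeline is the request built from the transcript and sent to Groq's OpenAI-compatible chat endpoint. A minimal sketch of the request builder is below; the model name, temperature, and prompt wording are assumptions, not the app's exact configuration:

```typescript
// Build the chat-completion request body sent to Groq
// (model name, temperature, and prompt wording are illustrative).
function buildTriageRequest(transcript: string, language: string) {
  return {
    model: "llama-3.3-70b-versatile",
    temperature: 0.2, // keep outputs stable so the JSON contract holds
    messages: [
      {
        role: "system",
        content:
          "You are a cautious health triage assistant, not a doctor. " +
          'Reply ONLY with JSON of the form {"severity": "low|moderate|urgent|emergency", ' +
          '"likelyCondition": string, "nextSteps": string[]}. Answer in ' +
          language + ".",
      },
      { role: "user", content: transcript },
    ],
  };
}
```

Keeping the system prompt and output contract in one place makes it easy to tune the JSON instructions without touching the UI code.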

Challenges we ran into

- Ensuring the AI consistently returned structured JSON instead of free-form text
- Handling API limitations and maintaining reliability under constraints
- Designing a voice-first experience that works for low-literacy users
- Optimizing performance for slow or unstable internet connections
- Balancing accuracy with responsible messaging (avoiding misinterpretation as medical diagnosis)
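One common mitigation for the first challenge, which we can sketch here, is to tolerate models that wrap their JSON in code fences or chatter: extract the outermost JSON object before parsing. This is an illustrative guard, not necessarily the app's exact implementation:

```typescript
// Extract the first-to-last JSON object from a model reply that may
// include code fences or surrounding chatter (illustrative guard).
function extractJson(modelOutput: string): any {
  const start = modelOutput.indexOf("{");
  const end = modelOutput.lastIndexOf("}");
  if (start === -1 || end === -1 || end < start) {
    throw new Error("No JSON object found in model output");
  }
  return JSON.parse(modelOutput.slice(start, end + 1));
}
```

Combined with a low temperature and an explicit "reply ONLY with JSON" instruction, a guard like this turns occasional formatting drift into a recoverable case instead of a crash.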

Accomplishments that we're proud of

- Built a fully functional voice-first health triage system end-to-end
- Successfully implemented multilingual support across five languages
- Achieved real-time AI responses using high-performance inference
- Designed a system that works without sign-up, installation, or data storage
- Created an interface accessible to users with low literacy and limited technical experience

What we learned

- How to design AI systems that are reliable and structured, not just intelligent
- The importance of prompt engineering for consistent outputs
- How to build for accessibility and real-world constraints, not ideal conditions
- The trade-offs between real-time AI capabilities and system stability
- How to simplify complex AI outputs into clear, actionable user experiences

What's next for MediVoice AI

- Expanding language support to include more regional dialects
- Adding offline capabilities using lightweight on-device models
- Integrating with local healthcare providers and emergency services
- Introducing SMS-based access for users without smartphones
- Improving triage accuracy with domain-specific fine-tuning
