Inspiration

The roots of Yuddhishtir lie in a deeply personal moment of uncertainty—a moment that made the abstract, often invisible barriers to healthcare access feel painfully real. My grandmother and I once found ourselves frantically trying to determine whether she was covered under the Ayushman Bharat scheme, India’s flagship health insurance program. There was no clear answer, no accessible interface, no sense of control. The fear of not knowing—not knowing if help would come in time or whether we were eligible—was paralyzing. That singular experience sparked the idea.

But it wasn’t just about us. It reflected a broader, systemic issue: the fear of the unknown that haunts millions in developing nations. Healthcare in many developing countries is not only hard to access—it’s also shrouded in stigma, bureaucracy, and technological opacity. The idea that one could fall ill and not know where to go, what to do, or even whether they are entitled to help—it’s a terrifying reality that often goes unnoticed by those in more privileged systems. With Yuddhishtir, I wanted to bring clarity to chaos.

What it does

Yuddhishtir is an AI-powered voice assistant designed to demystify healthcare access in developing nations, starting with India. At its core, it's a conversational bridge between citizens and the bureaucratic systems that govern medical aid—delivering answers with clarity, empathy, and cultural sensitivity. Here’s what Yuddhishtir can do:

  • Voice-First Eligibility Checks: Users can ask, “Am I eligible for Ayushman Bharat?” or “Is this hospital covered under the scheme?” in natural language—spoken or typed.
  • Semantic Search Over Government Programs: A robust RAG (Retrieval-Augmented Generation) pipeline parses user queries and pulls relevant information from structured datasets and unstructured policy documents.
  • Emotional Text-to-Speech Output: Through ElevenLabs, it speaks in a warm, expressive voice—bridging the emotional disconnect that often accompanies robotic systems.
  • Elder-Friendly UX: The design prioritizes legibility, ease of use, and verbal interaction—ideal for people with limited digital literacy.
  • Secure Query Handling: Sensitive queries are processed with minimal personal data logging, incorporating secure endpoints and privacy-respecting architecture.
  • Multilingual Scaffold (in progress): While currently optimized for English and Hindi, the architecture supports expansion into other Indian languages to reach broader demographics.
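To make the eligibility-check flow concrete, here is a minimal, framework-free sketch of the retrieval step. The scheme snippets are illustrative, and simple keyword overlap stands in for the embedding-based semantic search the real RAG pipeline uses:

```python
import re

# Illustrative policy snippets; the real system indexes government documents.
SCHEME_SNIPPETS = [
    "Ayushman Bharat PM-JAY covers families listed in the SECC 2011 database.",
    "Jan Aushadhi kendras sell generic medicines at reduced prices.",
    "E-Sanjeevani offers free tele-consultations with government doctors.",
]

def tokens(text: str) -> set[str]:
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (a stand-in for the
    embedding similarity used in the actual pipeline)."""
    q = tokens(query)
    ranked = sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)
    return ranked[:top_k]

print(retrieve("Am I covered under Ayushman Bharat?", SCHEME_SNIPPETS)[0])
# → "Ayushman Bharat PM-JAY covers families listed in the SECC 2011 database."
```

In production, the retrieved snippet is passed to the generation step, which phrases the answer conversationally before it reaches the TTS layer.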

How we built it

  • Frontend: Built with React + TailwindCSS, focusing on readability and intuitive layout. The design is inspired by interfaces like ElevenLabs—sleek, accessible, yet powerful.
  • Backend: Developed with FastAPI for speed and clarity, with endpoints designed to fetch and validate eligibility data.
  • Voice-to-AI Integration: Utilized Google Speech Recognition for real-time transcription, ensuring even voice-only users could access the service.
  • Expressive TTS: Integrated ElevenLabs API to provide emotionally nuanced responses, making the assistant feel warm and trustworthy.
  • RAG Pipeline: Combined semantic search with fine-tuned responses to answer queries like “Am I eligible for Ayushman Bharat?” with both accuracy and grace.
  • Deployment: The app is deployed via Streamlit, with the full codebase on GitHub—keeping the tool accessible, lightweight, and easy to iterate on.

Challenges we ran into

  • Government API access was inconsistent. Some endpoints were undocumented; others required crawling through PDF eligibility lists and building scrapers.
  • Voice input integration came with hurdles like empty audio frames, bad segmentation, and low-latency demands—all of which I tackled with careful debugging and speech preprocessing.
  • Creating emotionally resonant TTS meant going beyond robotic tones. Calibrating ElevenLabs’ voice models to express empathy in critical moments was both technically and creatively challenging.
  • Data accuracy and security had to be airtight, especially when dealing with sensitive health information. I implemented role-based access checks, encryption, and input validation.
  • Perhaps most complex of all was translating a deeply emotional, human need into clean, structured code—without losing its soul.
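The empty-frame problem above was handled with energy-based preprocessing before transcription. The sketch below shows one way to do that; the 16-bit mono PCM frame format and the silence threshold are assumptions for illustration:

```python
import struct

def rms(frame: bytes) -> float:
    """Root-mean-square energy of a 16-bit little-endian mono PCM frame."""
    usable = len(frame) // 2 * 2  # trim a trailing odd byte, if any
    if usable == 0:
        return 0.0
    samples = struct.unpack(f"<{usable // 2}h", frame[:usable])
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def keep_voiced(frames: list[bytes], threshold: float = 500.0) -> list[bytes]:
    """Drop empty or near-silent frames before they reach the recognizer,
    so transcription never chokes on zero-length or dead-air input."""
    return [f for f in frames if rms(f) > threshold]
```

Filtering like this also helps with segmentation: runs of silent frames mark natural pause boundaries between utterances.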

Accomplishments that we're proud of

  • Real-World Origin: The entire system is based on a real-life experience, which kept the design grounded and purpose-driven.
  • End-to-End Pipeline Integration: From voice input to meaningful medical output, Yuddhishtir is a working product—not a mockup. We connected APIs, STT, TTS, and semantic search into one fluid loop.
  • Production-Grade UI/UX: The frontend isn’t just functional—it’s elegant, minimal, and inspired by best-in-class design practices, reflecting compassion through interface.
  • Conversational Memory: Implemented ephemeral memory to maintain a session-level awareness for fluid, multi-step conversations.
  • Expressive Voice AI: Dialing in ElevenLabs to respond with tone and emotion took fine-tuning—and it paid off in making the AI feel trustworthy and human.
  • Deployable, Scalable Build: Hosted on Streamlit and GitHub, it’s already live and shareable, with clear avenues for improvement and iteration.
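The ephemeral conversational memory can be sketched as follows. The capped deque is an assumed implementation detail for this sketch; the key property is that nothing is persisted, so the history evaporates with the session:

```python
from collections import deque

class SessionMemory:
    """Session-scoped conversation history, held in RAM only.

    Ephemeral by construction: turns live in a bounded deque and are
    never written to disk (the bound itself is an assumption here).
    """

    def __init__(self, max_turns: int = 5):
        self.turns: deque[tuple[str, str]] = deque(maxlen=max_turns)

    def add(self, user: str, assistant: str) -> None:
        """Record one exchange; the oldest turn falls off past max_turns."""
        self.turns.append((user, assistant))

    def context(self) -> str:
        """Flatten recent turns into a prompt prefix for the next model call."""
        return "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.turns)
```

Bounding the history keeps prompts short and doubles as a privacy measure: only the last few turns of a sensitive health conversation ever exist at once.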

What we learned

Building Yuddhishtir revealed just how wide the gap is between policy and practicality in real-world healthcare systems. Some key takeaways:

  • Empathy in design is not optional—interfaces must serve users with limited digital fluency, especially elders and underprivileged groups.
  • Government data is rarely developer-friendly. Navigating APIs with inconsistent documentation made me more resilient and resourceful.
  • I also deepened my understanding of healthcare accessibility challenges in India, including the digital divide, misinformation, and fear of navigating institutional systems.
  • Perhaps most importantly, I learned that technology alone doesn't solve problems—clarity, compassion, and context do.

What's next for Yuddhishtir

The project’s mission isn’t complete. If anything, this is just the first chapter:

  • Expanded Scheme Support: Extend the assistant to verify and guide users through other government health and welfare programs (e.g. PMJAY, E-Sanjeevani, Jan Aushadhi).
  • Mobile App Integration: Create a low-bandwidth Android-first app with offline mode and WhatsApp chatbot compatibility.
  • Multilingual Voice Layer: Introduce support for regional languages like Bengali, Tamil, Marathi, and Kannada—with dialect-specific tuning for both input and TTS.
  • OCR-Based Document Analysis: Scan ration cards, Aadhaar cards, or Ayushman cards and return real-time status through AI document parsing.
  • NGO and Clinic Partnership Pilot: Test usability in real-world scenarios by deploying kiosks in local clinics or rural areas, in collaboration with public health organizations.

Built With

  • React + TailwindCSS
  • FastAPI
  • Google Speech Recognition
  • ElevenLabs
  • Streamlit