Inspiration

Millions of people rely on the internet to self-diagnose when facing unfamiliar symptoms, often resulting in misinformation, anxiety, or delayed care. Existing tools tend to be too generic, hard to understand, or not multilingual. Our goal was to build a system that feels like a personal doctor—available 24/7, capable of understanding free-form symptoms in natural language, and suggesting actionable next steps.

We envisioned a solution that could intelligently interpret symptoms using a Large Language Model (LLM), factor in medical reasoning, and communicate in multiple languages through voice or text. This became the foundation of MediCore AI’s Symptom Checker.


What it does

MediCore AI’s Symptom Checker lets users describe their symptoms naturally—no forms, no dropdowns. Users simply type or speak something like "I have a burning throat and mild fever", and the LLM interprets it.

Key Features:

  • Symptom Interpretation: Maps symptoms to medical terms and body systems using LLM + medical ontologies.
  • Probable Conditions: Suggests possible diagnoses (e.g., strep throat, sinus infection) with explanations.
  • Urgency Classification: Evaluates risk levels (Mild, Moderate, Critical) using contextual reasoning.
  • Multilingual Support: Understands inputs and provides outputs in English, Hindi, Spanish, and French.
  • Voice Interaction: Users can speak symptoms, and the system responds with voice feedback.

Other Capabilities:

  • AI triage for emergency detection
  • Report upload and test value interpretation
  • Family health tracking and medication reminders
  • Appointment booking with smart validations

How we built it

We used a hybrid architecture combining frontend UX, backend APIs, and an LLM-powered reasoning engine.

Frontend:

  • Next.js 14 (App Router) for routing and performance
  • React + Tailwind CSS for responsive UI
  • Web Speech API for multilingual speech-to-text and text-to-speech
  • Radix UI for accessibility and dialog components
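The post doesn't show how the Web Speech API is wired up; as a minimal sketch, the app language can be mapped to the BCP-47 locale tags that `SpeechRecognition.lang` and `SpeechSynthesisUtterance.lang` expect (the four languages come from the post, but the specific tag choices are assumptions):

```typescript
// Maps the app's supported language codes to BCP-47 tags for the
// Web Speech API. The regional variants chosen here (e.g. en-US vs.
// en-IN) are illustrative assumptions, not the project's actual config.
const SPEECH_LOCALES: Record<string, string> = {
  en: "en-US",
  hi: "hi-IN",
  es: "es-ES",
  fr: "fr-FR",
};

function speechLocale(appLang: string): string {
  // Fall back to English if an unsupported language slips through.
  return SPEECH_LOCALES[appLang] ?? "en-US";
}

// In the browser this would configure recognition and synthesis, e.g.:
//   recognition.lang = speechLocale(currentLang);
//   utterance.lang = speechLocale(currentLang);
```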

Backend:

  • Google Gemini LLM used with system prompts tailored for triage, condition prediction, and urgency analysis
  • Custom symptom-condition mapping layer using curated medical datasets (e.g., Symptoma API, public disease graphs)
  • Risk classification model that blends rule-based logic with LLM reasoning
  • APIs: /chat, /triage-assessment, /analyze-test-results, and more
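As an illustration of the rule-plus-LLM blend, a triage function might short-circuit on red-flag symptoms and otherwise bucket an LLM-provided severity score. The red-flag list and thresholds below are placeholders, not the production rules:

```typescript
type Urgency = "Mild" | "Moderate" | "Critical";

// Red-flag symptoms that force an emergency classification regardless
// of the LLM score. This list is illustrative, not the project's actual set.
const RED_FLAGS = new Set([
  "chest pain",
  "difficulty breathing",
  "severe bleeding",
]);

function classifyUrgency(symptoms: string[], llmSeverity: number): Urgency {
  // Rule-based layer: any red flag short-circuits to Critical.
  if (symptoms.some((s) => RED_FLAGS.has(s.toLowerCase()))) return "Critical";
  // LLM layer: bucket the model's 0..1 severity estimate.
  // The 0.4 / 0.7 thresholds are assumptions for the sketch.
  if (llmSeverity >= 0.7) return "Critical";
  if (llmSeverity >= 0.4) return "Moderate";
  return "Mild";
}
```

Keeping the red-flag check outside the LLM means a hallucinated low score can never downgrade a genuine emergency.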

Symptom Checker Workflow:

  1. Input (text or voice) is normalized and passed to the LLM with few-shot examples.
  2. The LLM extracts symptoms and maps them to body systems and possible diseases.
  3. A severity score is computed, and the LLM adjusts its response tone (urgent vs. calm).
  4. The output is streamed back to the client for fast feedback.
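The first step, normalization plus few-shot prompt assembly, could be sketched roughly like this; the example pairs and prompt wording are illustrative stand-ins for the real prompts:

```typescript
// Toy few-shot examples pairing colloquial input with extracted
// symptom terms. The production prompts and examples differ.
const FEW_SHOT = [
  { input: "my throat burns and i feel hot", symptoms: ["sore throat", "fever"] },
  { input: "ache behind my eyes", symptoms: ["headache"] },
];

// Collapse whitespace and casing before the text reaches the LLM.
function normalize(text: string): string {
  return text.trim().toLowerCase().replace(/\s+/g, " ");
}

// Assemble the few-shot prompt sent to the model.
function buildPrompt(userText: string): string {
  const examples = FEW_SHOT
    .map((ex) => `Input: ${ex.input}\nSymptoms: ${ex.symptoms.join(", ")}`)
    .join("\n\n");
  return (
    "Extract medical symptoms from the user's description.\n\n" +
    `${examples}\n\nInput: ${normalize(userText)}\nSymptoms:`
  );
}
```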

Challenges we ran into

  • LLM hallucinations: At times, LLMs suggested rare or unrelated conditions. We added a validation layer using symptom-condition pairs.
  • Multilingual Input Parsing: Mapping colloquial phrases to medical entities required normalization and custom translation memory.
  • Speed vs. Intelligence: Streaming responses required optimizing prompt size without compromising reasoning quality.
  • Risk Classification: Designing a consistent triage scale that balances safety and clarity was tricky.
  • Voice Interruptions: Handling interruptions and keeping real-time voice interaction accurate across different languages and accents.
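A minimal version of the hallucination guard described above might filter LLM-suggested conditions against known symptom-condition pairs; the mapping here is a toy stand-in for the curated datasets:

```typescript
// Toy symptom-to-condition mapping. In the real system this would be
// backed by the curated medical datasets mentioned in the build section.
const SYMPTOM_CONDITIONS: Record<string, string[]> = {
  "sore throat": ["strep throat", "common cold"],
  fever: ["strep throat", "flu", "sinus infection"],
};

// Keep only LLM suggestions that have a known link to at least one
// reported symptom, dropping hallucinated or unrelated conditions.
function validateConditions(symptoms: string[], suggested: string[]): string[] {
  const allowed = new Set(
    symptoms.flatMap((s) => SYMPTOM_CONDITIONS[s.toLowerCase()] ?? []),
  );
  return suggested.filter((c) => allowed.has(c));
}
```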

Accomplishments that we're proud of

  • Built a context-aware, multilingual symptom checker powered by LLMs and medical data.
  • Enabled real-time voice interaction for a hands-free medical experience.
  • Delivered offline PWA capabilities for usage in rural or low-connectivity areas.
  • Integrated secure local health records with export and sharing features.
  • Created a modular system where more LLM-based medical flows (like prescriptions or diagnosis summaries) can be added.

What we learned

  • The importance of prompt design and structured memory when using LLMs in medical domains.
  • How to balance free-form user input with structured backend logic for reliable outputs.
  • How to combine LLMs with external medical knowledge graphs to ground responses.
  • Building for multilingual, voice-first interfaces requires constant real-world testing.
  • Best practices in triage UX design, trust-building in AI interfaces, and communicating health data transparently.
  • Realized the limits of AI in healthcare—and how to design disclaimers and guidance responsibly.

What's next for MediCore AI

  • Fine-tune a domain-specific LLM on real clinical notes and medical Q&A corpora.
  • Add image-based diagnosis (e.g., detect rashes or wounds from photos).
  • Introduce reasoning transparency, allowing users to see how and why the AI made a certain suggestion.
  • Expand to more regional Indian languages like Telugu, Tamil, and Bengali.
  • Integrate with EHR systems for physician-facing dashboards.
  • Enable live escalation to human doctors for urgent cases with AI-generated summaries.
  • Publish an open API for developers to embed the symptom checker into clinics and health apps.
