Inspiration

Patients, caregivers, and even front-line staff keep getting healthcare documents they don’t fully understand — not just discharge notes, but EOBs, prior-auth letters, billing statements, prescriptions, and lab reports. Generic AI chatbots can explain these, but they don’t verify that the person actually understood. We wanted to turn the clinical “teach-back” method into an AI experience that works across clinical, pharmacy, and insurance documents, is installable (PWA), and doesn’t store PHI.

What it does

  • Detects the type of document (prescription, EOB, prior auth, discharge, lab) and falls back to General if it’s something else.
  • Simplifies the text to a 6th–8th grade reading level using Gemini with Structured Output.
  • Creates a 3–5 question quiz tied to the exact content in the document.
  • If the user gets something wrong, it reteaches just that part and lets them retry.
  • Shows a context-specific panel (e.g. EOB math, prescription dose/timing, discharge follow-ups).
  • Has three modes: Teach-Back, Chat Helper, and Live Q&A (voice).
  • Runs as a PWA, so it can be installed and the tutorial still works offline.
  • Tracks only local, anonymous metrics (unknown docs, user overrides, mastery count).
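The detect → simplify → quiz → reteach flow above hinges on one structured model response. A minimal sketch of what that Structured Output schema could look like — the field names and shape here are our illustration, not the app's actual schema:

```typescript
// Hypothetical response schema for Gemini Structured Output (names are
// illustrative). One JSON object carries the detected document type, the
// simplified rewrite, and the quiz, so the UI can render deterministically.
const teachBackSchema = {
  type: "OBJECT",
  properties: {
    docType: {
      type: "STRING",
      // "unknown" is the explicit out-of-scope path; the UI maps it to General.
      enum: ["prescription", "eob", "prior_auth", "discharge", "lab", "unknown"],
    },
    simplifiedText: { type: "STRING" }, // 6th–8th grade rewrite
    quiz: {
      type: "ARRAY",
      minItems: 3,
      maxItems: 5,
      items: {
        type: "OBJECT",
        properties: {
          question: { type: "STRING" },
          choices: { type: "ARRAY", items: { type: "STRING" } },
          answerIndex: { type: "INTEGER" },
          reteach: { type: "STRING" }, // shown only after a wrong answer
        },
        required: ["question", "choices", "answerIndex", "reteach"],
      },
    },
  },
  required: ["docType", "simplifiedText", "quiz"],
} as const;
```

Pinning the quiz to 3–5 items and requiring a `reteach` string per question is what lets a wrong answer trigger a targeted re-explanation instead of a generic retry.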

How we built it

  • Google AI Studio (Gemini) for classification + generation with Structured Output.
  • React + TypeScript + Vite for a fast, component-based UI.
  • PWA using manifest + service worker (vite-plugin-pwa) for installability.
  • LocalStorage for privacy-friendly telemetry (no PHI stored).
  • Accessibility-first layout: help modal, disclaimer modal, keyboard-friendly controls.
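As a sketch of the telemetry point above (the counter names and storage key are our assumptions, not the app's real ones): everything persisted is an anonymous counter, never document text.

```typescript
// Hypothetical telemetry shape: anonymous counters only, no PHI.
type Metrics = {
  unknownDocs: number;   // documents the model couldn't classify
  userOverrides: number; // times the user corrected the detected type
  masteryCount: number;  // quizzes completed to mastery
};

const DEFAULTS: Metrics = { unknownDocs: 0, userOverrides: 0, masteryCount: 0 };

// Parse whatever is in storage, tolerating missing keys and corrupt JSON.
function loadMetrics(raw: string | null): Metrics {
  try {
    return raw ? { ...DEFAULTS, ...JSON.parse(raw) } : { ...DEFAULTS };
  } catch {
    return { ...DEFAULTS };
  }
}

function bump(metrics: Metrics, key: keyof Metrics): Metrics {
  return { ...metrics, [key]: metrics[key] + 1 };
}

// In the browser this would round-trip through one localStorage key, e.g.:
// localStorage.setItem("metrics", JSON.stringify(
//   bump(loadMetrics(localStorage.getItem("metrics")), "masteryCount")));
```

Keeping the functions pure (storage I/O only at the edges) makes the telemetry trivial to test and easy to audit for PHI leakage.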

Challenges we ran into

  • Preventing the model from forcing a wrong category when the document is out of scope → we added an explicit unknown path and a user override.
  • Keeping the JSON strict so the UI can render the right domain card every time.
  • Balancing PWA caching (app shell, tutorial) against never caching potentially sensitive API responses.
  • Designing one flow that works for clinical docs and payer/insurance docs.
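The caching trade-off above can be expressed directly in the vite-plugin-pwa config. A sketch under our assumptions (the API host pattern is a guess at where the model calls go; the real config may differ):

```typescript
// vite.config.ts — hypothetical sketch: precache the app shell and
// tutorial for offline use, but route model API calls NetworkOnly so
// potentially sensitive responses are never written to the cache.
import { defineConfig } from "vite";
import { VitePWA } from "vite-plugin-pwa";

export default defineConfig({
  plugins: [
    VitePWA({
      registerType: "autoUpdate",
      workbox: {
        // Static assets (including the tutorial) are precached.
        globPatterns: ["**/*.{js,css,html,svg,png}"],
        runtimeCaching: [
          {
            // Assumed Gemini API host: always fetch fresh, never cache.
            urlPattern: /^https:\/\/generativelanguage\.googleapis\.com\//,
            handler: "NetworkOnly",
          },
        ],
      },
    }),
  ],
});
```

The key design choice is that the service worker's default behavior (cache everything it can) is opted out of for exactly one route class, rather than disabling offline support globally.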

Accomplishments that we're proud of

  • One app now handles clinical, pharmacy, and insurance documents in a single UI.
  • We don’t just explain — we measure comprehension (attempts + time-to-mastery).
  • Voice/Live mode makes it usable for low-literacy or vision-limited users.
  • Fully browser-based and deployable from Google AI Studio → good for demos, pilots, and hackathons.
  • We can show when the model was uncertain (unknown + override counters).

What we learned

  • The real differentiator over generic chatbots is verification (teach-back), not simplification.
  • A tiny “we’re not sure” banner + user override prevents most misclassifications.
  • Structured Output is the key to adding more health document types later.
  • PWA + AI Studio is enough to ship something real without standing up a backend.
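The unknown-path-plus-override lesson reduces to surprisingly little code. A sketch with illustrative names (`resolveDocType` and its types are ours, not the app's):

```typescript
// Document families the app knows how to render a domain card for.
const KNOWN_TYPES = ["prescription", "eob", "prior_auth", "discharge", "lab"] as const;
type DocType = (typeof KNOWN_TYPES)[number] | "general";

// Map the model's label to a known type; anything else falls back to
// "general" and raises the "we're not sure" banner instead of forcing
// the document into a wrong category.
function resolveDocType(
  modelLabel: string,
  userOverride?: DocType
): { docType: DocType; uncertain: boolean } {
  // A user override always wins over the model's guess.
  if (userOverride) return { docType: userOverride, uncertain: false };
  const label = modelLabel.trim().toLowerCase();
  return (KNOWN_TYPES as readonly string[]).includes(label)
    ? { docType: label as DocType, uncertain: false }
    : { docType: "general", uncertain: true };
}
```

Because the `uncertain` flag is computed here and nowhere else, the banner, the unknown-docs counter, and the override control all key off one decision point.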

What's next for Teach Engine

  • Add more document families (imaging, consent, therapy homework, community-health flyers).
  • Export to FHIR-safe summaries for EHR/payer portals.
  • Run a small usability study and publish it as a perspective/short paper.
  • Add a provider/admin view to surface common patient misunderstandings by document type.

Built With

  • accessibility
  • ai/ml
  • cloud-run
  • css3
  • gemini-2.5-flash
  • gemini-live-api
  • google-ai-studio
  • healthcare
  • html5
  • javascript
  • localstorage
  • pwa
  • react
  • tailwind-css
  • typescript
  • vite
  • vite-plugin-pwa
  • voice-ai
  • workbox