## What It Does
MediClarify lets users upload a photo or PDF of any medical test report and instantly receive:
- Plain-language explanations of each biomarker — what it measures, what the result means, and when to be concerned
- Color-coded status indicators (Normal / Borderline / Out of Range) for quick at-a-glance review
- Trend tracking across multiple reports over time, with charts that show whether key values are improving, stable, or worsening
- Shareable summaries users can save and bring to their next doctor's appointment
The AI never diagnoses — it educates. Every explanation is framed as general health information, with clear prompts to consult a healthcare provider for personalized advice.
## How We Built It
MediClarify is built on a modern full-stack architecture:
| Layer | Technology |
|---|---|
| Frontend | Next.js 14 (App Router), TypeScript, Tailwind CSS |
| AI Engine | Claude (Anthropic) via the Claude API |
| Storage | Local storage with optional cloud sync |
| Deployment | Vercel |
The core pipeline:
- Upload — User uploads a photo or PDF of their medical report
- Extraction — Claude extracts biomarker names, values, units, and reference ranges from unstructured text
- Explanation — For each result, the AI generates a plain-language explanation tailored for a general adult audience
- Trend Analysis — Multiple reports are compared over time; a linear regression slope $\beta_1$ is computed per biomarker to surface meaningful changes:
$$\hat{y} = \beta_0 + \beta_1 x$$
where $x$ is the normalized timestamp. A positive $\beta_1$ signals a rising trend; negative signals a decline.
- Report — A clean, structured summary is generated for the user to review, save, or share
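The trend-analysis step above reduces to a least-squares slope per biomarker. A minimal sketch (the function name and input shape are illustrative, not MediClarify's production code):

```typescript
// Least-squares slope (beta_1) of y = beta_0 + beta_1 * x for one biomarker.
// Each point is [normalizedTimestamp, value].
function trendSlope(points: [number, number][]): number {
  const n = points.length;
  if (n < 2) return 0; // a single reading has no trend

  const meanX = points.reduce((s, [x]) => s + x, 0) / n;
  const meanY = points.reduce((s, [, y]) => s + y, 0) / n;

  let num = 0; // covariance numerator
  let den = 0; // variance of x
  for (const [x, y] of points) {
    num += (x - meanX) * (y - meanY);
    den += (x - meanX) ** 2;
  }
  // beta_1 > 0: rising trend; beta_1 < 0: declining trend
  return den === 0 ? 0 : num / den;
}
```

For example, readings of 100, 110, 120 at timestamps 0, 1, 2 yield a slope of +10 per unit time.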
## Challenges We Ran Into
**Prompt reliability for structured extraction.** Medical documents vary wildly in format — printed, handwritten, scanned, photographed. Getting Claude to consistently output valid structured JSON from all these variations required extensive prompt iteration, explicit output schemas, and runtime validation before rendering.
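The runtime-validation layer can be sketched as follows. The field names (`name`, `value`, `unit`, `refLow`, `refHigh`) are illustrative assumptions about the output schema, not the exact one we prompt for:

```typescript
// Validate model output before rendering: parse defensively, then keep
// only entries that match the expected shape. Malformed input fails closed.
interface Biomarker {
  name: string;
  value: number;
  unit: string;
  refLow: number;
  refHigh: number;
}

function parseBiomarkers(raw: string): Biomarker[] {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return []; // invalid JSON: render nothing rather than garbage
  }
  if (!Array.isArray(data)) return [];

  return data.filter((item): item is Biomarker => {
    const b = item as Partial<Biomarker>;
    return (
      typeof b.name === "string" &&
      typeof b.value === "number" && Number.isFinite(b.value) &&
      typeof b.unit === "string" &&
      typeof b.refLow === "number" &&
      typeof b.refHigh === "number" &&
      b.refLow <= b.refHigh // a reversed range indicates an extraction error
    );
  });
}
```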
**Ambiguous reference ranges.** "Normal" ranges differ by lab, age, sex, and clinical context. A fasting glucose reading is interpreted differently from a post-meal one. We added context fields to the upload flow (age group, sex, fasting state) so the AI prompt can tailor its interpretation accordingly.
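Feeding those context fields into the prompt might look like this. The field names and prompt wording are hypothetical, hedged versions of the real flow:

```typescript
// Hypothetical sketch: fold user-supplied context into the explanation prompt
// so the model interprets the reading in the right clinical setting.
interface ReportContext {
  ageGroup: "child" | "adult" | "senior";
  sex: "female" | "male" | "unspecified";
  fasting: boolean;
}

function buildExplanationPrompt(
  biomarker: string,
  value: number,
  unit: string,
  ctx: ReportContext
): string {
  return [
    "Explain this lab result in plain language for a general adult audience.",
    `Biomarker: ${biomarker}, value: ${value} ${unit}.`,
    `Context: ${ctx.ageGroup}, sex ${ctx.sex}, ${ctx.fasting ? "fasting" : "non-fasting"} sample.`,
    "Do not diagnose. Recommend consulting a healthcare provider for any concerns.",
  ].join("\n");
}
```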
**Safety guardrails.** Defining what the AI should not say was harder than defining what it should. We built a safety layer that detects significantly out-of-range values and surfaces an urgent callout: "This result is significantly outside the normal range. Please contact your healthcare provider promptly." Getting this to trigger reliably — without over-triggering on minor deviations — required careful threshold tuning.
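One way to frame that threshold tuning: flag a value as urgent only when its overshoot is large relative to the width of the reference range. A sketch, where the 0.5 multiplier is an illustrative assumption rather than our tuned value:

```typescript
// Classify a reading relative to its reference range. Values inside the
// range are normal; values outside it are urgent only when the overshoot
// exceeds half the range width (illustrative threshold), else borderline.
type Status = "normal" | "borderline" | "urgent";

function classify(value: number, refLow: number, refHigh: number): Status {
  if (value >= refLow && value <= refHigh) return "normal";

  const width = refHigh - refLow;
  const overshoot = value < refLow ? refLow - value : value - refHigh;
  return overshoot > 0.5 * width ? "urgent" : "borderline";
}
```

Scaling the threshold by range width keeps the callout from over-triggering on biomarkers with wide normal ranges while still firing on genuinely extreme values.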
**Mobile image quality.** Many users photograph reports on their phones under poor lighting. We added preprocessing hints to the AI prompt to account for OCR uncertainty and partial text visibility.
## Accomplishments That We're Proud Of
- Built a working end-to-end pipeline from raw medical document to plain-language insight in under 48 hours
- Designed a calm, accessible UI that reduces health anxiety rather than amplifying it
- Achieved consistent structured JSON extraction from widely varying real-world medical report formats
- Implemented a responsible AI approach that keeps the tool educational without crossing into diagnosis territory
- Received positive feedback from early testers who said they finally understood their own lab results
## What We Learned
- Medical AI demands careful language. The line between "informing" and "implying a diagnosis" is thin. Every word in the output matters.
- UX is a health intervention. A well-designed interface that is calm, clear, and structured can meaningfully reduce patient anxiety.
- Structured output from LLMs requires defensive engineering. We learned to validate, sanitize, and gracefully handle malformed AI responses rather than assuming perfect output.
- Privacy shapes architecture. Designing with privacy-first principles from day one (transient processing, no default storage of raw health data) changed many of our technical decisions in ways we hadn't anticipated.
## What's Next for MediClarify – AI Health Assistant
- Multi-language support — Medical results shouldn't only be understandable in English
- Doctor sharing — Generate a structured PDF summary optimized for clinical handoffs
- Wearable integration — Connect with Apple Health and Google Fit for continuous biomarker monitoring
- Personalized baselines — Learn each user's individual "normal" over time, rather than relying solely on population-level reference ranges
- Specialist context — Tailor explanations based on known conditions (e.g., diabetic context for glucose readings, cardiac context for lipid panels)
MediClarify is an educational tool and does not provide medical advice. Always consult a qualified healthcare provider regarding your health.