A stack of lab reports at home, and in the supplement aisle we were completely lost, unsure which products were right for us given everything in those results. The information existed. It just wasn't there when we needed it. That gap, between having a medical report and being able to act on it in a real moment, is what HealthLens is built to close.
We built a two-stage AI pipeline. A vision model reads whatever you scan — a nutritional label, a grocery list — and extracts it as structured data. That gets cross-referenced against your living health profile, built from your own lab reports and doctor's notes, and Claude reasons over both to tell you exactly what helps your deficiencies, what hurts them, and how much you should consume. In family mode, it handles everyone at once and flags conflicts.
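The second stage, cross-referencing extracted items against a health profile, can be sketched roughly like this. The data shapes and the helps/hurts rule are illustrative assumptions, not the actual HealthLens schema:

```python
from dataclasses import dataclass

# Hypothetical shapes for the pipeline stages; field names are illustrative.

@dataclass
class ScannedItem:
    name: str
    nutrients: dict[str, float]  # nutrient -> amount per serving


@dataclass
class HealthProfile:
    deficiencies: set[str]  # nutrients the lab reports flagged as low
    avoid: set[str]         # nutrients a doctor's note says to limit


def assess(item: ScannedItem, profile: HealthProfile) -> dict[str, str]:
    """Stage 2: flag each extracted nutrient against the profile."""
    verdicts = {}
    for nutrient, amount in item.nutrients.items():
        if nutrient in profile.avoid and amount > 0:
            verdicts[nutrient] = "hurts"
        elif nutrient in profile.deficiencies and amount > 0:
            verdicts[nutrient] = "helps"
        else:
            verdicts[nutrient] = "neutral"
    return verdicts


profile = HealthProfile(deficiencies={"iron", "vitamin_d"}, avoid={"sodium"})
item = ScannedItem("fortified cereal", {"iron": 8.0, "sodium": 210.0, "sugar": 12.0})
print(assess(item, profile))
```

Family mode would amount to running this per profile and surfacing nutrients where one member's "helps" is another's "hurts".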
The hardest part wasn't the code — it was prompt reliability. Getting LLMs to return consistent, structured JSON across wildly different inputs required far more iteration than we expected, and building the stripping and validation layer between every LLM call taught us that prompt engineering for structured output is a discipline in itself.
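A minimal sketch of that stripping-and-validation layer, assuming the model sometimes wraps its JSON in markdown fences or surrounds it with prose (the required keys here are hypothetical, not the real schema):

```python
import json
import re


def extract_json(raw: str) -> dict:
    """Strip fences and surrounding prose, then parse and validate."""
    # Models often wrap JSON in ```json ... ``` fences; unwrap them first.
    fenced = re.search(r"```(?:json)?\s*(\{.*\})\s*```", raw, re.DOTALL)
    text = fenced.group(1) if fenced else raw
    # Fall back to the outermost braces if extra prose surrounds the object.
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    data = json.loads(text[start : end + 1])
    # Validate before anything downstream consumes it (keys are illustrative).
    for key in ("name", "nutrients"):
        if key not in data:
            raise ValueError(f"missing required key: {key}")
    return data


raw = 'Sure, here it is:\n```json\n{"name": "granola", "nutrients": {"iron": 2.0}}\n```'
print(extract_json(raw)["name"])  # granola
```

In practice a layer like this sits after every LLM call, and a validation failure triggers a retry with the error fed back into the prompt.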