Inspiration

One of our cousins is a doctor. She also has endometriosis. It took her four years to get diagnosed, and she spent most of those years in the UK healthcare system, describing the same symptoms at appointment after appointment. Severe pain that stopped her from going to work. Pain that came back mid-cycle, not just during her period. She was told it was normal. She was told to try ibuprofen and come back if it got worse. It wasn't until she pushed for a referral herself, armed with her own notes and a lot of frustration, that anything moved.

The thing is, she knew what she was looking at. She's a doctor. And it still took four years.

She's not an outlier. The average diagnostic delay for endometriosis in Canada is 5 to 10 years from symptom onset, and for most of those years the patient has already been seeing doctors. The problem isn't always a lack of medical knowledge. It's that patients don't walk into those appointments with organized, longitudinal documentation. Pain that feels undeniable in the moment becomes vague in a five-minute GP visit: How long has this been happening? How bad is it, really? Does it actually affect your daily life? Without clear answers backed by months of data, the clinical criteria that should trigger investigation never get triggered.

ACOG's Clinical Practice Guideline No. 11, published March 2026, names clinician dismissal and symptom misattribution as documented drivers of that delay. It also establishes that a clinical diagnosis from symptoms alone is sufficient. Laparoscopy is no longer required first, and a normal ultrasound does not rule out endometriosis. The evidence exists. The gap is in the room.

We built Flare to close that gap.


What it does

Flare has two flows.

Daily journaling. When a user is in pain, she describes how she's feeling in plain language or dictates via voice. An AI reads the entry and asks one targeted follow-up question: Did this affect your ability to work or go to school? Is this mid-cycle pain something that's happened before? Every entry is stored with its cycle day and severity score so patterns build across months, not just days. After saving, the app immediately checks whether the new entry contributes to a recurring clinical pattern and flags it if so. No LLM call, no wait.
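A minimal sketch of what that instant post-save check might look like (all names here are illustrative, not Flare's actual code). Each entry is indexed locally with its cycle day and severity, and a pure function scans the index for a recurring signal such as severe mid-cycle pain appearing in more than one cycle:

```typescript
// Illustrative entry shape for the local index (not Flare's actual schema).
interface EntryIndexItem {
  id: string;
  cycleDay: number;       // day within the menstrual cycle, 1-based
  severity: number;       // self-reported 0-10 pain score
  missedActivity: boolean;
  savedAt: string;        // ISO date, e.g. "2026-01-15"
}

// Hypothetical recurring-pattern check: flag when severe pain (>= 7/10)
// lands mid-cycle (here, days 12-18) in at least two distinct months,
// using the month as a crude proxy for distinct cycles.
function flagsRecurringMidCyclePain(index: EntryIndexItem[]): boolean {
  const midCycleSevere = index.filter(
    (e) => e.severity >= 7 && e.cycleDay >= 12 && e.cycleDay <= 18
  );
  const distinctMonths = new Set(midCycleSevere.map((e) => e.savedAt.slice(0, 7)));
  return distinctMonths.size >= 2;
}
```

Because the check is a synchronous scan over a small in-memory index, it can run on every save with no perceptible delay.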

Appointment prep. Before a GP visit, Flare analyzes the full symptom history, retrieves semantically relevant entries and matching clinical guideline text, and generates two things: a formatted GP brief ready to hand to a doctor, and advocate scripts for the most likely dismissal scenarios the patient will face.

Each script has three parts: what to understand before walking in, what to actually say out loud (one calm sentence a real person can deliver while sitting in a doctor's office feeling dismissed and anxious), and what to ask for if she's still dismissed: a referral, a second opinion, a note in her chart.

The app never diagnoses. Every pattern flag says the same thing: this is worth discussing with your doctor.


How we built it

Flare runs in Expo (React Native) on iOS and Android without native builds.

Storage is split into two layers. A lightweight local index in AsyncStorage holds just enough data to render the home screen and run pattern detection instantly, no API call needed. Full entry text goes to Moorcheh, which handles semantic embeddings automatically across two namespaces: flare-health-entries for patient symptom logs, and flare-health-guidelines for excerpts from SOGC and ACOG CPG No. 11. This separation matters. When the GP brief pipeline runs, it retrieves semantically relevant patient entries alongside matching clinical guideline text before the LLM sees anything. The output is grounded in both the patient's actual data and the clinical standards her doctor is supposed to follow.

The LLM pipeline runs in two turns within a single conversation. Turn 1 analyzes the retrieved entries and identifies cross-cycle patterns. Turn 2, using the same context window, generates the GP brief and advocate scripts. Because it's one continuous conversation, the model that found the patterns is the same one writing the brief. No context loss, no stitching.
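The two-turn flow can be sketched like this (prompt wording and function names are illustrative): the message history from turn 1, including the model's own pattern analysis, is reused verbatim as the context for turn 2.

```typescript
type Msg = { role: "system" | "user" | "assistant"; content: string };
type ChatFn = (messages: Msg[]) => Promise<string>; // injected LLM call

async function generateBrief(chat: ChatFn, retrievedContext: string) {
  const messages: Msg[] = [
    { role: "system", content: "You are a clinical documentation assistant. Never diagnose." },
    { role: "user", content: `Identify cross-cycle symptom patterns:\n${retrievedContext}` },
  ];
  // Turn 1: pattern analysis over the retrieved entries.
  const patterns = await chat(messages);
  messages.push({ role: "assistant", content: patterns });
  messages.push({
    role: "user",
    content: "Using the patterns above, write a GP brief and advocate scripts.",
  });
  // Turn 2: same context window, so the model that found the patterns
  // is the one writing the brief.
  const brief = await chat(messages);
  return { patterns, brief };
}
```

Injecting `chat` as a parameter also makes the pipeline trivial to test with a stub instead of a live model.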

LLM calls are proxied through a Cloudflare Worker. Pattern detection on the home screen runs entirely on-device with no LLM, a local heuristic that checks the entry index in milliseconds after every save.
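The proxy itself can be very small. A minimal sketch of such a Worker (the upstream URL and env binding name are assumptions): the API key lives only in the Worker's environment, and the app just POSTs its chat payload.

```typescript
type Env = { LLM_API_KEY: string };

const worker = {
  // In a real Cloudflare Worker this object is the module's default export.
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("method not allowed", { status: 405 });
    }
    const body = await request.text();
    // Forward to the upstream LLM API (placeholder URL), attaching the
    // secret server-side so it never ships in the app bundle.
    return fetch("https://api.llm-provider.example/v1/chat", {
      method: "POST",
      headers: {
        "content-type": "application/json",
        authorization: `Bearer ${env.LLM_API_KEY}`,
      },
      body,
    });
  },
};
```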


Challenges

"Not a diagnosis" is harder than it sounds. Every piece of output is a potential liability if it reads as conclusive. We rewrote the system prompt multiple times to find language that feels medically meaningful to a GP without telling a patient she has a condition. "This pattern is consistent with what clinical guidelines flag for further investigation" does a lot of quiet work.

Calibrating thresholds. The local pattern detector and the LLM both need to flag the same things, or the app feels inconsistent. We found the model defaulting to 6/10 as its severity threshold while both SOGC and ACOG use 7/10 for functionally disruptive dysmenorrhea. Explicit threshold rules had to go directly into the system prompt.
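One way to keep the two flagging paths from drifting apart (a sketch; the names are assumptions): derive both the on-device check and the prompt text from a single constant, so the heuristic and the LLM share the same 7/10 bar.

```typescript
// Single source of truth for the SOGC/ACOG severity bar.
const SEVERITY_THRESHOLD = 7;

// Used by the local pattern detector.
function isClinicallySevere(severity: number): boolean {
  return severity >= SEVERITY_THRESHOLD;
}

// Interpolated into the system prompt so the model applies the same rule.
const THRESHOLD_RULES = [
  `Treat pain rated ${SEVERITY_THRESHOLD}/10 or higher as functionally disruptive.`,
  `Do not flag isolated entries below ${SEVERITY_THRESHOLD}/10 as patterns.`,
].join("\n");
```

If the threshold ever changes with a guideline update, both paths move together.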

Designing for a nervous patient. The advocate scripts needed to pass one test: could someone actually say this out loud while sitting in a GP's office, already feeling dismissed? Early drafts were too clinical, too long, too citation-heavy. We stripped them down until each response was one sentence. That took longer than any of the technical work.


What we learned

Grounding LLM output in real patient data (specific dates, specific severity scores, specific days missed) changes the quality of what gets generated more than any prompt engineering does. The GP brief doesn't say "the patient experiences severe pain." It says "severity rated 8/10 on cycle day 2, with two days of missed activities, across three consecutive cycles." That's a document a doctor can act on.

The emotional design of a medical tool matters as much as its technical accuracy. Every line of copy is a decision about whether a patient feels seen or dismissed by the app, before she even gets to the appointment.


What's next

  • Worsening trend detection, because pain increasing cycle-over-cycle is clinically more urgent than stable high severity and should be surfaced differently
  • Per-user data scoping and backend API key proxying for production
  • Privacy and compliance review under PIPEDA and PHIPA

Tracks

Sun Life — Best Health Care Hack Using Agentic AI

Flare is built specifically for Canadians navigating a healthcare system with a documented, multi-year gap between symptom onset and diagnosis for pelvic pain conditions. The agentic architecture is genuine: a follow-up agent decides what to ask based on what a patient just described, a pattern analysis agent identifies clinically significant cross-cycle signals calibrated to SOGC and ACOG thresholds, and a brief-generation agent produces output grounded in Canadian clinical guidelines. The agents don't just respond. They orchestrate a preparation workflow that ends with a patient walking into a GP appointment with longitudinal evidence and practiced responses. The result is symptom tracking and treatment pathway support for a specific underserved condition affecting roughly one in ten Canadian women.

Moorcheh AI — Best AI Application that Leverages Efficient Memory

Moorcheh is load-bearing infrastructure in Flare, not a bolt-on. Patient entries are stored in a flare-health-entries semantic namespace, with cycle day and severity prepended to each entry so embeddings capture both the timing and intensity of each symptom event. A second namespace, flare-health-guidelines, holds excerpts from SOGC guidelines and ACOG CPG No. 11 (2026). When the GP brief pipeline runs, answerWithClinicalContext queries both namespaces simultaneously, retrieving relevant patient entries alongside matching clinical guideline text before the LLM sees anything. The result is grounded: specific dates, specific severity scores, guideline citations that map to what the patient actually logged. Without Moorcheh's semantic retrieval, the GP brief is a generic summary. With it, it's a document that speaks the same language as the guidelines a GP is supposed to follow.
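A sketch of the role a function like answerWithClinicalContext plays (the search wrapper below is an assumption; only the namespace names and the dual-query behavior come from the app): both namespaces are queried concurrently, and the merged result is the only context the LLM ever sees.

```typescript
type SearchFn = (namespace: string, query: string, topK: number) => Promise<string[]>;

async function buildClinicalContext(search: SearchFn, query: string): Promise<string> {
  // Query patient entries and guideline excerpts concurrently.
  const [entries, guidelines] = await Promise.all([
    search("flare-health-entries", query, 8),
    search("flare-health-guidelines", query, 4),
  ]);
  // Labeled sections keep the two evidence types distinct in the prompt.
  return [
    "PATIENT ENTRIES:", ...entries,
    "CLINICAL GUIDELINES:", ...guidelines,
  ].join("\n");
}
```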
