Inspiration

Aasha is a woman's name. In Sanskrit, it means hope.

We chose it deliberately — because the women this system is built for don't lack medical knowledge or capable caregivers. They lack proximity. A midwife responsible for hundreds of patients across villages miles apart cannot be everywhere at once. Between visits, warning signs appear quietly: a persistent headache, sudden swelling, a baby that stopped moving. By the time someone notices, the window has often closed.

The numbers are stark. Roughly 94% of global maternal deaths occur in low-resource settings, and most of them are preventable. The leading causes — preeclampsia, postpartum hemorrhage, sepsis — are all detectable early if someone is asking the right questions at the right time.

We wanted to build the system that asks those questions.


What We Built

Aasha monitors pregnant and postpartum women via SMS. No smartphone, no app, no internet required on the patient side — just a basic feature phone.

Three times a week, patients receive a structured check-in over text. They reply with numbers. On the backend, a clinical reasoning pipeline assembles the patient's full history, retrieves relevant protocol chunks from a corpus of WHO and FIGO guidelines using Moorcheh AI's semantic search, and passes everything to Claude for a structured risk assessment. Patients are assigned a tier:

$$\text{Tier} \in \{0\ (\text{Normal}),\ 1\ (\text{Watch}),\ 2\ (\text{Concern}),\ 3\ (\text{Emergency})\}$$
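The assessment step can be sketched roughly as below. The function names, JSON shape, and prompt wording are illustrative assumptions, not Aasha's actual code; the model call is injected so the pipeline logic stands on its own.

```python
import json
from dataclasses import dataclass
from typing import Callable

TIER_LABELS = {0: "Normal", 1: "Watch", 2: "Concern", 3: "Emergency"}

@dataclass
class RiskAssessment:
    tier: int       # 0-3, per the tier scale above
    reasoning: str  # clinical reasoning with protocol citations

def assess_risk(history: str, protocol_chunks: list[str],
                llm_call: Callable[[str], str]) -> RiskAssessment:
    """Assemble the patient's history and the retrieved guideline chunks
    into one prompt, ask the model for structured JSON, map it to a tier."""
    prompt = (
        "Patient history:\n" + history + "\n\n"
        "Relevant protocol excerpts:\n" + "\n---\n".join(protocol_chunks) + "\n\n"
        'Reply ONLY with JSON: {"tier": <0-3>, "reasoning": "<text>"}'
    )
    data = json.loads(llm_call(prompt))
    tier = max(0, min(3, int(data["tier"])))  # clamp to the valid range
    return RiskAssessment(tier=tier, reasoning=str(data["reasoning"]))
```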

A Tier 3 assessment fires simultaneous SMS alerts to the patient, their community health worker, and the receiving facility — and starts a follow-up loop every 10 minutes until the event is resolved.
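The escalation loop amounts to a fan-out followed by a timed re-check. A minimal sketch, with SMS sending, the resolution check, and the sleep all injected so the logic is testable; the message wording and 10-minute interval mirror the description above, everything else is assumed:

```python
import time
from typing import Callable

FOLLOW_UP_INTERVAL_S = 10 * 60  # re-check every 10 minutes

def escalate_tier3(patient: str, chw: str, facility: str,
                   send_sms: Callable[[str, str], None],
                   is_resolved: Callable[[], bool],
                   sleep: Callable[[float], None] = time.sleep) -> int:
    """Fan out simultaneous alerts, then follow up until the event is
    resolved. Returns the number of follow-up rounds sent."""
    alert = "EMERGENCY: Tier 3 assessment. Seek care immediately."
    for number in (patient, chw, facility):
        send_sms(number, alert)
    rounds = 0
    while not is_resolved():
        sleep(FOLLOW_UP_INTERVAL_S)
        if is_resolved():  # resolved while waiting; stop quietly
            break
        rounds += 1
        send_sms(chw, f"Follow-up {rounds}: Tier 3 event for {patient} still open.")
    return rounds
```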

Community health workers see everything through a live React dashboard: patient cards sorted by urgency, clinical reasoning with protocol citations, symptom timelines, and a full patient enrollment flow. The dashboard polls every 30 seconds and works on mobile.


How We Built It

The stack: FastAPI for the backend, Supabase for the database, Twilio for SMS, Moorcheh AI for clinical protocol retrieval, Claude for reasoning, and React + Tailwind for the dashboard.

The most deliberate architectural decision was replacing a traditional RAG pipeline (pgvector + OpenAI embeddings + LangChain chunking) with Moorcheh AI's serverless semantic search. This eliminated three infrastructure dependencies and reduced the retrieval step to a single SDK call — freeing us to focus on the clinical logic that actually matters.

The SMS layer is a conversation state machine: each active check-in is a row in conversation_state, and every inbound message advances the node. Free-text responses that can't be parsed numerically are classified by Claude Haiku before the conversation continues.
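In outline, the state machine looks something like the following. The question texts, field names, and 0–3 reply scale are illustrative stand-ins for the real conversation_state schema, and the free-text classifier is injected where production would call Claude Haiku:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# Ordered check-in questions; each inbound reply advances to the next node.
QUESTIONS = [
    "Any headache today? Reply 0 (none) to 3 (severe).",
    "Any swelling in hands or face? Reply 0-3.",
    "How much has the baby moved today? Reply 0-3.",
]

@dataclass
class ConversationState:
    """Mirrors one row of the conversation_state table (sketch)."""
    node: int = 0
    answers: list = field(default_factory=list)

def advance(state: ConversationState, inbound: str,
            classify_free_text: Callable[[str], int]) -> Optional[str]:
    """Record the reply, advance the node, and return the next question
    (or None once the check-in is complete)."""
    text = inbound.strip()
    if text.isdigit() and 0 <= int(text) <= 3:
        value = int(text)
    else:
        # Unparseable reply: hand off to a small classifier (Haiku in prod)
        value = classify_free_text(text)
    state.answers.append(value)
    state.node += 1
    return QUESTIONS[state.node] if state.node < len(QUESTIONS) else None
```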


Challenges

Building a reliable clinical pipeline under time pressure. Claude's output needed to be valid, parseable JSON every time — in a context where a parse failure defaults to a Tier 2 alert affecting a real patient. We built retry logic and a conservative fallback, but getting the prompt structure right to consistently produce clean structured output took significant iteration.
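The retry-and-fallback shape described above can be sketched as follows; the retry count and response format are assumptions, but the key property matches the text: a parse failure never passes silently, it defaults to Tier 2 for human review.

```python
import json
from typing import Callable

FALLBACK_TIER = 2  # a parse failure escalates to "Concern", never passes silently

def parse_assessment(call_model: Callable[[str], str], prompt: str,
                     max_retries: int = 2) -> dict:
    """Try to get valid, well-formed JSON from the model; after repeated
    failures, fall back to a conservative Tier 2 assessment."""
    for _attempt in range(max_retries + 1):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)
            if isinstance(data.get("tier"), int) and 0 <= data["tier"] <= 3:
                return data
        except json.JSONDecodeError:
            pass  # malformed output; retry with the same prompt
    return {"tier": FALLBACK_TIER,
            "reasoning": "Automatic fallback: model output could not be parsed."}
```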

The SMS state machine edge cases. Patients reply late, out of order, or with unexpected text. The conversation can expire mid-check-in. Handling every failure mode gracefully — without dropping a patient's data or sending them a confusing message — required more branches than we anticipated.

Coordinate space in WebGL. A smaller problem, but a surprisingly deep rabbit hole: the webgl-fluid-enhanced library overrides container positioning at runtime and uses a coordinate system where the Y-axis originates at the bottom-left. Getting the cursor trail to actually follow the mouse took longer than it should have.


What We Learned

That the hardest part of building for low-resource healthcare isn't the AI. It's the constraints: no smartphones, intermittent connectivity, patients who may be semi-literate, health workers managing hundreds of cases on a small screen.

Usability was a first-class consideration throughout. For patients, that meant reducing every check-in to single-digit replies — no typing, no literacy required beyond recognizing numbers. For health workers, it meant a dashboard that surfaces the most urgent patients immediately, with clinical reasoning that's readable at a glance rather than buried in raw model output. A system that requires training to use won't be used. We kept that pressure on every design decision.
