Inspiration
Amara is 34 years old. She lives in rural Western Kenya. She has been bleeding after sex for six weeks.
Her community health worker visited her yesterday. For the first time, he had a tool that knew what to do.
Cervical cancer kills 342,000 women every year. Eighteen of the 20 countries with the highest burden are in Sub-Saharan Africa. In Kenya, it is the single leading cause of cancer death in women. In Uganda, fewer than 5% of women have ever been screened.
The bottleneck is not treatment. It is detection.
Community health workers are already in these villages. They have phones. They have relationships. What they have never had is a clinical tool that works on the device they carry, in the language they speak, without training, without a data plan, without an app.
We built ASHA for Amara's CHW.
And we built it in alignment with Cancer Aid Society India's GoodBye Tobacco programme — targeting the oral cancer burden driven by betel quid and pan masala in South Asia — and with WHO's Global Cervical Cancer Elimination Initiative targets for 2030.
What it does
ASHA is live right now. You can test it at asha-gnec.vercel.app. A community health worker anywhere in the world can open WhatsApp, describe a patient in plain language, and receive a professional clinical referral letter in under 90 seconds. No app. No training. Any phone. $0 infrastructure cost.
Here is what that looks like in practice:
CHW types: "34yr old, smoker, bleeds after sex, 2 pregnancies, no IUD, no STDs"
ASHA replies: HIGH RISK · 79% · Postcoital bleeding (WHO Grade A indicator) · Refer immediately to Kisumu Clinic
That exchange took 4 seconds. The referral letter — dated, addressed to an attending clinician, with the CHW's phone number as contact — arrived in the same conversation.
Cervical Cancer Screening
A CHW describes the patient in plain language — English, Swahili, or Hindi. ASHA extracts clinical fields using function calling on Groq's llama-3.3-70b, runs them through an XGBoost ML model with WHO clinical override rules, and returns HIGH / ELEVATED / LOW risk with a probability score. For HIGH and ELEVATED cases, a quality-validated referral letter is generated and delivered in under 90 seconds.
The model is constrained to 7 CHW-answerable fields — not because we couldn't use more, but because a CHW in rural Kenya cannot answer questions about HPV genotyping or Hinselmann test results. Clinical usefulness was the constraint. Sensitivity = 1.0 was the requirement.
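The override-above-model pattern described here can be sketched in a few lines. This is an illustration, not ASHA's actual code: the field name, the ELEVATED cutoff, and the return shape are assumptions; only the 0.039 threshold and the postcoital-bleeding rule come from the write-up.

```python
# Sketch of WHO Grade A overrides sitting above the ML model: the override
# is checked first, so the model can never downgrade a Grade A case.
HIGH_THRESHOLD = 0.039  # tuned so sensitivity = 1.0 (from the write-up)

def classify(patient: dict, model_probability: float) -> tuple[str, str]:
    """Return (risk_tier, reason). Overrides always win over the model."""
    # WHO Grade A indicator: postcoital bleeding alone forces HIGH.
    if patient.get("postcoital_bleeding"):
        return "HIGH", "Postcoital bleeding (WHO Grade A indicator)"
    if model_probability >= HIGH_THRESHOLD:
        return "HIGH", f"Model probability {model_probability:.2f}"
    if model_probability >= HIGH_THRESHOLD / 2:  # illustrative ELEVATED band
        return "ELEVATED", f"Model probability {model_probability:.2f}"
    return "LOW", f"Model probability {model_probability:.2f}"
```

The point of the structure is that the Grade A branch returns before the model probability is ever consulted.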
Oral Cancer Screening
A 7-question WHO-weighted scoring engine covering tobacco use, betel quid/areca nut, oral lesions, leukoplakia/erythroplakia, dysphagia, and hygiene. Betel quid carries the highest weight in the engine — it is a Group 1 IARC carcinogen and the primary driver of OSCC in South Asia. This directly targets the population Cancer Aid Society India's GoodBye Tobacco programme serves.
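A weighted scoring engine of this shape is simple to express. The weights and tier cutoffs below are placeholders, not ASHA's WHO-derived values; the only property carried over from the write-up is that betel quid carries the highest weight.

```python
# Illustrative 7-question weighted scoring engine. Weights are invented
# for the example; in ASHA they are sourced from GLOBOCAN 2020 risk
# attribution and IARC Monograph Vol. 100E.
ORAL_WEIGHTS = {
    "betel_quid": 5,                   # highest weight: IARC Group 1 carcinogen
    "tobacco_smoked": 3,
    "tobacco_smokeless": 3,
    "oral_lesion": 4,
    "leukoplakia_erythroplakia": 4,
    "dysphagia": 2,
    "poor_oral_hygiene": 1,
}

def oral_risk_score(answers: dict) -> tuple[int, str]:
    """Sum weights for every 'yes' answer and map the total to a tier."""
    score = sum(w for key, w in ORAL_WEIGHTS.items() if answers.get(key))
    max_score = sum(ORAL_WEIGHTS.values())
    if score >= max_score * 0.5:       # illustrative cutoffs
        tier = "HIGH"
    elif score >= max_score * 0.25:
        tier = "ELEVATED"
    else:
        tier = "LOW"
    return score, tier
```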
Survivorship Support
Cancer survivors are registered once by their CHW via WhatsApp. ASHA sends weekly check-ins collecting fatigue, pain, and mood scores (1–10). It generates a personalised integrative recovery protocol — Yoga Nidra, Pranayama, Ayurvedic recommendations — in the patient's language. If fatigue worsens across 3 consecutive weeks, an escalation alert surfaces on the NGO dashboard. Priya Mehta, a cervical cancer survivor in rural Maharashtra, is on Week 4. Her fatigue dropped from 8 to 3. Her mood is rising.
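The escalation rule can be sketched as a pure function over the weekly fatigue history. This assumes "worsens across 3 consecutive weeks" means three consecutive week-over-week increases (four data points); ASHA's exact rule may differ.

```python
# Sketch of the survivorship escalation check: flag a patient when the
# last three week-over-week fatigue changes are all increases.
def should_escalate(fatigue_scores: list[int]) -> bool:
    """fatigue_scores: weekly 1-10 values, oldest first."""
    if len(fatigue_scores) < 4:
        return False  # three worsening steps need four data points
    last_four = fatigue_scores[-4:]
    return all(b > a for a, b in zip(last_four, last_four[1:]))
```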
Referral Follow-Through Tracking
The feature no other mHealth system has. Seven days after generating a referral, ASHA automatically asks the CHW: did the patient attend the clinic? The response is logged. The NGO dashboard shows a live referral completion rate against the WHO 70% target. Referral completion rates in Sub-Saharan Africa average 30%. ASHA closes the loop that every other system leaves open.
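The follow-through mechanics reduce to two small queries: which referrals are due a 7-day prompt, and what the completion rate is among answered follow-ups. The row shape below is illustrative (ASHA stores these in Supabase); the 7-day delay and the 70% WHO target are from the write-up.

```python
# Sketch of the daily follow-up job: select referrals 7+ days old with no
# recorded outcome, and compute the completion rate shown on the dashboard.
from datetime import date, timedelta

def referrals_due_followup(referrals: list[dict], today: date) -> list[dict]:
    """Referrals at least 7 days old with no attendance record yet."""
    cutoff = today - timedelta(days=7)
    return [
        r for r in referrals
        if r["created"] <= cutoff and r.get("attended") is None
    ]

def completion_rate(referrals: list[dict]) -> float:
    """Share of answered follow-ups where the patient attended (WHO target: 70%)."""
    answered = [r for r in referrals if r.get("attended") is not None]
    if not answered:
        return 0.0
    return sum(1 for r in answered if r["attended"]) / len(answered)
```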
Live NGO Dashboard
A real-time operational dashboard for programme managers. Live patient feed powered by Supabase realtime. Africa/South Asia cancer burden map with WHO GLOBOCAN 2020 data — hover Kenya to see 3,591 deaths/year and 16% ever screened. Urgent case queue. Weekly screening trends. Referral completion analytics. $0 infrastructure at scale.
Web Interfaces
Beyond WhatsApp: a mobile tap-based screening interface at /screen where every question is a tap — no keyboard required, designed for low-literacy CHWs on old Android phones. A web chat at /chat with voice input, offline message queue, and PDF referral download. Both connect to the same clinical pipeline as the WhatsApp channel.
How we built it
This was built entirely by one person. No team. No division of labour. Every line of code, every clinical decision, every design choice — solo.
Backend: FastAPI on Render. 5-agent pipeline with phase-based session routing. Per-phone async locking prevents concurrent session corruption. APScheduler runs weekly survivorship check-ins and daily referral follow-up prompts. Twilio signature validation with graceful status callback handling.
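The per-phone locking pattern mentioned above can be sketched with `asyncio.Lock` keyed by phone number, so two webhooks for the same number are serialised while different numbers proceed in parallel. Names are illustrative, not ASHA's handler signatures.

```python
# Minimal sketch of per-phone async locking: one asyncio.Lock per WhatsApp
# number, so concurrent webhooks for the same phone cannot corrupt a session.
import asyncio
from collections import defaultdict

_phone_locks: dict[str, asyncio.Lock] = defaultdict(asyncio.Lock)

async def handle_message(phone: str, body: str, session: dict) -> str:
    """Serialise all handling for a given phone number."""
    async with _phone_locks[phone]:
        # Inside the lock, read-modify-write of the session is safe.
        session.setdefault("messages", []).append(body)
        return f"ack {len(session['messages'])}"
```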
AI/ML layer:
- XGBoost cervical classifier — UCI dataset (858 patients), SMOTE balanced, isotonic calibrated, threshold 0.039 tuned for sensitivity = 1.0
- Clinical override layer — WHO Grade A criteria hardcoded above the model. Postcoital bleeding alone triggers HIGH regardless of model output.
- WHO oral cancer weighted engine — weights sourced from GLOBOCAN 2020 risk attribution and IARC Monograph Vol. 100E
- paraphrase-multilingual-MiniLM-L12-v2 — multilingual symptom mapper across 129 phrasings in EN/SW/HI
- spaCy NER PII scrubber — runs on every message before the LLM sees it

Frontend: Next.js 14 on Vercel. Dual design system — dark emotional surfaces for the product landing, clinical light theme for NGO operational pages. Africa/South Asia SVG burden map with GLOBOCAN 2020 data. jsPDF for A4 clinical referral letters. Offline queue in localStorage for CHWs who lose connectivity mid-screening.
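The "threshold 0.039 tuned for sensitivity = 1.0" step in the classifier bullet above amounts to choosing the largest decision threshold that still catches every positive in held-out data. A minimal sketch, on synthetic data rather than the UCI dataset:

```python
# Illustrative sensitivity-first threshold tuning: the largest threshold t
# such that every true positive has predicted probability >= t. Any
# threshold at or below t yields sensitivity = 1.0 on this data.
def max_threshold_with_full_sensitivity(y_true: list[int],
                                        y_prob: list[float]) -> float:
    positive_probs = [p for y, p in zip(y_true, y_prob) if y == 1]
    return min(positive_probs)  # the lowest-scoring true positive is binding

# Synthetic example: the binding positive case scores 0.04.
y_true = [0, 0, 1, 0, 1, 0]
y_prob = [0.01, 0.30, 0.04, 0.02, 0.50, 0.03]
threshold = max_threshold_with_full_sensitivity(y_true, y_prob)
```

On a heavily imbalanced, SMOTE-rebalanced, calibrated dataset this procedure naturally lands on a very low threshold, which is consistent with the 0.039 figure reported above.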
Data: Supabase Postgres — 6 tables, row-level security, realtime subscriptions.
Channel: Twilio WhatsApp Business API.
Language routing: +91 → Hindi (Devanagari, medical terms in English). +254/+255 → Swahili. Others → English. Referral letters, survivorship protocols, and check-in responses all generated in the CHW's language.
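The routing table above is small enough to sketch directly. The prefix-to-language mapping mirrors the write-up; the function name and language codes are illustrative.

```python
# Sketch of prefix-based language routing: +91 -> Hindi, +254/+255 ->
# Swahili, everything else -> English.
LANGUAGE_PREFIXES = {
    "+91": "hi",    # Hindi (Devanagari, medical terms kept in English)
    "+254": "sw",   # Kenya
    "+255": "sw",   # Tanzania
}

def route_language(phone: str) -> str:
    """Map a CHW's phone number to the response language code."""
    for prefix, lang in LANGUAGE_PREFIXES.items():
        if phone.startswith(prefix):
            return lang
    return "en"
```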
Challenges we ran into
Building for real users, not demo users.
The first version of the screening agent asked questions one at a time — clean, predictable, easy to test. But a real CHW in the field might describe a patient in one message: "34yr old, smoker, 3 kids, bleeds after sex." We rebuilt the agent using function calling with a required-field completion guard — it extracts every field present in the message, asks only for what's missing, and refuses to complete until the schema is satisfied. The demo moment where a CHW sends one sentence and gets a HIGH RISK result in under 3 seconds — that required weeks of architecture work to make reliable.
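The completion guard described above is, at its core, a schema check that decides between "ask the next missing question" and "run the model". The field names below are illustrative stand-ins for ASHA's 7-field schema, not its exact names.

```python
# Sketch of the required-field completion guard: extract whatever the LLM's
# function call returned, ask only for what's missing, and refuse to score
# until the schema is satisfied.
REQUIRED_FIELDS = [
    "age", "smoker", "num_pregnancies", "postcoital_bleeding",
    "iud", "std_history", "hormonal_contraceptives",
]

def missing_fields(extracted: dict) -> list[str]:
    """Fields the CHW still needs to be asked about."""
    return [f for f in REQUIRED_FIELDS if extracted.get(f) is None]

def next_action(extracted: dict) -> str:
    missing = missing_fields(extracted)
    if missing:
        return f"ask:{missing[0]}"   # ask only for what's missing
    return "score"                   # schema satisfied -> run the model
```

A one-sentence message like "34yr old, smoker, bleeds after sex" fills three fields at once; the guard then asks only the remaining four questions.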
The model was trained in Venezuela. It deploys in Kenya.
The UCI cervical cancer dataset was collected at Hospital Universitario de Caracas. Our deployment targets Sub-Saharan Africa and South Asia — different HPV prevalence, different risk factor distributions, different populations entirely. Rather than pretend the model generalises perfectly, we built a clinical override layer anchored to WHO and IARC criteria. These rules are population-agnostic. Postcoital bleeding in Kenya carries the same clinical weight as it does anywhere in the world. The overrides are the safety net.
Referral letters that lie.
Early versions invented symptoms. A letter claiming a patient had "pelvic pain and irregular menstruation" when she only reported postcoital bleeding is not a referral letter — it's a liability. We added a quality validation loop: Groq evaluates its own output against the structured patient_data payload and regenerates if quality scores below 7/10. Letters now score 8.4–9.3/10 consistently. Every letter is dated, addressed, and grounded only in what the patient actually reported.
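The regenerate-until-quality loop has a simple shape. The scorer below is a stand-in for the Groq self-evaluation call; the 7/10 threshold is from the write-up, while the retry cap and best-attempt fallback are assumptions.

```python
# Minimal sketch of the quality validation loop: regenerate the letter
# until the self-evaluation score clears the threshold or retries run out,
# returning the best attempt seen.
from typing import Callable

def generate_with_validation(generate: Callable[[], str],
                             score: Callable[[str], int],
                             threshold: int = 7,
                             max_attempts: int = 3) -> tuple[str, int]:
    best_letter, best_score = "", -1
    for _ in range(max_attempts):
        letter = generate()
        s = score(letter)
        if s > best_score:
            best_letter, best_score = letter, s
        if s >= threshold:
            break  # quality bar met; stop regenerating
    return best_letter, best_score
```

In ASHA the scorer checks the draft against the structured `patient_data` payload, which is what keeps letters grounded in what the patient actually reported.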
Accomplishments that we're proud of
One person built a deployed clinical platform. Not a mockup. Not a prototype. A working WhatsApp pipeline that has generated real referral letters with real dates, real CHW phone numbers, and real clinical content — letters a clinic would actually accept. Built solo, in weeks, on free infrastructure.
We closed a loop that nobody else closes. Every mHealth screening tool generates referrals and then stops. Nobody knows if Amara went to the clinic. ASHA follows up 7 days later, logs the CHW's response, and surfaces the completion rate on the dashboard. That single feature turns a screening tool into a care continuity platform. It changes what NGOs can measure.
The product speaks three languages, not two. Hindi and Swahili are not UI labels. When a CHW in Maharashtra (+91) completes a screening, the referral letter arrives in Hindi — Devanagari script, medical terms in English in parentheses. When Priya's weekly check-in is due, her protocol — Yoga Nidra, Anulom Vilom, Ashwagandha — arrives in Hindi. This is not a translation feature. It is clinical communication in the patient's language.
$0. At any scale. The entire stack runs on free tiers. Groq free tier supports thousands of screenings per month. Render free tier hosts the backend. Vercel free tier hosts the frontend. A 50-CHW network across Kenya and Nigeria costs nothing to operate. That is not an accident. It was an explicit design constraint from day one.
What we learned
Version 1 was a chatbot. Version 2 is a clinical tool.
The difference is the clinical override layer. In version 1, the XGBoost model made every decision. In version 2, postcoital bleeding — a WHO Grade A indicator — triggers HIGH regardless of what the model says. That change came from reading WHO's Guide to Cancer Early Diagnosis at 2am and realising that no machine learning model trained on 858 patients from Venezuela should be the sole arbiter of whether a woman in rural Kenya gets referred. The model scores probability. The WHO criteria guarantee safety.
The referral is not the endpoint. It is the beginning.
We designed ASHA as a screening tool. Halfway through, we asked: what happens after the letter is sent? The answer, for most mHealth programmes, is: nobody knows. The patient either went to the clinic or she didn't, and the system has no record either way. That question became the follow-through tracker. Asking one yes/no question 7 days later turns a screening event into a care outcome — and gives NGO supervisors the first honest metric they've ever had for referral completion.
Building alone forces clarity.
Every architectural decision had to be defensible to one person: me. There was no team meeting to defer to, no "let's try both approaches." The phase-based session routing, the 7-field model constraint, the clinical override layer, the quality validation loop — each was a choice made under constraint, explained in writing to myself before implementing. The result is a codebase that is smaller and more coherent than it would have been with a team.
What's next for ASHA
Deploy through GNEC's network. GNEC has 1,600 subsidiary NGOs. ASHA requires no infrastructure investment, no app installation, and no training. A pilot with 50 CHWs in Kenya and Nigeria through GNEC's partner network could begin this week. That is not a roadmap item. It is a specific ask. If GNEC wants to run a pilot, ASHA is ready.
Field validation with Cancer Aid Society India. Deploy to 20 CHWs in Maharashtra through the GoodBye Tobacco programme. Collect real screening data on betel quid users. Validate the oral cancer scoring engine against actual biopsy outcomes. Move from clinically defensible to evidence-backed.
EfficientNet-B0 oral lesion classifier. Train a fine-tuned image classifier on annotated oral cavity photographs. Deploy as the authoritative model behind the photo analysis feature in the web chat. Replace the current Groq vision layer with a purpose-built clinical model.
SMS fallback channel. The same Twilio account receives SMS. A CHW with a feature phone and no data plan gets the same pipeline — shorter responses, same clinical output. This is the last mile of accessibility.
CHW performance analytics. ASHA already collects per-CHW screening rates, high-risk detection rates, and follow-through rates. Surface them on the dashboard. Give NGO supervisors the ability to identify CHWs who need coaching before missed cases accumulate.
ASHA is live. Test it now: asha-gnec.vercel.app
Cancer Aid Society India · WHO Protocol Aligned · SDG 3.1 · 3.4 · 3.8 · वसुधैव कुटुम्बकम् — The world is one family
Built With
- apscheduler
- fastapi
- groq
- jspdf
- next.js
- python
- render
- react
- scikit-learn
- sentencetransformers
- smote
- spacy
- supabase
- twilio
- typescript
- vercel
- xgboost

