Swastha स्वस्था — A Woman Stable in Her Own Health

Inspiration

Every year in rural India, 14,000 mothers die and 8 million women and children suffer long-term health problems from preventable pregnancy complications. Not because medicine can't save them, but because the information doesn't reach them in time. A woman seven months pregnant who develops headaches and swelling doesn't know these are warning signs of pre-eclampsia, a condition that can kill within hours. She lives 30 km from the nearest hospital, and no one has told her that the government will pay for her treatment under JSSK.

India already has one of the world's largest community health worker programmes: over a million ASHA workers walking through villages every day. But they receive only 23 days of training, earn $25 a month, and have no clinical decision support tools. They can tell something is wrong, but not how urgent it is.

94% of rural Indian households have a phone. The infrastructure to reach these women exists. What was missing was something useful on that phone.

What it does

Swastha is a voice-first AI health navigator with two modes:

For women: Speak or type symptoms in any of 13 Indian languages. The app asks questions one at a time, reads each aloud, and produces a traffic-light risk assessment (green/amber/red) with plain-language guidance, warning signs to watch for, links to free government schemes and charity support, a nearest-hospital finder, and follow-up check-in reminders.

For ASHA workers: Structured patient intake with auto-generated clinical notes (Presentation, Assessment, Plan, Follow-up), patient tracking dashboard sorted by risk level, and session logging across visits.

Crisis detection runs client-side before any AI — if someone mentions self-harm, seizures, or heavy bleeding, the AI is bypassed entirely and they get an immediate emergency screen with one-tap call to 112.
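A minimal sketch of this client-side gate (the keyword list and function names here are illustrative, not the actual source):

```javascript
// Illustrative keyword list; the real app's list is longer and multilingual.
const CRISIS_KEYWORDS = [
  "suicide", "kill myself", "self-harm",
  "seizure", "fitting",
  "heavy bleeding", "bleeding a lot",
];

// Returns true if the raw user text mentions any crisis keyword.
// Runs before any network request, so the emergency screen appears
// with zero latency even on a poor connection.
function isCrisis(text) {
  const normalised = text.toLowerCase();
  return CRISIS_KEYWORDS.some((kw) => normalised.includes(kw));
}

// Example gate in the submit handler: bypass the AI entirely on a match.
function handleSymptomInput(text) {
  if (isCrisis(text)) {
    return { screen: "emergency", callNumber: "112" };
  }
  return { screen: "triage" }; // proceed to the agent pipeline
}
```

Because the match is a plain substring check over a static list, it costs nothing and cannot fail due to network or API errors.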

How we built it

A single-page React application built with Vite, designed to run entirely in the browser with no backend.

The AI uses a three-agent swarm architecture on Claude Sonnet 4:

  1. Symptom Analyst — evaluates symptoms against patient context (pregnancy status, age, medications, duration)
  2. Support Navigator — matches to real Indian charities (ARMMAN, SNEHA, CARE India) and government schemes (JSSK, Ayushman Bharat, PMMVY)
  3. Orchestrator — synthesises both into a single risk-graded assessment with clinical notes

Agents 1 and 2 run in parallel for speed; the orchestrator then combines their outputs. A fourth agent handles translation for multilingual output.
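The swarm wiring above can be sketched as follows, assuming a hypothetical `callClaude(systemPrompt, userMessage)` helper that wraps the Anthropic Messages API and returns parsed JSON (prompts and field names are illustrative):

```javascript
// Runs the three-agent triage pipeline. Agents 1 and 2 are independent,
// so they execute concurrently; the orchestrator waits for both.
async function runTriage(patientContext, symptoms, callClaude) {
  const payload = JSON.stringify({ patientContext, symptoms });

  const [analysis, support] = await Promise.all([
    callClaude("You are a Symptom Analyst. Evaluate symptoms against patient context...", payload),
    callClaude("You are a Support Navigator. Match to charities and government schemes...", payload),
  ]);

  // Agent 3 synthesises both outputs into one risk-graded assessment.
  const assessment = await callClaude(
    "You are an Orchestrator. Combine the analysis and support options into a single assessment...",
    JSON.stringify({ analysis, support })
  );
  return assessment; // e.g. { riskLevel: "green" | "amber" | "red", guidance, schemes, notes }
}
```

With the two first-stage agents in a `Promise.all`, total latency is roughly one agent call plus the orchestrator, rather than three calls in sequence.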

Voice input uses the Web Speech API. Text-to-speech reads every question and result aloud. Crisis detection uses client-side keyword matching — no API call, no delay. Storage uses localStorage for the prototype (Supabase planned for production).

Every design decision was made for accessibility: voice-first for low-literacy users, one question at a time instead of forms, traffic light colours that are universally understood, large touch targets for basic phones, no login required for users.

Challenges we faced

Microphone access in sandboxed environments. The Claude.ai artifact iframe blocks microphone permissions. We worked around this by running the app locally with the Vite dev server, where the Web Speech API works natively.

API calls from the browser. Direct browser-to-Anthropic API calls require the anthropic-dangerous-direct-browser-access header and correct CORS setup. Getting this working across artifact, StackBlitz, and localhost required different auth approaches for each environment.
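A sketch of the browser-side request setup (the `anthropic-dangerous-direct-browser-access` header is the one named above; the endpoint path, version string, and helper name are our assumptions for illustration):

```javascript
// Builds the fetch URL and options for a direct browser-to-Anthropic call.
// Exposing an API key in the browser is acceptable only for a prototype;
// production would proxy these calls through a backend.
function buildAnthropicRequest(apiKey, body) {
  return {
    url: "https://api.anthropic.com/v1/messages",
    options: {
      method: "POST",
      headers: {
        "content-type": "application/json",
        "x-api-key": apiKey,
        "anthropic-version": "2023-06-01",
        // Required for CORS-enabled direct browser access:
        "anthropic-dangerous-direct-browser-access": "true",
      },
      body: JSON.stringify(body),
    },
  };
}

// Usage: const { url, options } = buildAnthropicRequest(key, payload);
//        const res = await fetch(url, options);
```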

Prompt calibration for safety. The biggest risk with a health triage tool is under-triaging — telling someone they're fine when they're not. We calibrated all prompts to escalate the risk level when uncertain, and added warning signs to every risk level, including green. Getting the AI to never use familial terms ("sister", "behen") and never offer a diagnosis required explicit prompt constraints.

JSON parsing from LLM output. Claude occasionally truncates JSON or wraps it in markdown code fences. We built a robust parser with brace-counting, truncation repair, and markdown stripping that handles malformed output gracefully.
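The parser described above can be sketched roughly like this — a simplified version that handles top-level objects only (the real implementation presumably covers arrays and more edge cases):

```javascript
// Tolerant JSON extraction from LLM output: strips markdown fences,
// brace-counts to find the first balanced object (ignoring braces inside
// strings), and repairs simple truncation by closing open strings/braces.
function parseLLMJson(raw) {
  // 1. Strip markdown code fences (three backticks, optionally "json").
  let text = raw.replace(/`{3}(?:json)?/g, "").trim();

  // 2. Brace-count from the first '{'.
  const start = text.indexOf("{");
  if (start === -1) throw new Error("no JSON object found");
  let depth = 0, inString = false, escaped = false, end = -1;
  for (let i = start; i < text.length; i++) {
    const ch = text[i];
    if (escaped) { escaped = false; continue; }
    if (ch === "\\") { escaped = true; continue; }
    if (ch === '"') { inString = !inString; continue; }
    if (inString) continue;
    if (ch === "{") depth++;
    else if (ch === "}" && --depth === 0) { end = i; break; }
  }

  // 3. If the object never closed, the output was truncated: repair it.
  let candidate = end !== -1 ? text.slice(start, end + 1) : text.slice(start);
  if (end === -1) {
    if (inString) candidate += '"';              // close an unterminated string
    candidate = candidate.replace(/,\s*$/, "");  // drop a dangling comma
    candidate += "}".repeat(depth);              // close any open braces
  }
  return JSON.parse(candidate);
}
```

Because the brace counter skips braces inside string values, prose like `{"notes": "see {JSSK}"}` parses correctly, and anything Claude writes before or after the object is ignored.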

Designing for illiteracy. Most health apps assume the user can read. Ours can't assume that. Every interaction had to work through voice alone — which meant rethinking the entire UX away from forms, menus, and text-heavy interfaces toward a conversational, one-question-at-a-time flow.

What we learned

The medical knowledge to prevent maternal deaths already exists. The infrastructure of a million health workers already exists. The phones already exist. The government schemes that pay for treatment already exist. The gap is information — getting the right knowledge to the right woman at the right moment. AI can be that bridge, but only if it's built for the people who actually need it, not for people who already have access to healthcare.

What's next

  • Pilot with 50 ASHA workers in one district in Rajasthan or Bihar
  • Supabase backend for persistent patient records
  • Offline mode for areas with poor connectivity
  • WhatsApp bot version for even wider reach
  • Validation study comparing triage accuracy against clinical standards
  • Expansion to Bangladesh, Kenya, Nigeria — countries with similar community health worker programmes
