Anchor — AI holds the memory so humans can hold each other
What Inspired Us
My grandfather is forgetting. Not all at once — slowly, in the way Alzheimer's actually works. A name here. A face there. And every time he forgets, someone in my family has to stop what they're doing to fill the gap.
That's the part nobody talks about. The patient loses their memories. The family loses their time. Both losses are invisible in the data — but they compound every single day.
There are 55 million people living with Alzheimer's globally, a number projected to reach 139 million by 2050. For every patient, research estimates 3–5 family caregivers absorbing an average of 47 hours per week in unpaid care. That is not a rounding error. That is entire careers, relationships, and lives quietly sacrificed.
We built Anchor because we lived this problem before we tried to solve it.
What Anchor Does
Anchor is a WhatsApp-native AI companion that does three things no human caregiver can do continuously:
1. **Holds the patient's memory with perfect fidelity.** Patients and families contribute memories — names, faces, places, feelings, small details — through simple voice or text on WhatsApp. Anchor stores them with the granularity that matters to that specific person. No app to learn. No interface to navigate. Just WhatsApp, which they already know.
2. **Gives patients their mornings back.** Every morning, Anchor sends a warm, personalized brief: today's date, who they'll see, what happened yesterday, their medications, and the people who love them. Before the fog sets in. Before the day takes it away.
3. **Frees the family from coordination overhead.** Families stop repeating themselves across phone calls. Doctors push medication schedules once. Everyone gets the signal they need without the manual labor of keeping everyone else informed. Anchor is the coordination layer the family never had.
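As a minimal sketch, the morning brief can be assembled purely from stored facts, with no generation beyond templating (all function and field names here are illustrative, not Anchor's actual schema):

```python
from datetime import date

def compose_morning_brief(patient_name, visitors, yesterday, medications):
    """Assemble a warm, predictable morning message from stored facts only."""
    lines = [
        f"Good morning, {patient_name}. Today is {date.today():%A, %B %d}.",
        f"Visiting today: {', '.join(visitors) if visitors else 'a quiet day at home'}.",
        f"Yesterday: {yesterday}",
        "Medications: " + "; ".join(f"{m['name']} at {m['time']}" for m in medications),
    ]
    return "\n".join(lines)

brief = compose_morning_brief(
    "Miguel",
    ["Lucia (granddaughter, 6, loves painting)"],
    "You walked in the park with Ana.",
    [{"name": "Donepezil", "time": "8:00"}],
)
```

Keeping the brief deterministic matters here: a patient who reads the same structure every morning can lean on its predictability.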
The result: patients live with more dignity. Families recover hours they can spend with their loved one instead of managing them.
$$\text{Caregiver hours recovered} = H_{coord} \times N_{caregivers} \times 365$$
Where $H_{coord}$ is the estimated daily coordination overhead per caregiver — conservatively 1.5 hours/day — and $N_{caregivers}$ is the average care network size of 4 people. That is 2,190 hours per patient per year that Anchor meaningfully compresses.
At 55 million patients globally, the total addressable productivity loss is:
$$55{,}000{,}000 \times 2{,}190 \approx 120 \text{ billion caregiver hours per year}$$
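The arithmetic behind these estimates is easy to check directly:

```python
# Conservative assumptions stated above
hours_per_caregiver_per_day = 1.5
caregivers_per_patient = 4
patients = 55_000_000

hours_per_patient_per_year = hours_per_caregiver_per_day * caregivers_per_patient * 365
total_hours = patients * hours_per_patient_per_year

print(hours_per_patient_per_year)  # → 2190.0
print(int(total_hours))            # → 120450000000, i.e. ≈120 billion hours
```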
Anchor does not solve Alzheimer's. It holds the person steady while the people around them get their lives back.
How We Built It
Development Approach
We are three non-engineers — one investor, two product managers. No one on this team writes production code for a living. So we built Anchor the way we believe all future software will be built: through a layered AI-assisted workflow.
We used Claude to generate four comprehensive PRDs — product, technical, schema, and a phase-by-phase development plan detailed enough to execute with minimal human intervention. Claude Code then implemented the core codebase directly from those PRDs. Cascade handled cost-effective testing, debugging, and iterative improvements on the existing codebase.
The irony is intentional: we used AI to build a product about what AI can hold that humans can't. And the constraint made us better builders. Every decision had to be justified by user value, not engineering preference.
Architecture
The system follows a modular design with six distinct layers:
- Channels — WhatsApp Cloud API (Meta Graph API v19+) handling inbound and outbound voice and text
- Routing — phone number-based role detection, dispatching to patient, family, or doctor workflows
- Agents — Claude (`claude-sonnet-4-6`) powering memory compaction, grounded retrieval, morning brief composition, and intent classification
- Memory Storage — SQLite via SQLAlchemy 2.x, partitioned strictly by patient
- Safety — geofencing via a `geopy` haversine calculation, exit detection, proactive family alerts
- Scheduling — APScheduler cron jobs for morning briefs, geofence polling, and medication reminders
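The geofence exit check reduces to a haversine distance against a home radius. A sketch in pure Python with `math` (the service itself relies on geopy; all names and coordinates below are illustrative):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def has_left_geofence(home, current, radius_km=0.5):
    """True when the patient's last known location is outside the home radius."""
    return haversine_km(*home, *current) > radius_km

home = (40.4168, -3.7038)  # illustrative coordinates (Madrid)
assert not has_left_geofence(home, (40.4170, -3.7040))   # ~25 m away: inside
assert has_left_geofence(home, (40.4300, -3.7038))       # ~1.5 km north: alert
```

When `has_left_geofence` flips to true, the scheduler's polling job would trigger the proactive family alert.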
Tech Stack
| Layer | Technology |
|---|---|
| Language & Framework | Python 3.11+, FastAPI, Uvicorn |
| LLM | Anthropic Claude (`claude-sonnet-4-6`) |
| Speech-to-text | OpenAI Whisper (faster-whisper) |
| Messaging | WhatsApp Cloud API (Meta Graph API v19+) |
| Database | SQLite + SQLAlchemy 2.x |
| Scheduling | APScheduler |
| Geo | geopy (haversine) |
| Testing | pytest, pytest-asyncio, httpx |
| Config | pydantic-settings |
| Tunneling | ngrok (demo webhook exposure) |
What We Learned
Three things surprised us.
First: the PRD is the product. When documentation is detailed enough to be executable, the gap between idea and working software collapses. We spent more time writing the PRD than debugging the code — and that was the right tradeoff.
Second: WhatsApp is the right interface for this user. We debated building a web dashboard. We dropped it in the first hour. Alzheimer's patients don't learn new apps. Their families are already on WhatsApp. Meeting users where they already live is not a design choice — it's an ethical one.
Third: the hardest problem in AI-assisted caregiving is not capability — it's trust. Anchor must never hallucinate a memory. It must never invent a medication schedule. The moment a family member doubts what Anchor tells them, the product fails. Grounding every retrieval strictly in stored memories — and saying "I don't remember that one" when the answer isn't there — is not a limitation. It is the feature.
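A sketch of that grounding rule, assuming a hypothetical in-memory store and a naive keyword match standing in for real retrieval (the actual pipeline retrieves from SQLite and passes only stored facts to Claude):

```python
FALLBACK = "I don't remember that one."

def grounded_answer(query, memories):
    """Answer strictly from stored memories; refuse rather than guess."""
    # Naive keyword match stands in for the real retrieval step.
    for key, fact in memories.items():
        if key.lower() in query.lower():
            return fact
    return FALLBACK

memories = {"Lucia": "Lucia is your granddaughter. She is 6 and loves painting."}
print(grounded_answer("Who is Lucia?", memories))           # the stored fact, verbatim
print(grounded_answer("What pills do I take?", memories))   # falls back, never invents
```

The design point is the fixed fallback string: there is no path from an empty retrieval to generated text.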
Challenges We Faced
The multi-tenancy routing problem was harder than expected. A single WhatsApp number serves three fundamentally different users — patient, family, doctor — each of whom should see and contribute different information. Getting the role detection and data isolation right without any cross-contamination required more careful schema design than we initially scoped.
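A minimal sketch of the role-routing idea, assuming a hypothetical lookup table keyed by WhatsApp number (in the real system this is a database lookup scoped to a single patient's care network):

```python
from enum import Enum
from typing import Optional

class Role(Enum):
    PATIENT = "patient"
    FAMILY = "family"
    DOCTOR = "doctor"

# Illustrative registry; production resolves this from the per-patient schema.
REGISTRY = {
    "+34600000001": Role.PATIENT,
    "+34600000002": Role.FAMILY,
    "+34600000003": Role.DOCTOR,
}

def route(sender: str) -> Optional[Role]:
    """Resolve the sender's role; unknown numbers are rejected, never guessed."""
    return REGISTRY.get(sender)

assert route("+34600000001") is Role.PATIENT
assert route("+34999999999") is None  # unknown number: no cross-contamination
```

Rejecting unknown senders outright, rather than defaulting to any role, is what keeps the three workflows isolated.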
Memory compaction without losing small details is an unsolved tension. Traditional summarization discards specifics. For Alzheimer's patients, the specifics are the point — Lucia is 6 and loves painting, not just "granddaughter." We wrote a custom compaction prompt that explicitly instructs Claude to preserve atomic facts and named entities, and tested it aggressively against detail-loss failure cases.
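The compaction instruction can be sketched as a prompt template that forbids lossy summarization (the wording below is illustrative, not the production prompt):

```python
def build_compaction_prompt(raw_memories):
    """Build a compaction prompt that preserves atomic facts and named entities."""
    rules = (
        "Merge and deduplicate the memories below, but you MUST preserve:\n"
        "- every named entity (people, places, pets) exactly as written\n"
        "- every atomic fact (ages, dates, preferences, relationships)\n"
        "Do NOT generalize (e.g. keep 'Lucia is 6 and loves painting', "
        "never just 'granddaughter')."
    )
    return rules + "\n\nMemories:\n" + "\n".join(f"- {m}" for m in raw_memories)

prompt = build_compaction_prompt([
    "Lucia is my granddaughter.",
    "Lucia is 6 years old.",
    "Lucia loves painting.",
])
```

Testing then becomes checking that no named entity or atomic fact present in the input is missing from the compacted output.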
Scope discipline under time pressure was the hardest challenge of all. We had to cut the analytics module, the periodic location polling, and the onboarding video flow. Cutting things that matter — and being honest about what the MVP actually proves — is a skill. The investor on the team enforced it ruthlessly.
What shipped: a working memory loop, morning briefs, multi-role WhatsApp routing, and geofence exit alerts. Enough to prove the thesis. Everything else is Phase 2.
Built With
- api
- claude