Inspiration
The average wait to see a therapist in the US is 48 days. Most people give up before the first appointment. The ones who do show up spend the first two sessions retelling their story to a stranger who is taking notes instead of listening, and then often discover the therapist isn't actually the right fit for what they're going through.
We kept asking the same question: why is the hardest part of mental healthcare the part that happens before any care begins? Intake is broken, matching is guesswork, and the first conversation is doing double duty as paperwork. We wanted to build the front door we wished existed.
What it does
Anamnesis is a mental health intake and matching platform that replaces the clipboard with a conversation.
A patient signs up and joins a video call with an empathetic AI interviewer. The AI conducts a structured but human-feeling intake, administers validated screeners (PHQ-9, GAD-7) inside the conversation, and gently surfaces risk factors. While they talk, the patient's webcam is analyzed locally for non-verbal signals like gaze stability, head motion, and blink rate, the kind of observations a clinician would jot down under "Mental Status."
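The non-verbal signals above can be reduced to simple landmark math. This is a hedged sketch, not our production pipeline: the eye-aspect-ratio formulation, the blink threshold, and the stability measure are illustrative assumptions.

```typescript
// Sketch of the non-verbal metrics: blink rate via eye-aspect-ratio threshold
// crossings, gaze stability via positional variance of an iris landmark.
// Thresholds and formulas here are illustrative assumptions.

type Point = { x: number; y: number };

// Eye aspect ratio: eye height over eye width; drops toward zero mid-blink.
function eyeAspectRatio(top: Point, bottom: Point, left: Point, right: Point): number {
  const height = Math.hypot(top.x - bottom.x, top.y - bottom.y);
  const width = Math.hypot(left.x - right.x, left.y - right.y);
  return width === 0 ? 0 : height / width;
}

// Count blinks as downward crossings of an EAR threshold.
function countBlinks(earSeries: number[], threshold = 0.2): number {
  let blinks = 0;
  let eyeClosed = false;
  for (const ear of earSeries) {
    if (!eyeClosed && ear < threshold) { blinks++; eyeClosed = true; }
    else if (eyeClosed && ear >= threshold) { eyeClosed = false; }
  }
  return blinks;
}

// Gaze stability: inverse of the positional variance of an iris landmark
// over the call; 1 means a perfectly steady gaze.
function gazeStability(irisTrack: Point[]): number {
  const n = irisTrack.length;
  if (n === 0) return 0;
  const mx = irisTrack.reduce((s, p) => s + p.x, 0) / n;
  const my = irisTrack.reduce((s, p) => s + p.y, 0) / n;
  const variance =
    irisTrack.reduce((s, p) => s + (p.x - mx) ** 2 + (p.y - my) ** 2, 0) / n;
  return 1 / (1 + variance);
}
```

Each metric lands in the brief as an "observed behavior" line rather than a diagnosis, mirroring how a clinician records Mental Status observations.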
When the call ends, three things happen automatically:
1. A clinically structured brief is generated in the SOAP format therapists already use, complete with chief complaint, history, observed behavior, screener scores, and risk flags.
2. A six-stage matching pipeline ranks every therapist in the network for that specific patient, weighing diagnosis-specific outcome data, modality fit, severity match, language, and capacity. The patient sees their top three matches with a plain-language explanation of why each one was chosen.
3. The therapist they pick logs in to a portal where the brief is already waiting. They walk into session one already knowing the patient's story.
How we built it
Anamnesis is a TypeScript monorepo with three pieces: an Express API, a React and Vite web app, and an iOS companion. Authentication is handled by Clerk, with a one-click "sign in as" flow for every seeded demo persona so judges can experience both sides of the platform without creating accounts.
The intake conversation is powered by Tavus for the AI video persona, with the transcript streamed back to our server in real time. Visual signal extraction runs through a MediaPipe FaceMesh pipeline that processes the patient's local webcam recording. The clinical brief is synthesized by GPT-5.2 using a prompt grounded in actual SOAP note conventions.
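To make the SOAP grounding concrete, here is a rough sketch of the brief's shape and how a synthesis prompt could be pinned to it. The type names, field names, and prompt wording are assumptions for illustration, not our production schema.

```typescript
// Hypothetical shape of the generated brief, following SOAP note conventions.
// Field names are illustrative assumptions, not the production schema.
interface SoapBrief {
  subjective: { chiefComplaint: string; history: string };
  objective: {
    screeners: { phq9: number; gad7: number }; // validated screener totals
    observed: string[];                        // non-verbal signals from the video pass
  };
  assessment: { riskFlags: string[]; summary: string };
  plan: string;
}

// Build a synthesis prompt grounded in the SOAP skeleton, so the model fills
// a fixed clinical structure instead of free-associating.
function buildBriefPrompt(transcript: string, phq9: number, gad7: number): string {
  return [
    "You are drafting a clinical intake brief in SOAP format.",
    "Subjective: chief complaint and history, in the patient's own framing.",
    "Objective: report screener totals and observed behavior only.",
    `PHQ-9 total: ${phq9}. GAD-7 total: ${gad7}.`,
    "Assessment: surface risk flags explicitly; do not diagnose.",
    "Plan: suggested focus areas for the first session.",
    "Transcript follows:",
    transcript,
  ].join("\n");
}
```

Pinning the model to a fixed skeleton is also what keeps the therapist-facing brief scannable: every brief has the same sections in the same order.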
The matching engine is the part we are most technically proud of. It is a six-stage agentic pipeline:
1. Plan: a small model decides which downstream stages are worth running given what data we have.
2. Infer profile: a clinical-psychologist-prompted call extracts a structured diagnosis, severity, and risk flags, weighing screener scores explicitly.
3. Retrieve: deterministic filtering on license region, language, and capacity.
4. Score: a fully deterministic, Bayesian-shrunk weighted score across outcome history, specialty match, modality fit, and severity fit. We shrink raw success rates toward the population mean so a therapist with 4 cases cannot outrank one with 200.
5. Critique: a senior-supervisor-prompted call reviews the top five and can reorder, annotate, or veto matches the heuristic missed.
6. Synthesize: warm patient-facing explanations, grounded only in actual feature scores.
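The shrinkage in the Score stage can be illustrated with the standard pseudo-count form: a therapist's raw success rate is pulled toward the population mean in proportion to how little data they have. The prior weight `k` and the numbers below are illustrative, not our tuned values.

```typescript
// Bayesian shrinkage sketch: pull a raw success rate toward the population
// mean, with a pull that weakens as the case count grows. k is an assumed
// prior pseudo-count, not our production parameter.
function shrunkRate(
  successes: number,
  cases: number,
  populationMean: number,
  k = 20,
): number {
  return (successes + k * populationMean) / (cases + k);
}

// A therapist at 4/4 (raw 100%) versus one at 160/200 (raw 80%),
// against a population mean of 60%:
const lowVolume = shrunkRate(4, 4, 0.6);      // ≈ 0.667
const highVolume = shrunkRate(160, 200, 0.6); // ≈ 0.782
// The 200-case therapist now correctly outranks the 4-case one.
```

This is why a tiny sample of perfect outcomes cannot game the ranking: with only 4 cases, the prior dominates.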
Every stage is wrapped individually so any single failure degrades gracefully instead of breaking the match. The full reasoning trace is persisted on the session, so any recommendation we surface is fully auditable.
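The per-stage wrapping could look roughly like this: each stage runs inside a guard that catches failures, records the outcome in the persisted reasoning trace, and falls back to a default so the rest of the pipeline keeps going. All names here are assumptions.

```typescript
// Hypothetical per-stage guard: a failing stage degrades to a fallback value,
// and both success and failure are written to an auditable trace instead of
// crashing the whole match. Names are illustrative.
type TraceEntry = { stage: string; ok: boolean; detail: string };

async function runStage<T>(
  stage: string,
  trace: TraceEntry[],
  fallback: T,
  fn: () => Promise<T>,
): Promise<T> {
  try {
    const result = await fn();
    trace.push({ stage, ok: true, detail: "completed" });
    return result;
  } catch (err) {
    trace.push({ stage, ok: false, detail: String(err) });
    return fallback; // downstream stages compensate with the default
  }
}
```

Under this shape, a failed Critique stage simply returns the heuristic ordering unchanged, and the trace entry records that the review was skipped.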
The design system (Cormorant Garamond and Inter, copper-brown on cream) was built pixel by pixel against a reference comp because we wanted something that felt like a quiet office, not a clinical SaaS dashboard.
Challenges we ran into
The biggest one was a silent platform limit. Our AI video provider's recording feature is gated by account tier, and on ours the API accepted our "enable recording" flag and then quietly never produced a recording. We only caught it by directly probing their API after spending hours trying to debug "missing webhook" issues that did not exist. The fix was to record the patient's webcam in the browser in parallel with the AI call, upload the blob ourselves, and run computer vision on it server-side. Two cameras sharing one device, a 200MB raw upload route, and a worker that accepts either a remote URL or a local file. It works, and it is honestly a better architecture than depending on the vendor.
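The browser-side fallback can be sketched roughly as below. The chunk interval, mime type, and upload route are assumptions, not our actual endpoints.

```typescript
// Sketch of the in-browser fallback: record the patient's webcam with
// MediaRecorder in parallel with the AI call, then upload the blob ourselves.
// Chunk timing, mime type, and the route name are assumptions.

class RecordingBuffer {
  private chunks: BlobPart[] = [];
  push(chunk: BlobPart) { this.chunks.push(chunk); }
  toBlob(mime = "video/webm"): Blob { return new Blob(this.chunks, { type: mime }); }
}

async function recordWebcam(durationMs: number): Promise<Blob> {
  // Second consumer of the same camera device as the AI video call.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const buffer = new RecordingBuffer();
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  recorder.ondataavailable = (e) => { if (e.data.size > 0) buffer.push(e.data); };
  return new Promise((resolve) => {
    recorder.onstop = () => {
      stream.getTracks().forEach((t) => t.stop());
      resolve(buffer.toBlob());
    };
    recorder.start(1000); // emit a chunk every second so little is lost on a crash
    setTimeout(() => recorder.stop(), durationMs);
  });
}

// The server-side worker then runs computer vision on either this upload
// or a remote URL. "/api/recordings" is a hypothetical route.
async function uploadRecording(blob: Blob): Promise<void> {
  await fetch("/api/recordings", { method: "POST", body: blob });
}
```

Chunked recording is the design choice that makes the raw upload survivable: even if the tab dies mid-call, everything buffered up to the last chunk is intact.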
The second challenge was making the AI matching defensible. We did not want a black box that just says "here is your therapist, trust us." That is dangerous in healthcare. So we forced the language model out of the actual ranking decision. It reads the transcript and writes the explanations, but the ranking is deterministic math we can show our work on.
Accomplishments that we're proud of
A real, end-to-end working product. Patient signs up, talks to AI, gets matched, and the therapist they choose has a brief waiting for them. Not a demo path. The whole loop.
Bayesian-shrunk outcome scoring that prevents low-volume therapists from gaming the rankings.
A graceful degradation story for every external dependency. If the AI call fails, the matcher still runs. If a stage of the matcher fails, the others compensate. Nothing in the platform silently fails on a real patient.
The visual signal pipeline. Real computer vision, on real recordings, no faked data anywhere.
A design that does not look like every other healthcare app.
What we learned
Healthcare problems are not technology problems wearing a stethoscope. The hardest part of building this was not the prompting or the video integration, it was deciding what a clinician would actually want to read in a brief, what a patient would actually trust in a recommendation, and where AI absolutely should not be making the decision. Every shortcut we tried to take got walked back once we asked, "would we want this for someone in our family?"
We also learned that vendor APIs lie by omission. Trust the product, verify the platform.
What's next for Anamnesis-AI
A pilot with a real group practice. Everything we have built is shaped to slot into how clinicians already work. The next milestone is putting it in front of one.
Outcomes tracking. Once a patient is in care, periodic re-screeners feed back into the matching engine so the model learns from real outcomes and not just self-reported fit.
Insurance and payor integration. The match should respect what the patient can actually afford and access.
A clinician-facing version of the brief that supports follow-up notes, so Anamnesis becomes the throughline across the whole therapeutic relationship, not just the front door.
Built With
- clerk
- cormorant-garamond
- drizzle-orm
- express.js
- ffmpeg
- getusermedia
- gpt-4o-mini
- inter
- ios
- javascript
- mediapipe-facemesh
- mediarecorder-api
- neon
- node.js
- openai-gpt-5.2
- pino
- pnpm
- postgresql
- python
- react
- react-query
- react-router
- replit
- swift
- swiftui
- tailwind-css
- tavus
- typescript
- vite
- webrtc
- zod