Inspiration

It started with a phone call.

During the early stages of this project, our team called 988 — the Suicide & Crisis Lifeline — just to understand the experience firsthand. We expected to speak with someone. Instead, we were put on hold.

Sitting there, listening to that music, we felt something shift. If we — a group of CS students doing research — felt unsettled by that silence, what does someone in a genuine mental health crisis feel in that same moment?

We looked up the numbers. From 2022 to 2024, over 10 million calls were made to 988. On high-traffic days, thousands of callers waited on hold — some for 10 to 15 minutes or more. Some hung up. Some never called back.

988 exists because every life is worth a phone call. We built CrisisLine AI because every second of that phone call matters.

What it does

CrisisLine AI is an AI-powered crisis support platform that closes the gap between when someone reaches out and when a human counselor is available — across both voice and text.

For callers in crisis: When 988 lines are overwhelmed, CrisisLine AI answers immediately. Callers can choose how they want to be heard — by voice or by chat. Our AI voice agent picks up the phone and has a real, warm, empathetic conversation using natural speech. Our web chat interface gives those who prefer to type a parallel path with the same emotional intelligence. No hold music. No silence. Someone — something — is always there.

For counselors: While the AI is on the line, it's simultaneously analyzing the conversation in real time — detecting emotional state, risk level, immediate needs, and any location details. The moment a human counselor becomes available, they don't start from zero. They open our counselor dashboard and instantly see a structured case card: severity score, conversation summary, what the person needs right now. A briefing that would take 10 minutes of reading — delivered in 5 seconds.

How we built it

Backend & AI Layer — FastAPI serves as our core backend, handling routing, session management, and business logic. We used LangChain to maintain per-caller conversation memory across sessions, enabling contextual, continuous dialogue. GPT-4o powers the mental health AI agent, guided by a carefully engineered system prompt built around real crisis counseling principles — 1-2 sentence responses, emotional mirroring, never diagnosing, always anchoring.

Voice Pipeline — Phone calls are routed through Twilio's media stream via WebSocket. Transcribed turns are piped into the LLM, and responses are synthesized back through Google Cloud Text-to-Speech, creating a fully real-time voice conversation loop.

Challenges we ran into

Real-time audio was brutal. Twilio streams µ-law audio at 8 kHz in 20 ms chunks over WebSocket. Getting AssemblyAI's streaming v3 SDK to consume those chunks cleanly (buffering at 100 ms, handling end-of-turn detection, and firing LLM calls only on complete utterances) required deep threading and async coordination between two separate event loops. A lot of audio was silently dropped before we got it right.

The system prompt is a product decision. We rewrote the AI agent's system prompt over a dozen times. Too clinical and it felt robotic. Too warm, and it felt manipulative. We studied real crisis counseling frameworks — motivational interviewing, active listening, safety assessment — to find the exact tone that felt human without overpromising. That prompt is arguably the most important line of code in the project.

Latency across the voice pipeline. Speech to Text → LLM → Text to Speech introduces compounding latency. For a crisis caller, a 4-second pause feels like abandonment. We optimized by streaming LLM output in chunks and beginning TTS synthesis before the full response was generated, significantly reducing perceived response time.
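The chunked-streaming idea can be sketched as a small generator that flushes complete sentences to TTS as soon as they appear in the token stream, instead of waiting for the whole reply (a simplified illustration; the real pipeline streams tokens from the model API):

```python
import re

def sentence_chunks(token_stream):
    """Yield complete sentences as soon as they appear in a streaming LLM
    response, so TTS synthesis can start before the full reply is done."""
    pending = ""
    for token in token_stream:
        pending += token
        # Flush on sentence-ending punctuation followed by whitespace.
        while (m := re.search(r"[.!?]\s", pending)):
            yield pending[: m.end()].strip()
            pending = pending[m.end():]
    if pending.strip():
        yield pending.strip()  # flush whatever remains at end of stream

tokens = ["I'm ", "here ", "with ", "you. ", "You ", "are ", "not ", "alone."]
chunks = list(sentence_chunks(tokens))
```

Each yielded sentence can be handed to TTS immediately, so the caller hears the first sentence while the rest is still being generated.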

Accomplishments that we're proud of

  • Built a fully functional real-time AI voice agent that can hold an empathetic crisis support conversation over an actual phone call — end to end, in a hackathon timeframe.
  • Built a dual-channel platform (voice + chat) that routes into a single unified real-time dashboard — something that doesn't exist in the current 988 infrastructure.

What we learned

We learned that the hardest part of building for mental health is not the technology; it's the responsibility. Every design decision carries weight. The wrong word in a system prompt, the wrong pause in a voice response, the wrong severity score on a case card — all of it has real consequences for real people.

What's next for CrisisLine AI

Counselor co-pilot mode. Rather than just a pre-call briefing, we want the AI to sit alongside the counselor during the live call — surfacing real-time suggestions, flagging escalation signals, and prompting follow-up questions based on the conversation.

988 partnership and compliance. We want to work directly with crisis centers to deploy CrisisLine AI as a real overflow layer — with full HIPAA compliance, data anonymization, and integration into existing counselor workflows.

Multilingual expansion. AssemblyAI's multilingual streaming detection is already in our pipeline. The next step is ensuring our LLM agent responds fluently in the caller's detected language — because a crisis doesn't wait for a translator.

**The infrastructure exists. The gap is real. And no one should have to face that silence alone.**
