Inspiration
SAFE was inspired by the pressure mental health counselors face during difficult calls. In those moments, they have to listen, assess risk, choose careful language, think about next steps, and document everything afterward. We wanted to build something that supports the counselor without replacing them. SAFE is designed to reduce cognitive load and help counselors stay calm, grounded, and prepared while they support someone else.
What it does
SAFE is a real-time clinical guidance tool for mental health counselors. A counselor enters or dictates what is happening on a call, and SAFE generates a risk level, a recommended response, the technique behind that response, and suggested next steps.
The recommended response is written as something the counselor can say directly to the caller. Follow-up guidance is written to the counselor, referring to the caller as “they” or “the caller,” so it is always clear that SAFE is assisting the counselor, not replacing them. SAFE can also answer custom follow-up questions, generate session notes, and read suggested responses aloud using text-to-speech.
How we built it
We built the frontend with React, Vite, TypeScript, and React Router. The landing page lives at /, and the SAFE counselor workspace lives at /chat.
The backend is built with FastAPI. It exposes endpoints for guidance, follow-up answers, session summaries, voice output, and health checks.
For the AI layer, we use Groq to generate structured counselor guidance. We also built a Retrieval-Augmented Generation (RAG) pipeline: the backend loads crisis-support and trauma-informed care PDFs, splits them into chunks, embeds them, and searches for relevant context when a counselor submits a situation. The retrieved context is passed into the AI prompt so responses are grounded in those materials.
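The chunk–embed–retrieve flow can be illustrated with a simplified sketch. The real pipeline uses PDF loaders and learned embeddings; here a toy bag-of-words embedding stands in so the flow is easy to follow:

```python
# Simplified sketch of the RAG retrieval step: chunk documents, embed
# them, and return the chunks most relevant to the counselor's input.
# A toy bag-of-words embedding replaces the real embedding model.
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into fixed-size word chunks (real splitters overlap)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
        * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity; the top-k become prompt context."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

In production, `retrieve` would run over an index built once at startup, which is why the backend has to finish loading documents before it can serve guidance.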
We also use Tavily to add live web context when needed, and ElevenLabs to turn counselor-ready responses into spoken audio. This lets counselors hear the suggested language before using it.
Challenges we ran into
One major challenge was getting the AI responses to speak to the right person. Early versions sometimes sounded like SAFE was talking directly to the caller, which was not the goal. We refined the prompts so SAFE clearly talks to the counselor and refers to the caller with they/them language.
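The fix amounted to stating the audience explicitly in the system prompt. The wording below is illustrative, not our exact production prompt:

```python
# Illustrative system prompt showing how we steered the model to
# address the counselor, not the caller (production wording differs).
SYSTEM_PROMPT = """You are SAFE, an assistant for mental health counselors.
You are speaking TO the counselor, never to the caller.
Refer to the person in crisis only as "the caller" or "they".
The one exception is the recommended response field, which must be
written as words the counselor can say directly to the caller.
All clinical decisions remain with the counselor."""

def build_messages(situation: str) -> list[dict]:
    """Assemble the chat messages sent to the model."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Current call situation: {situation}"},
    ]
```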
Another challenge was making the backend reliable. The RAG pipeline needs to load documents and build an index before guidance can be generated. We had to make sure the app handled startup, API errors, and missing services clearly.
Voice generation also created some issues around ElevenLabs voice IDs, plan access, and fallback behavior. We added browser speech synthesis as a backup so the app still works even if a selected ElevenLabs voice is unavailable.
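The fallback logic is a simple try-first pattern. `synthesize_elevenlabs` below is a stand-in for the real API call, which can fail on an invalid voice ID or a plan restriction:

```python
# Sketch of the voice fallback: try ElevenLabs first, and if the voice
# or plan is unavailable, tell the client to use the browser's built-in
# speech synthesis instead. The synth function is a stand-in.

def synthesize_elevenlabs(text: str, voice_id: str) -> bytes:
    """Placeholder for the ElevenLabs call; raises if the voice fails."""
    raise RuntimeError("voice unavailable")

def speak(text: str, voice_id: str) -> dict:
    """Return ElevenLabs audio, or signal a browser-side fallback."""
    try:
        audio = synthesize_elevenlabs(text, voice_id)
        return {"source": "elevenlabs", "audio": audio}
    except Exception:
        # The frontend sees this and calls window.speechSynthesis,
        # so the counselor still hears the suggested response.
        return {"source": "browser", "audio": None}
```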
We also had deployment challenges, especially around client-side routing with React Router on Vercel. Opening /chat directly returned a 404 until we added a Vercel rewrite configuration that serves index.html for every path.
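This is the standard single-page-app rewrite for Vercel, placed in vercel.json at the project root:

```json
{
  "rewrites": [
    { "source": "/(.*)", "destination": "/index.html" }
  ]
}
```

With this in place, every path is served index.html, and React Router takes over routing in the browser.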
Accomplishments that we're proud of
We are proud that SAFE feels like a real tool instead of just a chatbot. It has a clear user: the counselor. It gives structured guidance, risk assessment, follow-up prompts, session notes, and voice output in one workflow.
We are also proud of the RAG pipeline because it gives the system a stronger foundation than a normal AI prompt. SAFE can use relevant crisis-support material instead of relying only on the model’s general knowledge.
Another accomplishment is the product design. The landing page and chat workspace create a serious, calm, and professional experience that matches the purpose of the app. SAFE is meant for high-pressure moments, so the interface needed to feel focused and trustworthy.
What we learned
We learned that AI tools in mental health need very careful boundaries. It is not enough for the model to produce a good-sounding answer. The app has to make clear who the AI is talking to, what role it plays, and what decisions remain human decisions.
We also learned how important prompt design is. Small wording changes can completely change whether the answer feels useful, awkward, or unsafe.
Technically, we learned a lot about connecting a React frontend to a FastAPI backend, building a RAG pipeline over PDFs, working with Groq, using Tavily for live context, and integrating ElevenLabs for voice output.
What's next for SAFE
Next, we want to further improve the counselor workflow. We want SAFE’s follow-up prompts to be more consistent, more clinically useful, and easier to act on. We also want to make the RAG pipeline faster and easier to deploy.
We would like to add clearer source visibility so counselors can see when guidance is based on document context or live web context. We also want to improve session note formatting, add stronger safety escalation flows, and support more counselor settings beyond crisis calls.
Long term, SAFE could become a broader support platform for trained helpers: crisis counselors, peer support workers, school counselors, and healthcare teams. The goal is not to automate care. The goal is to help the people providing care feel less alone and more prepared.
Built With
- elevenlabs
- fastapi
- python
- react.js
- tavily
- typescript