Inspiration

SafeTalk was born from a simple observation — it’s often easier to type what we feel than to say it out loud. Many students, especially those far from home, struggle to find a place where they can safely share what’s on their mind without judgment or labels. As international students ourselves, we noticed how conversations around mental health often start too late — after burnout, after isolation, after a breaking point.

We wanted to create something that could meet people early, in a warm, approachable way — blending the compassion of a friend with the reflection prompts of a counselor. That’s how SafeTalk came to life — an AI-supported web space where users can type, reflect, and receive gentle guidance or grounding exercises.

What we learned

Working on SafeTalk taught us that designing for mental health means designing for emotion. We explored concepts in affective computing and responsible AI communication, understanding how tone, pacing, and visual design affect comfort and trust.

On the technical side, we learned:

How to build a React + Vite frontend with smooth Framer Motion animations for a calming user experience.

How to set up a lightweight Express.js backend for secure prompt handling and API communication.

How to create prompt templates for empathetic AI responses — balancing warmth, neutrality, and safety.

Mathematically, we even explored sentiment weighting functions such as:

w(s) = \frac{1}{1 + e^{-k(s - s_0)}}

where s represents sentiment confidence, s_0 is the midpoint of the curve, and k adjusts sensitivity, helping us fine-tune the tone of AI feedback.
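
A minimal sketch of that weighting in TypeScript (the default k and s_0 below are illustrative, not tuned values):

```typescript
// Logistic weighting of sentiment confidence: w(s) = 1 / (1 + e^(-k(s - s0))).
// s is the sentiment confidence in [0, 1]; k controls how sharply the weight
// changes around the midpoint s0. The defaults here are illustrative only.
function sentimentWeight(s: number, k = 10, s0 = 0.5): number {
  return 1 / (1 + Math.exp(-k * (s - s0)));
}

// e.g. a fairly confident stress signal gets a high weight:
console.log(sentimentWeight(0.7).toFixed(2)); // ≈ 0.88
```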

How we built it

Frontend:

Built an interactive prompt card with starter questions and an anonymous toggle.

Used Framer Motion to animate floating “orbs” — subtle, breathing-like motion to reduce visual anxiety.
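
A rough sketch of one such orb, assuming Framer Motion's keyframe animations (the sizes, colors, and timing here are illustrative):

```tsx
import { motion } from "framer-motion";

// One "breathing" orb: a pastel blue-to-purple circle that slowly scales and
// fades in a loop, echoing a calm breathing rhythm.
export function BreathingOrb() {
  return (
    <motion.div
      style={{
        width: 160,
        height: 160,
        borderRadius: "50%",
        background: "linear-gradient(135deg, #bcd7ff, #d8c8ff)",
        filter: "blur(2px)",
      }}
      animate={{ scale: [1, 1.08, 1], opacity: [0.7, 1, 0.7] }}
      transition={{ duration: 6, repeat: Infinity, ease: "easeInOut" }}
    />
  );
}
```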

Backend:

Node.js + Express REST API that routes user input to a lightweight AI model or LLM API.

Implemented keyword detection for emergency terms (e.g., “suicide,” “hurt,” “hopeless”) that triggers redirects to safe resources (a simplified sketch follows below).

Stored anonymized conversation snippets for reflection (no personally identifiable info).
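
A simplified sketch of how the chat route and the keyword check could fit together; the endpoint path, the crisis-term list, and the callLLM helper are illustrative placeholders, not our exact implementation:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Example crisis terms from the description above; a production list would be broader.
const CRISIS_TERMS = ["suicide", "hurt myself", "hopeless"];

// Hypothetical helper standing in for the LLM API call.
async function callLLM(message: string): Promise<string> {
  return "placeholder reply";
}

app.post("/api/chat", async (req, res) => {
  const message = String(req.body.message ?? "");
  const lower = message.toLowerCase();

  // Safety check first: crisis terms skip the model and return resources instead.
  if (CRISIS_TERMS.some((term) => lower.includes(term))) {
    return res.json({
      type: "resources",
      text: "You are not alone. Please consider reaching out to a crisis line or someone you trust.",
    });
  }

  // Otherwise forward the message (with no identifying data attached) to the model.
  const reply = await callLLM(message);
  res.json({ type: "reply", text: reply });
});

app.listen(3000);
```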

AI Integration:

Created modular prompt templates like "Reflect gently and offer grounding suggestions if stress is detected."

Response tone and length tuned through parameter weighting and content filters.
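
A sketch of how a modular template and tone parameters might be wired together in TypeScript; the wording, interface shape, and values are illustrative:

```typescript
// Illustrative modular prompt template; the real wording went through many iterations.
const REFLECT_TEMPLATE = `You are a warm, supportive companion, not a therapist.
Reflect gently on what the user shares. If stress is detected, offer one or two
grounding suggestions. Never diagnose or make clinical claims.`;

// Hypothetical request shape; temperature and max tokens are the main knobs
// for keeping replies short, calm, and consistent in tone.
interface CompletionRequest {
  system: string;
  user: string;
  temperature: number; // lower = steadier, more predictable tone
  maxTokens: number;   // short replies rather than lectures
}

function buildRequest(userMessage: string): CompletionRequest {
  return {
    system: REFLECT_TEMPLATE,
    user: userMessage,
    temperature: 0.4,
    maxTokens: 220,
  };
}
```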

Deployment:

Deployed frontend on Vercel and backend on Render, using CI/CD from GitHub.

Configured CORS and environment variables for secure communication between both ends.
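
A minimal sketch of such a CORS setup with environment variables (FRONTEND_URL and the fallback port are assumed names, not our exact config):

```typescript
import express from "express";
import cors from "cors";

const app = express();

// Only allow the deployed frontend origin; the URL comes from an environment
// variable on the hosting side, so nothing sensitive is hard-coded in the repo.
app.use(
  cors({
    origin: process.env.FRONTEND_URL ?? "http://localhost:5173", // Vite dev default
    methods: ["POST"],
  })
);

app.use(express.json());

// The hosting platform injects PORT; fall back to 3000 when running locally.
app.listen(Number(process.env.PORT) || 3000);
```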

Challenges we ran into

Tone calibration: Training or prompting an LLM to sound empathetic without crossing into therapeutic claims was a key challenge. We iterated dozens of times to find a balance between warmth and responsibility.

Data privacy & safety: Since this involves sensitive input, we implemented strict anonymization, no persistent logs, and clear disclaimers. Our biggest challenge was keeping the platform human and safe without feeling clinical.

Emotion through design: Conveying empathy through color, typography, and motion required experimentation — too vibrant felt overstimulating, too muted felt lifeless. Eventually, the pastel blue and purple gradient hit the sweet spot for comfort.

Technical syncing: Managing backend–frontend timing for real-time responses and animation refresh without lag or redundant API calls required caching and async queueing logic.
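
A greatly simplified version of that caching and deduplication idea, with illustrative names:

```typescript
// In-memory cache plus in-flight deduplication: identical prompts fired in quick
// succession share one pending API call instead of each hitting the model.
const cache = new Map<string, string>();
const inFlight = new Map<string, Promise<string>>();

async function getReply(
  prompt: string,
  fetchReply: (p: string) => Promise<string>
): Promise<string> {
  const cached = cache.get(prompt);
  if (cached !== undefined) return cached;

  // Reuse a request that is already on its way for the same prompt.
  const pending = inFlight.get(prompt);
  if (pending) return pending;

  const request = fetchReply(prompt)
    .then((reply) => {
      cache.set(prompt, reply);
      return reply;
    })
    .finally(() => inFlight.delete(prompt));

  inFlight.set(prompt, request);
  return request;
}
```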

Built With

express.js, framer-motion, node.js, react, render, vercel, vite
