Inspiration

In the U.S., mental health support often exists, but not when people need it most. More than 59 million U.S. adults live with a mental illness, yet estimates suggest only about half receive treatment. Access is also structurally limited: over 169 million Americans live in federally designated Mental Health Professional Shortage Areas, where finding a therapist can mean long waits, no availability at all, or prohibitive cost. And in moments of anxiety or panic, people often can’t type, explain, or even navigate an app. We wanted to design for that exact moment: when support is needed immediately, but access and friction get in the way.

What it does

MindMemos is a gesture- and voice-first mental health support platform built around real lived experiences. Users can do guided journaling, browse a feed of recovery stories, and chat with people who have faced similar struggles. If someone doesn’t even know the name of what they’re experiencing, they can describe or search for their symptoms in the AI chatbot (for example: “racing heart,” “can’t breathe,” “dizzy”), and the system recommends peers who have already gone through and recovered from similar situations so they can connect and talk. If a peer is consistently helpful, the user can add them as an emergency contact to reach out to during future moments of distress. Search results are ranked by trust: supporters earn +10 XP every time someone marks them as helpful, so the most helpful people surface at the top for each topic or symptom (see the sketch below). When using a phone isn’t feasible, the smartwatch provides one-gesture access to Talk-to-AI for calming guidance, with optimized API calls and lightweight flows to keep latency as low as possible on both the web and the watch.
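As a rough illustration, here is a minimal sketch of how that trust ranking could work with Mongoose; the schema and field names (Supporter, supporterXp, topics) are hypothetical stand-ins, not our actual MindMemos models:

```typescript
// Minimal sketch (Mongoose): rank supporters for a symptom query by trust XP.
// Schema and field names here are illustrative, not the production models.
import { Schema, model } from "mongoose";

const Supporter = model(
  "Supporter",
  new Schema({
    username: { type: String, required: true },
    topics: [String], // symptoms/topics this peer has recovered from
    supporterXp: { type: Number, default: 0 }, // +10 per "helpful" mark
  })
);

// Award XP when a user marks a supporter as helpful.
async function markHelpful(supporterId: string): Promise<void> {
  await Supporter.updateOne({ _id: supporterId }, { $inc: { supporterXp: 10 } });
}

// Search peers for a symptom, most-trusted first.
async function findSupporters(symptom: string) {
  return Supporter.find({ topics: symptom })
    .sort({ supporterXp: -1 }) // highest trust at the top
    .limit(20)
    .lean();
}
```

Keeping XP as a stored counter means search is a plain indexed sort rather than an aggregation computed at query time.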

How we built it

MindMemos runs on Google Cloud: a single GCE VM behind Caddy, serving an Angular 20 frontend and a Node.js/Express backend. The backend runs in Docker and uses MongoDB (Mongoose) for users, posts, comments, DMs, emergency contacts, panic incidents, and walkie-talkie messages. AI is powered by Gemini 3 Pro via the v1beta REST generateContent API, configured through GEMINI_API_KEY and GEMINI_MODEL. /api/ai/chat and /api/ai/panic-chat ground prompts in the user’s recent posts (up to 10) and relevant community posts from MongoDB. Speech support uses Google Cloud for STT (/api/ai/speech/transcribe) and TTS (/api/ai/tts). Real-time features use a custom Voice Gateway WebSocket and Socket.IO for DMs and voice chat, with low-latency settings (compression disabled, ping/pong keepalive, jitter buffer, and reconnection with exponential backoff). Walkie-talkie audio is converted to M4A for playback on our watch and other clients, and wearable apps can call the same REST and WebSocket endpoints.
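For context, here is a minimal sketch of a grounded chat call, assuming Node 18+ with global fetch; the helper name, prompt wording, and system instruction are illustrative, while the endpoint shape (v1beta models/{model}:generateContent with an x-goog-api-key header) matches the public Gemini REST API:

```typescript
// Minimal sketch of a grounded Gemini call (Node 18+, global fetch).
// askGemini and the prompt text are illustrative, not our production code.
async function askGemini(userMessage: string, recentPosts: string[]): Promise<string> {
  // Ground the prompt in up to 10 of the user's recent posts.
  const grounding = recentPosts
    .slice(0, 10)
    .map((p, i) => `Post ${i + 1}: ${p}`)
    .join("\n");

  const url =
    `https://generativelanguage.googleapis.com/v1beta/models/` +
    `${process.env.GEMINI_MODEL}:generateContent`;

  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-goog-api-key": process.env.GEMINI_API_KEY ?? "",
    },
    body: JSON.stringify({
      systemInstruction: {
        parts: [
          { text: "You are a calm, friendly mental health companion. Reply warmly and briefly." },
        ],
      },
      contents: [
        {
          role: "user",
          parts: [{ text: `My recent journal posts:\n${grounding}\n\nMessage: ${userMessage}` }],
        },
      ],
    }),
  });

  // Pull the first candidate's text, if any.
  const data: any = await res.json();
  return data.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
}
```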

Challenges we ran into

Reducing latency was one of our toughest problems, both for real-time user-to-user communication on the website and for delivering fast responses on our watch, where delays can break trust in high-stress moments. We had to optimize the full pipeline (API calls, real-time sockets, audio handling, and payload sizes) so voice, messages, and AI responses feel instant instead of “loading.” Another major challenge was shaping the AI experience: we didn’t want robotic or overly formal replies, so we iterated on prompts and response formatting to make Talk-to-AI feel calm, friendly, and genuinely understanding while still staying safe and consistent.

Audio brought its own obstacles: browser audio capture and streaming won’t work reliably without HTTPS, so enabling secure audio transfer, both to the AI and between users, required setting up SSL certificates and running the entire system over HTTPS. Building for our watch also forced extreme simplicity: the experience had to launch with minimal effort (one hand gesture), and creating a widget flow that opens the right support mode quickly was a challenging UX and integration task. Finally, keeping the AI grounded in the latest community feed and a user’s recent posts was non-trivial: making sure new posts sync smoothly into the panic chat context without delays or inconsistencies required careful database querying and prompt construction so the AI stays up to date in real time.
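To make that grounding point concrete, here is a minimal sketch of rebuilding the context on every panic-chat turn so new posts are picked up immediately; the Post model and its fields are hypothetical stand-ins for our actual schemas:

```typescript
// Minimal sketch: rebuild grounding context on every panic-chat turn.
// The Post model and field names here are hypothetical stand-ins.
import { Schema, model } from "mongoose";

const Post = model(
  "Post",
  new Schema({ author: String, text: String, tags: [String], createdAt: Date })
);

async function buildGroundingContext(userId: string, symptom: string): Promise<string> {
  // Re-query on each request instead of caching, so the context never goes stale.
  const [myPosts, communityPosts] = await Promise.all([
    Post.find({ author: userId }).sort({ createdAt: -1 }).limit(10).lean(),
    Post.find({ tags: symptom, author: { $ne: userId } })
      .sort({ createdAt: -1 })
      .limit(5)
      .lean(),
  ]);
  return [
    "My recent journal entries:",
    ...myPosts.map((p) => `- ${p.text}`),
    "Relevant community stories:",
    ...communityPosts.map((p) => `- ${p.text}`),
  ].join("\n");
}
```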

Accomplishments that we're proud of

We’re proud that MindMemos doesn’t treat support as “just an AI chatbot,” but as a peer-driven system built around real lived experiences. We shipped a full end-to-end stack on Google Cloud (GCE + Docker + Caddy) with an Angular frontend, a Node.js/Express API, and MongoDB models covering journaling, feed, DMs, emergency contacts, panic incidents, and walkie-talkie. We integrated Gemini 3 Pro with grounded prompting that pulls a user’s recent posts and relevant community stories so responses feel calm, human, and understanding instead of generic or robotic. We also built low-latency real-time communication (Voice Gateway WebSocket + Socket.IO) and enabled secure audio over HTTPS so both AI-to-user and user-to-user support stays fast and reliable.
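As a reference point, here is a minimal sketch of the kind of low-latency Socket.IO settings described above; the exact values are illustrative, while the options themselves are standard Socket.IO:

```typescript
// Minimal sketch of low-latency Socket.IO settings (values illustrative).
import { createServer } from "http";
import { Server } from "socket.io";
import { io as connect } from "socket.io-client";

const httpServer = createServer();
const ioServer = new Server(httpServer, {
  perMessageDeflate: false, // disable compression: less per-message CPU latency
  pingInterval: 10_000,     // frequent ping/pong keepalive
  pingTimeout: 5_000,       // drop dead connections fast
});
httpServer.listen(3000);

// Client side: reconnect with jittered, growing backoff.
const socket = connect("https://example.invalid", {
  reconnection: true,
  reconnectionDelay: 500,      // first retry after ~0.5 s
  reconnectionDelayMax: 5_000, // backoff grows up to 5 s
  randomizationFactor: 0.5,    // jitter to avoid thundering herds
});
```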

Most importantly, we’re proud of the watch-first gesture experience: users can access the core support flow with a single hand gesture, with no tapping or typing, making it possible to reach help even when someone is overwhelmed or can’t use their phone. Being able to trigger Talk-to-AI, connect to peer support, and use key features through gesture control turns the watch into an always-available “panic button” for support, which is exactly the kind of low-friction access we set out to build.

What we learned

We learned that mental health support must be designed for the hardest moment: when someone is overwhelmed, even a few extra steps can make them give up, which is why gesture-first access matters. We also learned that latency is not just an engineering metric; for real-time peer support, voice, and AI help, delays erode trust, so we had to treat speed and reliability as core features. On the technical side, we learned how important secure infrastructure is for audio: HTTPS/SSL isn’t optional if you want reliable microphone access and streaming. Deploying on Google Cloud taught us why Docker and reverse proxies matter in practice: repeatable deployments, consistent environments, and smoother rollbacks and debugging. Finally, we learned that AI quality is heavily shaped by context and tone: grounding responses in recent user and community posts prevents generic replies, and careful prompt shaping makes the assistant feel calm, friendly, and human instead of robotic.

What's next for MindMemos: Gesture & Voice-First Mental Health Journal

Next, we’ll turn MindMemos into a true cross-device support layer, bringing the same low-friction experience to iOS, Android, and more wearables so help is always one gesture away, no matter what device someone is using. We plan to introduce smarter, safer matching that connects users to the right recovered peers faster, along with stronger verification and moderation so trust scales with the community. On the product side, we’ll add deeper personal insights (patterns over time, triggers, what helps) and multilingual support so MindMemos works for more people globally. From a business perspective, we’ll follow a freemium model: core journaling and peer support stay free, while premium unlocks advanced insights, higher-quality personalization, unlimited voice sessions, and priority access to top supporters, plus longer-term partnerships with universities and wellness programs to expand access responsibly.

Built With

  • angular
  • avfoundation
  • caddy
  • docker
  • express.js
  • gce
  • google-artifact-registry
  • google-cloud-speech-to-text
  • google-cloud-text-to-speech
  • google-gemini-api
  • javascript
  • mongodb
  • mongoose
  • node.js
  • socket.io
  • swift
  • swiftui
  • typescript
  • watchkit
  • websockets