The Story Behind MINDFUL

Inspiration

The idea for MINDFUL came from a simple observation — most mental health apps feel like they were designed by insurance companies, not people who've actually struggled. They're clinical, sterile, and ironically anxiety-inducing to navigate.

I wanted to build something that felt like your space. Not a dashboard. Not a tracker. A place that actually listens.


What It Does

MINDFUL is a private AI-powered mental wellness system that runs entirely in your browser. No accounts. No servers storing your thoughts. No third party ever sees what you write.

What the project started with:

  • Daily mood check-in modal (6 emotional states)
  • Morning and evening habit checklists
  • Vent Room with voice recording and text journaling using native browser APIs
  • Three stress-relief games — Bubble Pop, Sandfall (cellular automata physics), and Color Crush
  • Guided meditation with browser speech synthesis
  • Ambient soundscape player (nature, ocean, rain, zen)
  • Archive with a calendar view of past entries
  • Basic keyword-based emotion detection on journal entries
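
A minimal sketch of what that first-pass detector looked like. The word lists and labels here are illustrative, not MINDFUL's exact ones:

```js
// Illustrative sketch of the original keyword-based detector.
const EMOTION_KEYWORDS = {
  anxious: ['worried', 'nervous', 'overwhelmed', 'panic'],
  sad: ['lonely', 'hopeless', 'exhausted', 'cry'],
  happy: ['grateful', 'excited', 'proud', 'calm'],
};

function detectEmotion(entry) {
  const text = entry.toLowerCase();
  let best = { label: 'neutral', hits: 0 };
  for (const [label, words] of Object.entries(EMOTION_KEYWORDS)) {
    const hits = words.filter((w) => text.includes(w)).length;
    if (hits > best.hits) best = { label, hits };
  }
  return best.label;
}
```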

Refinements made in the Grand Finale:

  • localStorage persistence — entries now survive page refresh. Previously all data was lost on reload, making the app practically unusable for tracking over time
  • Real AI analysis — replaced the keyword matching system with live Groq/Llama 3 inference. The ANALYZE button in the archive now returns actual emotional insight, a positivity score, life theme detection, and a personalized suggestion
  • Live journal analysis — the Vent Room now analyzes your entry in real time as you type, debounced at 1.5 seconds, so emotional feedback appears while you're still writing
  • ARIA — AI Wellness Companion — a full conversational chat interface powered by Groq. Context-aware, mood-informed, and trained to validate before advising
  • AI Weekly Wellness Report — one click generates a full personalized report from the past 7 days: headline, wellness score, emotional arc, strengths, focus areas, three recommendations, and an affirmation
  • 7-day mood trend chart — built in pure SVG from check-in data, no external library
  • Streak counter — consecutive check-in day tracking with localStorage
  • Worry Jar — write a worry, watch it animate upward and dissolve. Grounded in CBT externalization techniques
  • Journal export — download your entire archive as a .txt file using the Blob API
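
The export itself is a few lines of browser code. A sketch, assuming each entry is an object with date and text fields (the field names are illustrative):

```js
// Download the journal archive as a .txt file via the Blob API.
// The entry shape ({ date, text }) is assumed for illustration.
function exportJournal(entries) {
  const body = entries
    .map((e) => `${e.date}\n${e.text}`)
    .join('\n\n---\n\n');
  const blob = new Blob([body], { type: 'text/plain' });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = 'mindful-journal.txt';
  a.click();
  URL.revokeObjectURL(url);
}
```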

How I Built It

Every AI feature — live analysis, companion chat, weekly report — is a direct fetch call from the client. The API key is injected from environment variables and never hard-coded in the source.
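
A sketch of what one of those calls can look like against Groq's OpenAI-compatible chat endpoint (the env variable name, model id, and prompt here are illustrative stand-ins, not MINDFUL's exact values):

```js
// Client-side chat completion against Groq's OpenAI-compatible API.
// VITE_GROQ_API_KEY and the model string are assumptions for this sketch.
async function analyzeEntry(entryText) {
  const res = await fetch('https://api.groq.com/openai/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${import.meta.env.VITE_GROQ_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'llama3-70b-8192',
      messages: [
        { role: 'system', content: 'Return JSON: emotion, positivity, theme, suggestion.' },
        { role: 'user', content: entryText },
      ],
      temperature: 0.4,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```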

For the games, I built a cellular automata engine from scratch on HTML5 Canvas. Each material (sand, water, fire, gas, oil, ice) has its own physics rules computed per frame:

$$\text{particle}_{t+1} = f(\text{particle}_t, \text{neighbors}_t, \text{behavior})$$

The sand simulation runs at 60fps across a grid of ~58,000 cells updated every animation frame using requestAnimationFrame.
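
As a minimal sketch of the core idea for a single material: a flat Uint8Array grid where 0 is empty and 1 is sand, scanned bottom-up so no grain moves twice in one frame. The real engine adds the other five materials and their rules:

```js
// Minimal falling-sand rule on a flat Uint8Array (0 = empty, 1 = sand).
// Cell indexing: grid[y * W + x]. 320 * 180 ≈ 58,000 cells.
const W = 320, H = 180;
const grid = new Uint8Array(W * H);

function stepSand() {
  for (let y = H - 2; y >= 0; y--) {      // bottom-up: rows below are settled
    for (let x = 0; x < W; x++) {
      const i = y * W + x;
      if (grid[i] !== 1) continue;
      const below = i + W;
      if (grid[below] === 0) {                         // fall straight down
        grid[below] = 1; grid[i] = 0;
      } else if (x > 0 && grid[below - 1] === 0) {     // slide down-left
        grid[below - 1] = 1; grid[i] = 0;
      } else if (x < W - 1 && grid[below + 1] === 0) { // slide down-right
        grid[below + 1] = 1; grid[i] = 0;
      }
    }
  }
}
```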

The mood trend chart is pure SVG — no Recharts, no D3. Just calculated coordinates mapped from mood scores to pixel positions:

$$y = H - P - \frac{\text{score}}{100} \times (H - 2P)$$

where $H$ is the chart height and $P$ is the padding.
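
In code, that mapping is only a few lines. A sketch with illustrative function and parameter names:

```js
// Map daily mood scores (0–100) to SVG polyline points.
// H = chart height, P = padding, matching the formula above.
function moodPoints(scores, width = 280, H = 120, P = 12) {
  const stepX = (width - 2 * P) / (scores.length - 1);
  return scores
    .map((score, i) => {
      const x = P + i * stepX;
      const y = H - P - (score / 100) * (H - 2 * P);
      return `${x},${y}`;
    })
    .join(' ');
}
// Usage: <polyline points={moodPoints([40, 55, 70, 60, 80, 75, 90])}
//                   fill="none" stroke="currentColor" />
```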


Challenges

The biggest challenge was making AI feel instant. LLM inference latency can break immersion — especially for live analysis while typing. The solution was debouncing at 1500ms and showing a pulsing [ ANALYZING... ] state immediately, so the UI never feels frozen.
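
As a sketch, the pattern looks roughly like this as a React hook (the names and shape are illustrative, not the exact implementation):

```js
import { useEffect, useState } from 'react';

// Debounced live analysis: show the analyzing state immediately,
// fire the API call 1.5s after the user stops typing.
function useLiveAnalysis(text, analyze) {
  const [status, setStatus] = useState('idle');
  const [result, setResult] = useState(null);

  useEffect(() => {
    if (!text.trim()) return;
    setStatus('analyzing');                  // pulsing [ ANALYZING... ] state
    const timer = setTimeout(async () => {
      const insight = await analyze(text);   // e.g. the Groq call sketched earlier
      setResult(insight);
      setStatus('done');
    }, 1500);
    return () => clearTimeout(timer);        // cancel if the user keeps typing
  }, [text, analyze]);

  return { status, result };
}
```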

JSON parsing of LLM responses was a constant source of breakage. The model would sometimes wrap output in markdown code fences, add preamble text, or return malformed JSON under token pressure. Every Groq call now strips markdown fences and has a try/catch that falls back gracefully rather than crashing the UI.
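
A sketch of that defensive parsing pattern (function name illustrative):

```js
// Defensive parse for LLM output: strip markdown fences and preamble,
// then fall back to a neutral result instead of crashing the UI.
function safeParseLLMJson(raw, fallback = null) {
  try {
    let text = raw.trim()
      .replace(/^```(?:json)?\s*/i, '')   // leading code fence
      .replace(/\s*```$/, '');            // trailing code fence
    // If the model added preamble text, grab the outermost {...} span.
    const start = text.indexOf('{');
    const end = text.lastIndexOf('}');
    if (start !== -1 && end > start) text = text.slice(start, end + 1);
    return JSON.parse(text);
  } catch {
    return fallback;                      // graceful degradation
  }
}
```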

The Sandfall game was the hardest single component to build. A naive implementation that re-renders the entire canvas on every particle update was dropping to 8fps. The fix was decoupling the simulation grid from the render loop — updating physics in a flat typed array and only calling fillRect for cells that changed state.
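
Continuing the sand sketch from earlier (reusing its W, H, grid, and stepSand), the dirty-cell rendering idea looks roughly like this; the cell size and palette are illustrative:

```js
// Render loop decoupled from the simulation: only cells whose value
// changed since the last frame get a fillRect (prev mirrors grid).
const CELL = 3;                          // pixels per cell
const prev = new Uint8Array(W * H);
const COLORS = ['#000', '#d2b469'];      // 0 = empty, 1 = sand

function render(ctx) {
  for (let i = 0; i < grid.length; i++) {
    if (grid[i] === prev[i]) continue;   // unchanged cell: skip the draw
    ctx.fillStyle = COLORS[grid[i]];
    ctx.fillRect((i % W) * CELL, ((i / W) | 0) * CELL, CELL, CELL);
    prev[i] = grid[i];
  }
}

function loop(ctx) {
  stepSand();                            // physics on the flat typed array
  render(ctx);                           // draw only the dirty cells
  requestAnimationFrame(() => loop(ctx));
}
```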

localStorage serialization hit a wall with audio blobs — you can't serialize binary data to JSON. Voice recordings are kept in memory only during the session, while text entries persist indefinitely.
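
A sketch of the resulting persistence split, assuming a hypothetical audioBlob field on each entry (the key and field names are illustrative):

```js
const KEY = 'mindful-journal';

// Persist text entries only; voice recordings (Blobs) stay in memory,
// so they are stripped before serializing.
function saveEntries(entries) {
  const persistable = entries.map(({ audioBlob, ...rest }) => rest);
  localStorage.setItem(KEY, JSON.stringify(persistable));
}

function loadEntries() {
  try {
    return JSON.parse(localStorage.getItem(KEY)) ?? [];
  } catch {
    return [];                 // corrupted or missing data: start fresh
  }
}
```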


What I Learned

  • Groq's inference API is genuinely fast enough for near-real-time UX if you design around the latency
  • Cellular automata are surprisingly simple to implement but extremely satisfying to watch
  • Browser-native APIs (Web Speech, MediaStream, Web Audio, Canvas) can replace entire feature categories that would normally require a backend
  • The hardest part of building a wellness app isn't the features — it's making the empty states feel safe rather than hollow

Built With

  • React 18
  • Groq API (Llama 3)
  • Web Speech API
  • MediaStream Recording API
  • Web Audio API
  • HTML5 Canvas API
  • localStorage API
  • IBM Plex Mono (Google Fonts)
  • Lucide React
  • Vercel
