Inspiration

We've both watched friends — and ourselves — hit walls we didn't see coming. Not dramatic breakdowns, but slow fades: the 2am study sessions that stop feeling productive, the group chats you stop replying to, the week where everything feels pointless and you can't explain why. The worst part isn't the burnout itself. It's that by the time you recognize it, you're already deep in it.

We started asking: what if there was something that noticed before you did? Not a therapist, not a wellness app with mood emojis — something that actually reads the signals you're already producing and shows them back to you, clearly, before it's too late. That's where Cognitive Mirror came from.

What it does

Cognitive Mirror is a mental state navigator for students. You describe how you're feeling — a few sentences, a voice note, a journal dump — and the system combines that with behavioral signals like sleep duration, calendar load, and communication patterns to build a picture of your cognitive state.

The output isn't vague. It gives you a burnout score and a breakdown across three research-backed dimensions — Exhaustion, Cynicism, and (reduced) Professional Efficacy — drawn directly from the WHO ICD-11 burnout definition and the Maslach Burnout Inventory. You get specific, evidence-grounded insights like "your language patterns indicate elevated cognitive load, consistent with prefrontal cortex fatigue," plus actionable suggestions tied to real research. Over time, it builds a pattern — a cognitive fingerprint — so you can see your trends, not just today's snapshot.

It doesn't diagnose. It reflects. That's the whole point.

How we built it

We built the prototype in under two hours with a tight two-person split. The stack: Next.js with the App Router, Tailwind CSS for styling, and a direct LLM API integration via a single Next.js API route. No LangChain, no heavy middleware — just a well-engineered system prompt and a structured JSON response.
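Stripped down, that single API route amounts to building one chat payload: a system prompt, the user's text plus signals, and a request for structured JSON. A minimal sketch of that shape (the function name, model id, and OpenAI-style message format are our illustration here, not the exact shipped code):

```typescript
// Hypothetical sketch of the payload our single Next.js API route sends to the
// LLM provider. Any provider with a system/user message split and a JSON mode
// works the same way.
const SYSTEM_PROMPT =
  "You are Cognitive Mirror, a pattern-recognition tool, not a diagnostic one. Respond only with JSON.";

interface BehavioralSignals {
  sleepHours: number;
  calendarEventsToday: number;
  typingPaceWpm: number;
}

function buildChatPayload(userText: string, signals: BehavioralSignals) {
  return {
    model: "gpt-4o-mini", // placeholder model id
    response_format: { type: "json_object" }, // ask the provider for structured JSON
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      {
        role: "user",
        content: `Journal entry:\n${userText}\n\nBehavioral signals: ${JSON.stringify(signals)}`,
      },
    ],
  };
}
```

The route handler itself just forwards this payload to the provider and returns the parsed result — no middleware in between.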

The AI system prompt was the core product decision. We grounded it explicitly in the WHO ICD-11 framework, the Maslach Burnout Inventory, Pennebaker's LIWC linguistic marker research, and HPA axis stress response literature. Every insight the model generates is required to reference a specific neuroscience mechanism — cortisol dysregulation, circadian rhythm disruption, prefrontal impairment — so the output feels clinical and credible, not generic.
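The real prompt is much longer, but its skeleton looks roughly like this — the exact wording below is a compressed illustration, not the shipped text:

```typescript
// Compressed sketch of the system prompt's structure: grounding frameworks,
// a hard requirement to cite a mechanism, and a fixed JSON response contract.
const SYSTEM_PROMPT = `
You are Cognitive Mirror, a pattern-recognition tool for students. You do not diagnose.

Ground every observation in:
- The WHO ICD-11 burnout definition: exhaustion, cynicism/mental distance, reduced professional efficacy
- The Maslach Burnout Inventory dimensional model
- Pennebaker's LIWC linguistic-marker research
- HPA axis stress-response literature

Every insight MUST reference a specific mechanism (e.g. cortisol dysregulation,
circadian rhythm disruption, prefrontal impairment).

Respond ONLY with JSON in this shape:
{ "burnoutScore": number,
  "dimensions": { "exhaustion": number, "cynicism": number, "efficacy": number },
  "insights": string[],
  "suggestions": string[] }
`.trim();
```

Pinning the response contract inside the prompt is what lets the frontend render scores and insight cards without any parsing gymnastics — at least when the model cooperates.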

For the demo, behavioral signals (sleep, typing pace, calendar stress) are surfaced as editable inputs, making the multi-modal analysis feel real even without live sensor integration.
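Concretely, the demo's signal panel is just editable client state seeded with plausible defaults — something along these lines (field names and thresholds are ours, chosen for illustration):

```typescript
// Demo-only behavioral signals: editable form state, not live sensor data.
interface DemoSignals {
  sleepHours: number;         // self-reported, editable in the demo UI
  typingPaceWpm: number;      // stand-in for a live typing-analytics feed
  calendarEventsToday: number;
}

const DEFAULT_SIGNALS: DemoSignals = {
  sleepHours: 5.5,
  typingPaceWpm: 38,
  calendarEventsToday: 7,
};

// Derived "calendar stress" label used to seed the UI; thresholds are illustrative.
function calendarStress(s: DemoSignals): "low" | "medium" | "high" {
  if (s.calendarEventsToday >= 6) return "high";
  if (s.calendarEventsToday >= 3) return "medium";
  return "low";
}
```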

Challenges we ran into

The hardest problem was tone. There's a thin line between a tool that's usefully honest and one that's alarming or irresponsible. We went through several iterations of the system prompt before landing on framing that feels empathetic without being vague, and clinical without being cold. "Pattern recognition" rather than "diagnosis" became our north star.

Structured JSON output from LLMs is also less reliable than it looks — we had to add robust fallback handling to make sure a malformed response never surfaces as a raw error during a live demo. Time pressure made every decision feel higher stakes than it probably was.
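In practice the fallback handling boils down to: extract the JSON object from whatever the model returned (it sometimes wraps it in fences or prose), attempt a parse, validate the fields the UI needs, and otherwise return a neutral default. A sketch of that pattern (the response shape and fallback copy are illustrative, not our exact code):

```typescript
// Defensive parsing for LLM "JSON" output: models sometimes wrap the object in
// markdown fences or prepend prose, and occasionally return invalid JSON outright.
interface Analysis {
  burnoutScore: number;
  dimensions: { exhaustion: number; cynicism: number; efficacy: number };
  insights: string[];
}

const FALLBACK: Analysis = {
  burnoutScore: 0,
  dimensions: { exhaustion: 0, cynicism: 0, efficacy: 0 },
  insights: ["We couldn't read this response. Try again in a moment."],
};

function parseAnalysis(raw: string): Analysis {
  // Grab the outermost {...} span, dropping fences and surrounding prose.
  const match = raw.match(/\{[\s\S]*\}/);
  if (!match) return FALLBACK;
  try {
    const data = JSON.parse(match[0]);
    if (
      typeof data.burnoutScore !== "number" ||
      typeof data.dimensions !== "object" ||
      !Array.isArray(data.insights)
    ) {
      return FALLBACK;
    }
    return data as Analysis;
  } catch {
    return FALLBACK; // malformed JSON never surfaces as a raw error in the UI
  }
}
```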

Accomplishments that we're proud of

The scientific grounding is something we're genuinely proud of. Most mental wellness tools gesture at research — we actually built the WHO and Maslach frameworks into the model's reasoning layer. When the AI says your efficacy score is dropping, it's using the same dimensional model a clinician would use.

We're also proud of the framing. "We're not a therapist, we're a mirror" is a design philosophy, not just a disclaimer — and it shows up in every copy decision, every insight string, every suggestion card.

What we learned

Prompt engineering is product design. The system prompt isn't a backend detail — it is the product. Getting the tone, structure, and scientific grounding right took as much thought as any UI decision we made.

We also learned that for a demo, perceived authenticity matters more than technical completeness. Hardcoded signals that feel plausible beat half-built real integrations every time.

What's next for Cognitive Mirror

Real signal integration is the obvious next step — pulling from calendar APIs, wearables, and typing analytics to make the behavioral layer genuinely automatic rather than self-reported.

Longer term, we want to build the longitudinal layer properly: a true cognitive fingerprint that tracks patterns across weeks and semesters, flags anomalies early, and helps students understand their own rhythms — not just react to crises. We'd also explore a version designed for universities, giving counseling teams an anonymized, aggregate view of student cognitive load before it becomes a caseload problem.
