MirrorMind 🪞🧠
Inspiration
Many people — especially those on the autism spectrum, individuals with social anxiety, or non-native speakers — struggle with reading and producing appropriate facial expressions in social situations. Existing tools are either clinical, expensive, or inaccessible. We wanted to build something free, private, and fun that anyone could open in a browser and start practicing immediately, with no downloads or accounts required.
What it does
MirrorMind is a real-time facial expression coaching app that runs entirely in the browser. It uses your webcam to detect facial landmarks and classify expressions, then provides instant visual feedback to help you practice and improve your emotional expressiveness.
Key features include:
- Guided Onboarding & Calibration — personalizes the experience to your unique facial baseline
- Expression Training — practice specific emotions (happy, surprised, angry, etc.) with real-time scoring
- Expression Game Mode — a gamified challenge where you match prompted expressions against a timer
- Scenario Selection — choose social contexts (job interview, first date, presentation) to practice situation-appropriate expressions
- Results Dashboard — review your performance with detailed breakdowns and progress tracking
- 100% Client-Side Privacy — all face processing happens locally via ONNX Runtime in the browser; no video data ever leaves your device
How we built it
- Frontend: React + TypeScript with Vite, styled with Tailwind CSS
- Face Detection & Expression Classification: ONNX Runtime Web for in-browser ML inference — no server-side processing needed
- State Management: Zustand (appStore.ts) for lightweight, reactive global state
- Backend (minimal): Python server (server.py) for serving the demo and optional API endpoints
- Architecture: Component-driven design with dedicated pages for each stage of the coaching flow (Onboarding → Calibration → Training → Game → Results)
- Deployment: Procfile-based deployment (Heroku-ready), with a static frontend build
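The classification step above can be sketched as a post-processing pass over the model's raw outputs. This is a hypothetical illustration, not MirrorMind's actual code: the real model, label order, and output shape may differ.

```typescript
// Hypothetical label set; the real model's classes may differ.
const EMOTIONS = ["neutral", "happy", "surprised", "angry"] as const;

// Convert raw logits from an inference session into normalized probabilities.
function softmax(logits: number[]): number[] {
  const max = Math.max(...logits); // subtract max for numerical stability
  const exps = logits.map((x) => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Pick the most likely expression and its confidence score.
function classify(logits: number[]): { label: string; confidence: number } {
  const probs = softmax(logits);
  const best = probs.indexOf(Math.max(...probs));
  return { label: EMOTIONS[best], confidence: probs[best] };
}
```

In the browser, the logits would come from an ONNX Runtime Web session's output tensor each frame; the pure post-processing shown here is what turns them into the feedback the UI displays.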
Challenges we ran into
- ONNX model integration in the browser — achieving real-time inference with acceptable latency on consumer hardware required careful model selection and optimization
- Camera permissions & cross-browser quirks — handling webcam access gracefully across Chrome, Firefox, and Safari with proper error states
- Calibration accuracy — building a calibration flow that accounts for diverse faces, lighting conditions, and camera qualities without being tedious for the user
- Balancing fun and utility — making the app feel like a game while still providing genuinely useful coaching feedback
- Ethical considerations — ensuring we handle facial data responsibly with a privacy-first, on-device architecture (see ETHICS.md)
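Handling the cross-browser camera quirks mostly comes down to mapping getUserMedia failures to clear UI states. A minimal sketch, assuming a helper like the hypothetical one below (the error names follow the MediaDevices spec; the messages are illustrative):

```typescript
// Hypothetical helper: map getUserMedia DOMException names to UI error states.
type CameraErrorState =
  | { kind: "denied"; message: string }
  | { kind: "no-camera"; message: string }
  | { kind: "in-use"; message: string }
  | { kind: "unknown"; message: string };

function cameraErrorState(errorName: string): CameraErrorState {
  switch (errorName) {
    case "NotAllowedError": // user or browser policy denied permission
      return { kind: "denied", message: "Camera access was denied. Enable it in your browser settings." };
    case "NotFoundError": // no webcam detected on this device
      return { kind: "no-camera", message: "No camera was found on this device." };
    case "NotReadableError": // hardware is busy, e.g. held by another app
      return { kind: "in-use", message: "Your camera appears to be in use by another app." };
    default: // OverconstrainedError, SecurityError, AbortError, etc.
      return { kind: "unknown", message: "Could not start the camera. Please try again." };
  }
}
```

In the app this would sit in a catch block around `navigator.mediaDevices.getUserMedia({ video: true })`, so each browser's failure mode renders a specific, recoverable error screen instead of a blank feed.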
Accomplishments that we're proud of
- Zero data leaves the browser — complete privacy by design, not just by policy
- Real-time performance — smooth expression detection and feedback at interactive frame rates
- Accessible and inclusive design — built with neurodiverse users in mind from day one
- Full coaching pipeline — from onboarding to calibration to training to gamified practice to results, all in one seamless flow
- Clean, extensible architecture — well-documented codebase ready for open-source contributions (see CONTRIBUTING.md and ROADMAP.md)
What we learned
- How to deploy and optimize ONNX models for real-time browser inference
- The importance of calibration in making ML-powered tools work for everyone, not just the "average" face
- Ethical AI design patterns — building responsible facial analysis tools that respect user privacy and avoid bias
- How to structure a React + TypeScript app for complex, multi-stage interactive experiences
- That gamification genuinely improves engagement, even for therapeutic/coaching tools
What's next for MirrorMind
- Multi-face mode — practice expressions in simulated group conversations
- Emotion trajectory training — practice transitioning between expressions naturally (e.g., neutral → surprised → happy)
- Accessibility upgrades — audio/haptic feedback for users with visual impairments
- Therapist dashboard — optional secure sharing of anonymized progress data with care providers
- Mobile PWA — installable progressive web app for on-the-go practice
- More scenarios — community-contributed social scenarios and cultural expression norms
- Advanced analytics — AU (Action Unit) level feedback for fine-grained expression coaching
Built With
- git
- mediapipe
- npm
- react
- tailwind
- typescript
- vite