Inspiration
People with ADHD often struggle to regulate attention and emotion in real time, especially when switching between tasks or facing tight deadlines. Traditional tools like to-do lists, reminders, or focus timers are reactive — they require intentional effort and don’t adapt to a user’s moment-to-moment state. We wanted to create a proactive, emotion-aware assistant that could detect subtle behavioral and emotional cues and respond in a supportive, personalized way — just like a calm, observant friend who notices when you’re drifting, overwhelmed, or stuck.
What it does
EMind is a privacy-first AI assistant that passively senses signals from:
- 👁️ Facial expressions
- ⌨️ User behavior (typing, tab-switching, screen interaction)
- 🔊 Ambient sound
Using these signals, it infers one of several ADHD-relevant states: Focused, Drifting, Overwhelmed, Stuck, Hyperfocus, or Transition Fatigue.
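The write-up doesn't spell out how signals map to states, but the idea can be sketched as a simple rule-based classifier. Everything here is illustrative: the signal names, thresholds, and rules are assumptions, not EMind's actual fusion logic.

```python
# Minimal rule-based sketch of EMind-style state inference.
# Signal names and thresholds are hypothetical; the real system fuses
# richer facial, behavioral, and audio features.

def infer_state(emotion: str, tab_switches_per_min: float,
                idle_seconds: float, noise_level: float) -> str:
    """Map fused signals to one ADHD-relevant state."""
    if emotion in ("fear", "sad") and noise_level > 0.7:
        return "Overwhelmed"
    if idle_seconds > 120 and emotion == "neutral":
        return "Stuck"           # long idle with flat affect
    if tab_switches_per_min > 10:
        return "Drifting"        # rapid context switching
    if tab_switches_per_min < 1 and idle_seconds < 5:
        return "Hyperfocus"      # locked in, no switching
    # "Transition Fatigue" would need task-boundary signals not modeled here.
    return "Focused"
```

A real version would replace these hard thresholds with scores tuned per user, but the input/output shape is the same.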
Once a state is inferred, EMind uses an LLM to generate 1–2 sentence context-aware prompts — for example: “You seem overwhelmed. How about a short stretch break before diving back in?”
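The state-to-prompt step looks roughly like the sketch below. The system-prompt wording and the model name (`gpt-4o-mini`) are illustrative assumptions; only the general shape (state in, short empathetic sentence out via the OpenAI chat API) comes from the description above.

```python
# Sketch: turn a classifier state into a short empathetic prompt.
# Model name and wording are illustrative, not EMind's actual choices.

def build_messages(state: str) -> list[dict]:
    """Build the chat payload for a given inferred state."""
    return [
        {"role": "system",
         "content": ("You are a calm, observant assistant for users with "
                     "ADHD. Reply in 1-2 supportive sentences.")},
        {"role": "user",
         "content": f"The user currently appears to be: {state}. "
                    "Suggest one gentle, concrete next step."},
    ]

def generate_prompt(state: str) -> str:
    from openai import OpenAI          # openai>=1.0 SDK
    client = OpenAI()                  # reads OPENAI_API_KEY from the env
    resp = client.chat.completions.create(
        model="gpt-4o-mini",           # assumed model choice
        messages=build_messages(state))
    return resp.choices[0].message.content
```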
How we built it
- DeepFace → for facial emotion and attribute recognition using pre-trained CNNs (TensorFlow/Keras backend).
- OpenCV (cv2) → to handle live video capture, frame extraction, and image preprocessing.
- Pillow (PIL) → for decoding and manipulating image arrays.
- NumPy → for efficient numerical computation and vector operations.
- TensorFlow (CPU) → to run emotion detection and classification models locally.
- OpenAI API (LLM) → to convert classifier outputs into natural-language, empathetic feedback.
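DeepFace scores emotions per frame, and single-frame scores from a live webcam are noisy. One simple way to stabilize them, sketched here with NumPy, is an exponential moving average over recent frames; the label order matches DeepFace's emotion output, but the smoothing constant is an illustrative choice, not necessarily what EMind uses.

```python
import numpy as np

# DeepFace's emotion label order.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def smooth_emotions(frame_scores, alpha=0.3):
    """Smooth per-frame emotion scores with an exponential moving average
    and return the dominant label. `alpha` (here 0.3) is illustrative:
    higher values react faster, lower values are steadier."""
    ema = None
    for scores in frame_scores:
        scores = np.asarray(scores, dtype=float)
        scores = scores / scores.sum()      # normalize to probabilities
        ema = scores if ema is None else alpha * scores + (1 - alpha) * ema
    return EMOTIONS[int(np.argmax(ema))]
```

This keeps one noisy frame (say, a momentary grimace) from flipping the inferred state.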
What we learned
- How to integrate computer vision and LLM-based reasoning in real time.
- How to translate raw emotion classification into human-centered interventions.
- How to design systems that respect user privacy and mental well-being.
- How to apply principles from affective computing and behavioral psychology to product design.
Challenges we ran into
Cross-modal fusion — combining visual, auditory, and behavioral inputs into a unified interpretation required careful tuning.
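One common shape for this kind of fusion, sketched below, is late fusion: each modality votes with a per-state score vector and a weighted sum picks the winner. The weights and score vectors here are purely illustrative stand-ins for the hand-tuning mentioned above.

```python
import numpy as np

STATES = ["Focused", "Drifting", "Overwhelmed", "Stuck",
          "Hyperfocus", "Transition Fatigue"]

# Illustrative weights only; the actual split would come from tuning.
WEIGHTS = {"face": 0.5, "behavior": 0.3, "audio": 0.2}

def fuse(scores: dict) -> str:
    """scores: modality name -> length-6 score vector over STATES.
    Returns the state with the highest weighted combined score."""
    combined = sum(WEIGHTS[m] * np.asarray(v, dtype=float)
                   for m, v in scores.items())
    return STATES[int(np.argmax(combined))]
```

The tuning difficulty lives in those weights: a facial signal that screams "Overwhelmed" should usually outvote calm typing behavior, but not always.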
Accomplishments that we're proud of
- Building a tool that can genuinely help people with ADHD focus and work independently.
- Successfully used DeepFace and TensorFlow to detect subtle emotional shifts like frustration, confusion, or hyperfocus from live webcam input.
- Designed a privacy-first system that processes signals locally, ensuring sensitive emotional data never leaves the user’s device.
- Created a lightweight, human-like response system powered by an LLM that translates raw classifier output into empathetic, contextual prompts.
- Developed a Tkinter popup interface that delivers state-aware interventions (e.g., suggesting music, stretching, or short breaks) without breaking focus.
- Bridged psychology and AI, applying principles from affective computing and ADHD research to create a tool that genuinely supports emotional regulation and focus.
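The Tkinter popup mentioned above can be sketched as a state-to-suggestion lookup plus a small auto-dismissing window. The suggestion wording (beyond the overwhelm example quoted earlier) and the 8-second dismiss timer are illustrative assumptions.

```python
# Hypothetical state -> intervention table behind the Tkinter popup.
INTERVENTIONS = {
    "Overwhelmed": "How about a short stretch break before diving back in?",
    "Drifting": "Want to put on some focus music and pick one small task?",
    "Stuck": "Try explaining the problem out loud, or step away for a minute.",
    "Hyperfocus": "You've been locked in a while. Water and a quick stretch?",
    "Transition Fatigue": "Give yourself a minute before the next task.",
}

def deliver(state: str) -> None:
    """Show a small auto-dismissing popup with the suggestion for `state`."""
    import tkinter as tk
    msg = INTERVENTIONS.get(state)
    if msg is None:                      # e.g. "Focused": don't interrupt
        return
    root = tk.Tk()
    root.withdraw()                      # no main window, just the popup
    popup = tk.Toplevel(root)
    popup.title("EMind")
    tk.Label(popup, text=msg, padx=20, pady=12, wraplength=260).pack()
    popup.after(8000, root.destroy)      # auto-dismiss after 8 s
    root.mainloop()
```

Auto-dismissal matters here: an intervention that demands a click to close would itself break focus.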
What's next for EMind
- Platform expansion — build a mobile and browser-based version that runs quietly in the background while users work or study.
- Collaboration with ADHD communities — partner with clinicians and support groups to validate EMind’s effectiveness in real-world environments.