Inspiration

Most study tools measure time, not attention. We wanted to develop a study companion that goes the extra mile to support students by detecting when focus drops and responding in real time. Instead of relying on self-discipline, MindCraft helps users stay engaged by monitoring attention and guiding them back into their study flow when they drift.

What it does

MindCraft is an intelligent study companion that tracks focus using real-time eye tracking. If the user falls asleep, closes their eyes for too long, or leaves the frame, the system sounds an alert to bring them back to attention.

It also adapts the study environment to the type of assignment, creating a personalized atmosphere with music and aesthetic animated visuals designed to support focus. Users can randomize the music and visuals if they prefer a different environment.

In addition, MindCraft includes a Pomodoro timer to structure study sessions and encourage sustainable productivity. Users can start and pause this timer with hand gestures detected in real time by our computer vision system. This touch-free interaction improves accessibility for users with motor impairments while also minimizing disruptions to workflow: instead of switching tabs or reaching for the mouse, users can manage their study sessions without breaking focus.
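The gesture-controlled timer can be sketched as a small state machine: one gesture label starts the clock, another pauses it. The gesture names below ("open_palm", "fist") are illustrative placeholders, not necessarily our exact mapping; in the real system, MediaPipe hand landmarks would be classified into labels before reaching this point.

```python
import time

class PomodoroTimer:
    """Minimal Pomodoro timer toggled by gesture labels (illustrative sketch)."""

    def __init__(self, work_seconds=25 * 60):
        self.work_seconds = work_seconds
        self.running = False
        self.elapsed = 0.0
        self._started_at = None

    def on_gesture(self, label):
        # Hypothetical gesture labels; swap in whatever the
        # hand-tracking layer actually emits.
        if label == "open_palm" and not self.running:
            self.running = True
            self._started_at = time.monotonic()
        elif label == "fist" and self.running:
            self.elapsed += time.monotonic() - self._started_at
            self.running = False

    def remaining(self):
        total = self.elapsed
        if self.running:
            total += time.monotonic() - self._started_at
        return max(0.0, self.work_seconds - total)
```

A session loop would feed each frame's recognized gesture into `on_gesture` and render `remaining()` in the UI.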

How we built it

We built MindCraft using a full-stack web architecture that combines computer vision, AI, and adaptive media.

  1. Python + Flask for the backend server and real-time processing
  2. OpenCV and MediaPipe for face and eye tracking
  3. JavaScript, HTML, and CSS for the responsive frontend and UI/UX
  4. ElevenLabs for generating adaptive audio
  5. Dynamic background integration for visual environments

The camera feed is processed through OpenCV and MediaPipe to detect facial landmarks and measure eye-closure duration. When focus drops or the user leaves the frame, the backend triggers an alert and updates the study environment, and the frontend plays the alert sound to bring the user back to attention.

Challenges we ran into

Our biggest challenge was troubleshooting the OpenCV and MediaPipe integration: getting gesture detection to respond consistently and accurately enough to control the Pomodoro timer took significant debugging.
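A common mitigation for flaky per-frame detections (offered here as a sketch, not necessarily what we shipped) is debouncing: only act on a gesture after it appears in several consecutive frames, so single-frame misclassifications cannot toggle the timer. The frame count below is illustrative.

```python
class GestureDebouncer:
    """Emits a gesture label only after N consecutive frames agree."""

    def __init__(self, required_frames=5):
        self.required_frames = required_frames
        self.last_label = None
        self.count = 0

    def update(self, label):
        if label == self.last_label:
            self.count += 1
        else:
            self.last_label = label
            self.count = 1
        # Fire exactly once, when the streak first reaches the threshold.
        return label if self.count == self.required_frames else None
```

Placing this between the gesture classifier and the timer keeps one noisy frame from starting or pausing a session.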

Accomplishments that we're proud of

We are proud of successfully implementing real-time focus detection in a web-based environment, designing and integrating custom artwork into the UI/UX, and developing an adaptive study experience overall.

What we learned

We learned how to use OpenCV and MediaPipe to implement real-time detection, and how to create adaptive music with ElevenLabs.

What's next for MindCraft

After the 36-hour timeframe, we plan to extend our computer vision pipeline to detect phone usage during study sessions, as well as facial expressions that may suggest the user is losing focus. We would like to explore the DeepFace Python framework, which offers much more advanced facial analysis. In addition, we want to go deeper into exploring accessibility for marginalized communities.
