Inspiration

I was doom‑scrolling through Instagram when I stumbled on a streamer’s feed where their eye‑tracking overlay showed exactly where they were looking, down to the pixel, so viewers couldn’t accuse them of sneak peeks. It got me thinking: with today’s computer‑vision and gaze‑tracking advances, why limit this to preventing people from being weird online? What if marketers could instantly see where attention lands and how people feel, at scale and in real time? That flash of insight became the spark for EyeGotchu.

What it does

  • Uses live webcam or uploaded video to detect faces and track eye gaze (L2CS GazeTracking).
  • Runs an emotion‑recognition model (mini‑XCEPTION, trained on FER2013) to gauge reactions (happy, surprised, neutral, etc.).
  • Outputs an annotated video plus CSV of gaze coordinates and emotion timestamps for easy analysis.
  • Generates heatmaps showing where viewers focus most and how they react, letting marketers better gauge how their advertisements perform.
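As a sketch of the heatmap step, here is one way per‑frame gaze coordinates from the CSV could be accumulated into a focus heatmap. This is a minimal illustration, not the actual EyeGotchu code: the function name, CSV layout, and sigma value are all assumptions.

```python
import numpy as np

def gaze_heatmap(points, width, height, sigma=25):
    """Accumulate (x, y) gaze points into a blurred focus heatmap.

    points: iterable of pixel coordinates (e.g. parsed from the gaze CSV).
    Returns a (height, width) float array normalised to [0, 1].
    """
    heat = np.zeros((height, width), dtype=np.float32)
    for x, y in points:
        if 0 <= x < width and 0 <= y < height:
            heat[int(y), int(x)] += 1.0

    # Separable Gaussian blur so isolated fixations become soft hotspots.
    radius = 3 * sigma
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-(t ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()
    for axis in (0, 1):
        heat = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), axis, heat
        )

    peak = heat.max()
    return heat / peak if peak > 0 else heat

# Two nearby fixations plus one stray glance (hypothetical data).
points = [(100, 80), (102, 82), (300, 200)]
heat = gaze_heatmap(points, width=480, height=270)
```

In practice the normalised array would be colour‑mapped (e.g. with OpenCV's `applyColorMap`) and alpha‑blended over a frame of the advertisement.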

How we built it

  • Front End: React + Expo Router for the UI, with a file input for video capture.
  • Back End: Flask API exposing /analyze endpoint, wrapping our Python pipeline.
  • AI Pipeline: L2CS gaze tracking for eye coordinates; a mini‑XCEPTION model trained on FER2013 for emotion detection; OpenCV for annotation and heatmap overlays.
  • Data Flow: Video → Flask upload → analyze_video() → annotated MP4 + CSV → React fetch & display.
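The data flow above can be sketched as a minimal Flask service. Everything here is an assumption based on the description: `analyze_video()` is stubbed out, and the exact route shape and response fields of the real backend may differ.

```python
import io

from flask import Flask, jsonify, request

app = Flask(__name__)

def analyze_video(raw_bytes):
    """Stub for the real pipeline (L2CS gaze + mini-XCEPTION emotions).

    The actual implementation would also write an annotated MP4;
    here we just fake a single CSV-style row of results.
    """
    return [{"frame": 0, "gaze_x": 0.41, "gaze_y": 0.57, "emotion": "neutral"}]

@app.route("/analyze", methods=["POST"])
def analyze():
    upload = request.files.get("video")
    if upload is None:
        return jsonify({"error": "no video uploaded"}), 400
    rows = analyze_video(upload.read())
    # The React front end fetches this JSON and renders the data table;
    # the annotated MP4 could be served from a separate static route.
    return jsonify({"rows": rows})
```

The front end would POST the file as multipart form data and render the returned rows alongside the annotated video.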

Challenges I ran into

  • Model Compatibility: Upgrading TensorFlow/Keras versions broke our emotion‑model loading; we solved it by retraining and exporting in the TF 2.10 HDF5 format.
  • Real‑Time Performance: Processing full‑HD frames in Python was too slow; we optimized by resizing frames to 480p and batching inference.
  • Time: This was my first solo hackathon, and I didn't really start until around 11:30 PM because I had no idea what to do and was about to give up.
  • Backend: I was unable to connect the model to the UI in time.
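The performance fix above (downscale first, then batch inference) can be illustrated with a small sketch. Every name here is hypothetical: the nearest‑neighbour resize is a crude stand‑in for `cv2.resize`, and the model is a stub in place of the real emotion network.

```python
import numpy as np

TARGET_H = 480  # full-HD frames are downscaled to roughly 480p before inference

def downscale(frame, target_h=TARGET_H):
    """Naive nearest-neighbour downscale (a stand-in for cv2.resize)."""
    h, w = frame.shape[:2]
    step = max(1, round(h / target_h))
    return frame[::step, ::step]

def batched_inference(frames, model, batch_size=8):
    """Run the model on fixed-size batches instead of one frame at a time,
    amortising per-call overhead across the batch."""
    outputs = []
    for i in range(0, len(frames), batch_size):
        batch = np.stack(frames[i:i + batch_size])
        outputs.extend(model(batch))
    return outputs

# Stub model: returns one "score" per frame in the batch.
model = lambda batch: batch.mean(axis=(1, 2, 3)).tolist()

frames = [downscale(np.ones((1080, 1920, 3), dtype=np.uint8)) for _ in range(10)]
scores = batched_inference(frames, model)
```

With a real network the batching step is where most of the speedup comes from, since a single forward pass over eight small frames is far cheaper than eight passes over full‑HD frames.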

Accomplishments that I'm proud of

  • Seamless End‑to‑End Flow: One‑click upload in React instantly returns an annotated video and data table.
  • Data‑Driven Insights: Heatmaps clearly revealed focus hotspots on test webpages—validated against benchmark eye‑tracking hardware.
  • Robustness: Our pipeline handled noisy home‑recorded videos with varied lighting and multiple faces.
  • First Solo Hack!

What we learned

  • The trade‑offs of model resolution vs. speed in browser‑based uploads.
  • Best practices for packaging CV/ML models in a Flask microservice.
  • Perseverance!

What's next for EyeGotchu

  • Live Streaming: Integrate WebSockets to analyze webcam feed in real time.
  • UX Dashboard: Build an interactive analytics dashboard with drill‑down on individual sessions.
  • Cross‑Platform: Package as a desktop Electron app and mobile SDK for broader adoption.
