Inspiration

Mental health is a growing challenge that affects people of all ages, yet it's often overlooked or stigmatized. The COVID-19 pandemic brought emotional well-being into sharper focus, showing just how widespread and urgent these struggles are.

Unfortunately, access to support like therapy remains expensive or out of reach for many. While this app isn’t meant to replace professional help, it’s designed to offer a small, meaningful step in the right direction—helping users recognize their emotions, reflect, and feel seen.

By combining facial expression detection with personalized prompts, the app becomes a gentle and accessible tool for emotional awareness. My goal is to create something not only impactful but also free of cost, because mental health care should be within reach for everyone.

What it does

When you open the app, the camera locates your face. Within a few seconds, the model classifies your mood and offers meaningful support and advice.

How I built it

To build this project, I integrated face-api.js, a JavaScript library that runs directly in the browser and detects facial features and expressions in real time using a webcam. I loaded two pre-trained models: the Tiny Face Detector to quickly and efficiently locate the face, and the Face Expression Net to classify emotional states like happiness, sadness, anger, and more.

Once a face and emotion are confidently detected, the app sends that emotion via a POST request to a Python backend powered by FastAPI. The backend then processes the emotion and returns a relevant, supportive prompt — such as a reflective question or motivational message — which is displayed directly on screen. This real-time feedback helps the user become more aware of how they're feeling and encourages emotional reflection in a safe, judgment-free way.

To generate deeper and more human-like prompts, I integrated Ollama, an open-source large language model framework. It runs locally, allowing me to query models like LLaMA or Mistral on the backend without relying on third-party APIs or cloud-based inference. This keeps the app lightweight, private, and fully offline-capable, all while delivering intelligent, context-aware prompts tailored to the user's detected emotion.
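A hedged sketch of how such a backend might query a locally running Ollama server. Ollama really does expose a REST endpoint at `http://localhost:11434/api/generate`; the model name, prompt wording, and helper names below are assumptions for illustration:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(emotion: str, model: str = "llama3") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint (prompt text is illustrative)."""
    return {
        "model": model,
        "prompt": (
            "You are a gentle, supportive companion. The user appears to be "
            f"feeling {emotion}. Offer one short reflective question or "
            "motivational message. Do not give medical advice."
        ),
        "stream": False,  # ask for a single JSON response instead of a token stream
    }

def ask_ollama(emotion: str) -> str:
    """Send the request to a local Ollama server (requires `ollama serve` to be running)."""
    data = json.dumps(build_request(emotion)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, Ollama returns one JSON object whose "response"
        # field holds the generated text.
        return json.loads(resp.read())["response"]
```

Because everything runs against localhost, no user data or detected emotions ever leave the machine.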

Finally, I added a “Try Again” button that resets the emotion state and restarts the detection loop. This allows users to express different emotions and receive new prompts in a seamless, interactive experience, turning the tool into a playful yet meaningful path to self-awareness.

Challenges we ran into

Once an emotion was detected, the app needed a way to let users start over and try again without refreshing the page or reloading the video. To address this, I created a "Try Again" button that clears the previous output, resets internal state flags, and restarts the detection interval cleanly.

My initial plan was to add a GPT chatbot at the end of face detection, but I ran into errors that kept the chat from rendering and interfered with the main facial expression detection. It's something I plan to keep working on and eventually implement, because I think it would elevate the application and help users much more.

Accomplishments that we're proud of

I am fairly new to all of these libraries and to the split between frontend and backend code, since I'm more used to an informal style of coding. But I wanted to push myself this weekend and build something I'd be proud of, something with real impact. I'm proud of how much I learned from different sources over the weekend and how I used these tools to shape my ideas.

What we learned

I learned a lot about backend development through this project, and I saw firsthand just how powerful libraries can be and how many different things they make possible.

What's next for Emotion Quest

I want to integrate three different features: a meditation section, an AI chatbot for expressing emotions, and a calming puzzle section, all in the hope of calming users down and shifting their focus to something else, like my app.

