Inspiration

Our inspiration stems from recognizing the vast potential of facial emotion recognition technology and its integration into everyday applications. We saw an opportunity to apply this technology to improve the driving experience and increase safety, especially for distracted or emotional drivers.

For example, imagine a mother driving with a crying child in the backseat. Her emotional state may fluctuate from stress to concern to distraction. By monitoring the driver's facial expressions, an intelligent system could detect states such as drowsiness, anger, or distraction. It could then take appropriate action: adapting the vehicle environment, issuing alerts, or intervening in driving functions if necessary.

The critical insight was that by detecting a driver's mood in real time, we could create a smarter, safer, and more enjoyable driving experience. This led us to develop MoodTunes, a system that intuitively plays music based on the driver's emotional state.

What it does

MoodTunes is an AI-powered system that creates an emotionally synchronized musical environment for drivers. It detects the driver's mood in real time by analyzing their facial expressions. Based on the detected mood, MoodTunes then intuitively selects and plays music that matches the driver's state of mind.
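
To make the idea concrete, here is a minimal sketch of the mood-to-music step: a predicted emotion label is mapped to a playlist. The emotion labels and playlist names below are illustrative placeholders, not MoodTunes' actual configuration.

```python
# Minimal sketch: map a detected emotion label to a playlist.
# Labels and playlist names are hypothetical examples.
MOOD_PLAYLISTS = {
    "happy": "upbeat_drive",
    "sad": "gentle_acoustic",
    "angry": "calming_ambient",
    "neutral": "everyday_mix",
}

def select_playlist(emotion: str) -> str:
    """Return a playlist name for the detected emotion, with a safe default."""
    return MOOD_PLAYLISTS.get(emotion, "everyday_mix")

if __name__ == "__main__":
    print(select_playlist("angry"))  # -> calming_ambient
```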

How we built it

On the technical side, we leveraged TensorFlow and Keras to train a facial emotion recognition model using a dataset from Kaggle. After optimizing for accuracy, we integrated the model into a web interface built with Next.js. This allows us to apply AI predictions in real time to enhance the driving experience.
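
For illustration, the sketch below shows the kind of small Keras convolutional classifier this pipeline suggests. The 48x48 grayscale input and seven emotion classes follow common Kaggle facial-expression datasets such as FER-2013; the architecture, dataset, and hyperparameters we actually trained may differ.

```python
# Sketch of a small Keras CNN for facial emotion classification.
# Input shape and class count assume a FER-style Kaggle dataset.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # e.g. angry, disgust, fear, happy, sad, surprise, neutral

def build_model() -> tf.keras.Model:
    model = models.Sequential([
        tf.keras.Input(shape=(48, 48, 1)),          # 48x48 grayscale face crop
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_model()
    # Placeholder arrays standing in for the real training images and labels.
    x = np.random.rand(32, 48, 48, 1).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=(32,))
    model.fit(x, y, epochs=1, verbose=0)
    print(model.predict(x[:1], verbose=0).argmax(axis=-1))  # predicted emotion index
```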

Challenges we ran into

Some hurdles we faced included:

- Brainstorming the initial concept and use case
- Connecting the camera input to our software system (sketched below)
- Learning the nuances of machine learning and training effective models
- Extensive testing and iteration to improve model accuracy
- Sustaining long (24-hour) coding sessions for rapid prototyping and development
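
The camera challenge above can be made concrete with a small stand-in loop. MoodTunes actually captures frames in the browser through the Next.js interface, so the Python/OpenCV loop below, with its hypothetical `emotion_model.h5` path, only illustrates the frame-to-prediction flow.

```python
# Stand-in sketch of a camera-to-model loop using OpenCV.
# The real system feeds browser camera frames to the web interface instead.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("emotion_model.h5")  # hypothetical model file

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Downscale to the 48x48 grayscale input assumed in the model sketch above.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face = cv2.resize(gray, (48, 48)).astype("float32") / 255.0
    probs = model.predict(face[np.newaxis, ..., np.newaxis], verbose=0)
    print("predicted emotion index:", int(probs.argmax()))
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop
        break
cap.release()
cv2.destroyAllWindows()
```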

Accomplishments that we're proud of

- Creating an optimized machine learning model for our specific use case
- Finding a suitable facial emotion dataset with relevant categories
- Building an intuitive, user-friendly web interface

What we learned

- Next.js for building web applications
- Core machine learning techniques and model optimization
- API design and integration
- Team collaboration and project management

What's next for MoodTunes

We envision MoodTunes evolving beyond its current state. Future iterations could incorporate features such as an enhanced emotion detector, a DUI (Driving Under the Influence) detector, and a stronger communication bridge between the car's AI and its human occupants, both driver and passengers.

Built With

- TensorFlow
- Keras
- Next.js
