Inspiration

The theremin, an electronic musical instrument invented in the 1920s, is played without physical contact—pitch and volume are controlled by moving hands near two antennas. Its eerie sound has been featured in various music genres and movie soundtracks.

What it does

Mood Theremin translates your facial expressions into music, no music theory needed! Smile, frown, or act surprised: your emotions become beautiful chords and melodies in real time. Explore your feelings and get inspired through music!

How we built it

Our project combines facial expression analysis, emotion sentiment analysis, machine learning, and audio synthesis to generate music from the user's facial expressions. We capture the user's face through video input, analyze emotional sentiment with the HUME streaming API, and feed the resulting scores into a scikit-learn K-Nearest Neighbors (KNN) regressor. The model is trained in real time and maps each expression to the frequencies of a 7th chord, plus a volume level based on the intensity of the detected emotions. The chords are then played through the Web Audio API, creating a unique melody that mirrors the user's emotions.
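To make the mapping concrete, here is a minimal sketch of the emotion-to-chord step in Python. The emotion dimensions, training pairs, and chord voicing below are illustrative placeholders, not our actual training data: a KNN regressor learns to map an emotion-score vector to a chord root frequency and a volume, and the 7th chord is built from the root in equal temperament.

```python
# Hypothetical sketch of the emotion -> chord mapping.
# The emotion vectors and target values are made-up examples.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Each input: emotion intensities, e.g. [joy, sadness, surprise].
X = np.array([
    [0.9, 0.1, 0.0],   # mostly joy
    [0.1, 0.8, 0.1],   # mostly sadness
    [0.1, 0.1, 0.9],   # mostly surprise
])
# Each target: [chord root frequency in Hz, volume in 0..1].
y = np.array([
    [440.00, 0.9],   # A4, loud
    [220.00, 0.4],   # A3, soft
    [523.25, 0.8],   # C5, bright
])

model = KNeighborsRegressor(n_neighbors=1)
model.fit(X, y)

def seventh_chord(root_hz):
    """Dominant 7th chord (root, major 3rd, 5th, minor 7th)
    in 12-tone equal temperament: f * 2^(semitones/12)."""
    return [root_hz * 2 ** (s / 12) for s in (0, 4, 7, 10)]

# A new video frame scored as mostly joyful:
root, volume = model.predict([[0.85, 0.10, 0.05]])[0]
chord = seventh_chord(root)  # four frequencies to hand to the synth
```

In the real app, the resulting frequencies drive oscillators on the browser side via the Web Audio API, with the predicted volume controlling their gain.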

Challenges we ran into

  • Integrating the different components seamlessly and ensuring smooth data flow between them.
  • Fine-tuning the machine learning model to accurately capture emotional nuance.

Accomplishments that we're proud of

  • Going from concept to working demo in 24 hours!
  • Integrating multiple elements: emotion analysis, machine learning models, and music generation.
  • Building an immersive 3D user interface for a rich interactive experience.

What we learned

  • Enhanced our respective skill sets by integrating diverse technologies into a cohesive project.
  • Deepened understanding of facial expression analysis and emotion sentiment interpretation.

What's next for Mood Theremin

  1. Multi-user Support: Adapting the model for multiple users, considering complex emotional interactions.
  2. Emotion Sentiment Refinement: Fine-tuning emotion sentiment analysis for greater accuracy.
  3. Diversifying Audio Output: Adding different classes of instruments (piano, string ensemble, etc.).
  4. User Experience Improvements: Enhancing the interface and feedback for a seamless user experience.
  5. Customization Options: Offering users the ability to assign specific chords to specific emotions.