Inspiration

We were inspired by our shared experience of listening to Beatles music in the electronics lab, and sought to enhance our own and others' listening experience.

What it does

The MoodSynth system continuously senses ambient lighting and the intensity of the user's smile through a camera, then adjusts playback in the Spotify web player based on these two parameters. Specifically, the brightness of the area sets the volume (darker surroundings mean quieter music), and the smile intensity selects the song (there are currently two options, one for low intensity and one for high intensity).
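The two mappings above can be sketched as pure functions. This is an illustrative sketch only; the function names and the 0.5 smile threshold are assumptions, not taken from the project:

```javascript
// Map a mean frame brightness (0-255) to a playback volume (0.0-1.0).
// Darker surroundings produce quieter music.
function brightnessToVolume(meanBrightness) {
  const clamped = Math.min(255, Math.max(0, meanBrightness));
  return clamped / 255;
}

// Pick one of two songs based on smile intensity (0.0-1.0).
// The 0.5 cutoff is an assumed value for illustration.
function smileToTrack(smileIntensity, lowTrack, highTrack) {
  return smileIntensity >= 0.5 ? highTrack : lowTrack;
}

console.log(brightnessToVolume(128).toFixed(2)); // "0.50"
console.log(smileToTrack(0.8, "calm-song", "happy-song")); // "happy-song"
```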

How we built it

The first component of the project, written in C++, uses OpenCV for the image analysis. We used the Spotify Web Playback SDK (JavaScript) to connect to the Spotify web player. We attempted to modify the C++ program to send its results to our webpage via curl, but we ran into a few issues and ran out of time.

Challenges we ran into

At first, we tried to implement the project on a DragonBoard 410, but the device eventually gave us too many issues, so we switched back to running the system on a laptop. We also had initial difficulty accessing features of the Spotify web player, until we discovered we were using the wrong API for the application. The one complication we did not get to resolve is that our webpage refreshes each time data is sent to it, which breaks the pipeline even though every other part works. Due to time constraints, we could not implement communication between the face-detection and Spotify SDK parts of our program. We also did not wire volume changes to brightness in the webpage, but this would be an easy change to make later on.

Accomplishments that we're proud of

Despite our numerous setbacks, most parts of our project work, and we are very happy to have nearly achieved our goal, including a working real-time image analysis program and an HTML/JavaScript program that successfully communicates with the Spotify web player.

What we learned

We gained extensive knowledge of JavaScript and had some exposure to OpenCV and C++. Compiling the many C++ libraries was also a learning experience (and took up much of our time). Finally, we learned how to use the Spotify Web Playback SDK.

What's next for MoodSynth

Next steps include fixing the C++-to-HTML communication issue, adding greater versatility, improving sensing and categorization, and making the responses more gradual. MoodSynth is currently a proof of concept and needs further development to become a truly enjoyable product. Development may continue over spring break.
