Inspiration
We were inspired by our shared experience of listening to Beatles music in the electronics lab, and set out to enhance our own and others' listening experience.
What it does
MoodSynth continuously senses the ambient lighting and the intensity of the user's smile through a camera, then adjusts the Spotify web player accordingly. The brightness of the room sets the playback volume (a darker room means quieter music), and the smile intensity selects the song (currently two options: one for a low-intensity smile and one for a high-intensity smile).
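As a rough sketch of how this mapping could look on the webpage side, using the Spotify Web Playback SDK's setVolume and the Web API's play endpoint (the track URIs, the 0.5 smile threshold, and the 0-255 brightness scale are illustrative assumptions, not our exact values):

```js
// A sketch, not our exact code. Assumed inputs from the sensing side:
// brightness in [0, 255] and smileIntensity in [0, 1].
const HAPPY_TRACK = 'spotify:track:...'; // hypothetical URIs for the two song options
const CALM_TRACK = 'spotify:track:...';
let currentUri = null;

async function applyMood(player, deviceId, token, brightness, smileIntensity) {
  // Darker room -> quieter music: scale brightness linearly onto the 0.0-1.0 volume range.
  await player.setVolume(Math.max(0.05, brightness / 255));

  // Smile intensity picks one of the two songs; only restart playback when the choice changes.
  const uri = smileIntensity > 0.5 ? HAPPY_TRACK : CALM_TRACK;
  if (uri !== currentUri) {
    currentUri = uri;
    await fetch(`https://api.spotify.com/v1/me/player/play?device_id=${deviceId}`, {
      method: 'PUT',
      headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
      body: JSON.stringify({ uris: [uri] }),
    });
  }
}
```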
How we built it
A C++ program performs the camera-based face detection and brightness sensing on a laptop, while a webpage uses the Spotify SDK to control playback; the two halves are meant to communicate by sending the sensed data from the C++ side to the webpage.
Challenges we ran into
At first we tried to implement the project on a DragonBoard 410c, but the device ultimately caused too many issues, so we switched to running the system on a laptop. We also had initial difficulty accessing features of the Spotify Web Player, but eventually discovered we were using the wrong API for the application. The one complication we did not get to resolve is that our webpage refreshes each time data is sent to it, which breaks the chain even though all the other parts are functional. Due to time constraints, we could not implement communication between the face detection and Spotify SDK parts of our program. We also did not program the volume change according to brightness in the webpage, though this would be an easy change to make later on.
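One way the refresh problem might be avoided (a sketch under assumptions, since we have not built this yet) is to have the page pull new readings with fetch instead of anything that triggers a reload; the endpoint URL and the in-scope player, deviceId, and token are hypothetical:

```js
// Poll a (hypothetical) local endpoint that the C++ sensing program would serve,
// so fresh readings arrive via fetch and the page never reloads.
setInterval(async () => {
  try {
    const res = await fetch('http://localhost:8080/mood');
    const { brightness, smileIntensity } = await res.json();
    applyMood(player, deviceId, token, brightness, smileIntensity); // sketch from above
  } catch (err) {
    console.warn('Sensor endpoint not reachable yet:', err);
  }
}, 1000); // once per second
```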
Accomplishments that we're proud of
What we learned
What's next for MoodSynth
Fixing the C++-to-HTML communication issue, adding greater versatility, improving the sensing and categorization, and making the responses more gradual. MoodSynth is currently a proof of concept and needs further development to become a truly enjoyable product. Development may continue over spring break.