The drive from the Bay Area to colleges in southern California takes a minimum of five hours. Many students endure this drive at least once every few months, often alone. In those long hours of monotony, we naturally turn to music. Though music tastes vary, there's one tendency we all share on these drives: skip, skip, skip. For drivers using Spotify, skipping and selecting tracks inevitably means taking their eyes off the road. To create a more enjoyable and safer road trip, we built moodQ — a way to minimize distracted driving, to ensure you don't end up listening to a slow ballad while driving through the night, and to tailor your music to your mood for a genuinely entertaining trip.
What it does
MoodQ provides safe entertainment while driving. The application connects to the user's Spotify account and has access to the user's saved tracks. When the app launches, the user is prompted to enter his or her mood, and songs classified by their audio features — beats per minute, energy, and several other factors — are queued to match that response. Every hour, the app audibly asks the user how he or she is feeling, and the user responds with a tap sequence corresponding to different moods; the app then re-queues songs accordingly. When the user indicates tiredness, for example, moodQ shifts its queue toward more upbeat songs to help the user stay awake. The app is also conscious of the time of day, with default settings that prevent slower, sleep-inducing songs from playing late at night. MoodQ adds much-needed safety while creating an engaging driving experience through music.
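The late-night default described above can be sketched as a simple gate on a track's tempo. The cutoff hours and the 90 BPM threshold here are illustrative assumptions, not the app's actual values:

```java
import java.time.LocalTime;

public class LateNightFilter {
    // Hypothetical cutoffs; the writeup only says slow songs are blocked late at night.
    static final int NIGHT_START_HOUR = 22;   // 10 pm
    static final int NIGHT_END_HOUR = 5;      // 5 am
    static final double SLOW_TEMPO_BPM = 90.0;

    // Returns true if a track with the given tempo may play at the given time.
    static boolean allowed(double tempoBpm, LocalTime now) {
        int hour = now.getHour();
        boolean lateNight = hour >= NIGHT_START_HOUR || hour < NIGHT_END_HOUR;
        return !(lateNight && tempoBpm < SLOW_TEMPO_BPM);
    }

    public static void main(String[] args) {
        System.out.println(allowed(70.0, LocalTime.of(23, 30)));  // slow ballad at night: blocked
        System.out.println(allowed(70.0, LocalTime.of(14, 0)));   // same song in the afternoon: fine
        System.out.println(allowed(130.0, LocalTime.of(23, 30))); // upbeat song at night: fine
    }
}
```

A check like this would run each time the queue is rebuilt, so the default kicks in automatically once the clock crosses into the night window.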
Roadmap and challenges
Initially, we wanted to use the Spotify Web API for all song-related requests. However, we realized that it could not queue songs the way we wanted, so we ended up using the Spotify Web API to retrieve playlists and analyze their tracks, while using the Spotify Android SDK to play and queue songs. To classify songs into various moods, we sampled hundreds of tracks; from that data, we found threshold values for various audio characteristics, which played a central role in the mood classification. We used the Google Cloud Platform to collect audio input from users. We loved everything the Cloud Speech API offered, but ran into problems when we discovered that the Google Cloud Java client libraries do not currently support Android. Because we wanted to keep using the Google Cloud Platform on an Android base, we kept searching for workarounds, but were unable to find a working solution we could implement in our project.
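The threshold-based classification can be illustrated with a minimal sketch over two of the audio features the Spotify Web API exposes (tempo in BPM and energy on a 0–1 scale). The thresholds and mood labels below are hypothetical stand-ins for the values we derived from sampling:

```java
public class MoodBuckets {
    // Hypothetical thresholds; the real values came from sampling hundreds of tracks.
    static final double ENERGY_UPBEAT = 0.7;
    static final double TEMPO_UPBEAT = 120.0;  // BPM
    static final double ENERGY_MELLOW = 0.4;
    static final double TEMPO_MELLOW = 100.0;  // BPM

    // Bucket a track into a coarse mood from two audio features.
    static String classify(double tempoBpm, double energy) {
        if (energy >= ENERGY_UPBEAT && tempoBpm >= TEMPO_UPBEAT) return "upbeat";
        if (energy <= ENERGY_MELLOW && tempoBpm <= TEMPO_MELLOW) return "mellow";
        return "neutral";
    }

    public static void main(String[] args) {
        System.out.println(classify(128.0, 0.85)); // fast and energetic -> upbeat
        System.out.println(classify(80.0, 0.30));  // slow and quiet -> mellow
        System.out.println(classify(110.0, 0.55)); // in between -> neutral
    }
}
```

In the app, a lookup like this maps each saved track to a mood bucket once, so that re-queuing after a mood check is just a filter over precomputed labels.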
What we learned
While making this app, we encountered many aspects of development for the first time. Working with both the Spotify Web API and the Spotify Android SDK, we were initially unsure how to use the two together without conflicts, but we learned how to integrate them cleanly. Working with the system clock and relying so heavily on emulators was also new to us, so we're proud of the way we implemented these features.
What's next for moodQ
Looking ahead, there is a world of possibilities for moodQ. At the top of our priorities is a neural network to make the song classification more precise; with it, we could sample thousands of songs and reduce classification uncertainty. Beyond that, voice commands for selecting the mood and skipping tracks would be highly beneficial, as a completely hands-free application would further minimize driving distraction.