Music is a powerful bond that connects people to their memories.

Whether it was a classic you always played while driving down PCH or an absolute banger you heard at a dance, we are all connected through individual and shared memories by music. Our new app, Bard, amplifies the effect of music on memories and helps users remember their trips and vacations. When traveling with Bard, your music becomes part of your story and aids its recollection long after.

Bard applies advances in the cognitive and computer sciences, taking full advantage of memory-support research and of convolutional and recurrent neural networks. By employing the Facebook and Yelp APIs to gather meaningful, shared insights about destinations, Bard puts cutting-edge NLP and CNN developments to work creating playlists that contribute to the user's present enjoyment and future recollection.

How we built Bard

Our group began by planning and debating which features the app should include. We decided to address the long-fought battle over who controls the AUX. To bring an objective perspective to a trip's music selection, we set out to build an app that serves the user through AI, server-side optimization, and ease of use. We then constructed a central Django server to act as an intermediary between the user and the music algorithms. The music algorithms were developed independently and rely on the Facebook, Google Cloud, Yelp, and Spotify APIs to derive the optimal playlist for any trip.
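The server's intermediary role can be sketched roughly as follows. This is a minimal illustration, not our actual code: every function name here is a hypothetical stand-in, and in the real app a Django view wraps equivalent logic and the stubs are replaced by the Yelp, Google Cloud, and Spotify API calls described below.

```python
# Hypothetical sketch of the server acting as an intermediary between
# the user and the music algorithms. All names are illustrative.

def fetch_location_data(destination):
    """Stand-in for the Yelp API calls (reviews and images)."""
    return {"reviews": ["Great views along the coast!"],
            "image_urls": ["https://example.com/beach.jpg"]}

def build_playlist(location_data):
    """Stand-in for the Google Cloud + Spotify pipeline."""
    return ["track-1", "track-2", "track-3"]

def handle_trip_request(destination):
    """Endpoint logic: validate input, then delegate to the pipeline."""
    if not destination:
        return {"error": "destination required"}, 400
    data = fetch_location_data(destination)
    return {"destination": destination,
            "playlist": build_playlist(data)}, 200

body, status = handle_trip_request("Santa Monica")
```

Keeping the server a thin dispatcher like this is what let the algorithm work proceed independently of the app and server work.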

Drawing on Facebook like history and Yelp reviews and images, the Google Cloud Vision and Natural Language APIs work together to build the musical trip. When the user enters a destination into the app, we request Yelp reviews and images for that location. Google's sentiment analysis plays an important role in determining whether each review is positive or negative, and the Vision API identifies key features in the Yelp images. The Language API then derives the vector-space similarity between these keywords and the music genres available on Spotify. Finally, we query Spotify with the two most similar genres, enjoyable and location-dependent artists, and the desired vibe to create the perfect playlist for any trip.
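The pipeline above can be sketched end to end with toy stand-ins. In the real app, `review_sentiment` is the Natural Language API's sentiment score, `image_labels` is Vision label detection, and the genre ranking uses vector-space similarity rather than the keyword overlap shown here; all data and names below are illustrative.

```python
# Toy end-to-end sketch of the review/image -> genre pipeline.
# Word lists and genre keywords are made up for illustration.
POSITIVE = {"great", "beautiful", "amazing", "fun"}
NEGATIVE = {"dirty", "crowded", "bad", "noisy"}

def review_sentiment(review):
    """Stand-in for the Language API's sentiment score in [-1, 1]."""
    words = set(review.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return max(-1.0, min(1.0, score / 2))

def image_labels(image_url):
    """Stand-in for Vision API label detection on a Yelp image."""
    return ["beach", "sunset", "ocean"]

GENRE_KEYWORDS = {
    "reggae": {"beach", "ocean", "island"},
    "chill": {"sunset", "calm", "coffee"},
    "metal": {"concert", "night", "city"},
}

def top_genres(labels, n=2):
    """Rank genres by overlap with the image labels; keep the top n,
    mirroring the 'top 2 most similar genres' step."""
    scored = sorted(GENRE_KEYWORDS,
                    key=lambda g: len(GENRE_KEYWORDS[g] & set(labels)),
                    reverse=True)
    return scored[:n]

labels = image_labels("https://example.com/beach.jpg")
print(top_genres(labels))  # -> ['reggae', 'chill']
```

The two surviving genres then seed the Spotify query, weighted by how positive the location's reviews were.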


Some of the challenges our team ran into came from porting our local Django server to Google's cloud services. This ultimately led us to an approach centered on Google's Firebase along with several other Google Cloud services. On the machine-learning side, the team struggled with Spotify's API: its tight limits on data collection hindered progress, and creative solutions were required to make the app functional. One feature we needed, but found difficult, was deriving a music genre from Yelp images.

By using a unique combination of the Google Vision and Language APIs, we surmounted a difficult obstacle in a fun and engaging way. We also quickly connected the three parts of our app using Google's Firebase, which let us spend the rest of our 36 hours refining our algorithms, user interface, and overall functionality.
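That Vision-plus-Language trick, mapping image keywords to music genres, amounts to a nearest-neighbour search in an embedding space. The sketch below shows the idea with made-up 3-d vectors standing in for the Language API's embeddings; the words, genres, and vector values are all hypothetical.

```python
# Cosine-similarity sketch of the keyword -> genre mapping.
import math

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u)) *
            math.sqrt(sum(b * b for b in v)))
    return dot / norm if norm else 0.0

# Hypothetical embeddings; dimensions loosely read as
# (nature, energy, urban). Real embeddings are high-dimensional.
KEYWORDS = {"beach": (0.9, 0.2, 0.1), "skyline": (0.1, 0.5, 0.9)}
GENRES = {"reggae": (0.8, 0.3, 0.2),
          "hip hop": (0.1, 0.7, 0.8),
          "classical": (0.4, 0.1, 0.3)}

def closest_genre(keyword):
    """Return the genre whose embedding is most similar to the keyword's."""
    vec = KEYWORDS[keyword]
    return max(GENRES, key=lambda g: cosine(vec, GENRES[g]))

print(closest_genre("beach"))    # -> reggae
print(closest_genre("skyline"))  # -> hip hop
```

Because both keywords and genre names live in the same vector space, no hand-built mapping table is needed; new labels from the Vision API fall through the same similarity ranking.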

We learned a great deal from reading API documentation, but more importantly, we had the unique experience of adopting a new member into our team on the day of the hackathon. By sharing our skills and knowledge, each of us was able to take on the parts of the project best suited to our skill sets. Thanks to our new team member and friend, we learned that everyone can offer something new to a project, even if the initial plan didn't account for their presence. Shout out to Kat :)
