Inspiration

As music lovers, we all have trouble choosing which genre of music to listen to. And we have one more thing in common: we love taking selfies! Instead of spending tons of time trying the tracks recommended by music software, why not listen to a song that fits your mood? Or rather, one that fits your selfie.

What it does

Hymnia is a social music iOS app that lets users take selfies and enjoy music anytime, anywhere. By analyzing the user's facial expression with the Microsoft Cognitive Services API, Hymnia guesses their current mood and plays back the track that best fits it. Users can take a selfie whenever they want, even while the app is playing music, and the app instantly starts streaming a new high-quality track.

How I built it

The user and camera interfaces are built with the React Native framework. We use React Native to build a central control button that is responsible for taking pictures, controlling music, and exploring the moods and music of others. After a user takes a selfie, the picture is compressed and sent to the Microsoft Cognitive Services API, which returns the user's mood.
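
Here is a minimal sketch of that round trip on the server side, assuming the (now retired) Emotion API endpoint of Microsoft Cognitive Services and a placeholder subscription key; the compressed image bytes are posted as an octet stream, and the dominant mood is simply the highest-scoring emotion in the response:

```python
import requests

EMOTION_URL = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"
SUBSCRIPTION_KEY = "your-subscription-key"  # placeholder, not a real key

def detect_mood(image_bytes: bytes) -> str:
    """Send a compressed selfie to the Emotion API and return the dominant mood."""
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/octet-stream",
    }
    resp = requests.post(EMOTION_URL, headers=headers, data=image_bytes)
    resp.raise_for_status()
    faces = resp.json()
    if not faces:
        return "neutral"  # no face detected; fall back to a neutral mood
    scores = faces[0]["scores"]  # e.g. {"happiness": 0.93, "sadness": 0.01, ...}
    return max(scores, key=scores.get)
```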

Then we use the Spotify API to search for the song that best fits the user's current feeling and extract its track name, artist, and album image. By calling the YouTube API, we find the video ID that corresponds to the song Spotify recommends and send that ID to our server, which then streams high-quality music back to the mobile app.
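
A sketch of that two-step lookup, assuming valid credentials (SPOTIFY_TOKEN and YOUTUBE_KEY are illustrative placeholders, not our actual configuration):

```python
import requests

SPOTIFY_TOKEN = "spotify-oauth-token"  # placeholder
YOUTUBE_KEY = "youtube-api-key"        # placeholder

def find_track(mood: str) -> dict:
    """Search Spotify for a track matching the mood, then look up its YouTube video ID."""
    # Step 1: search Spotify and pull out the track name, artist, and album image.
    resp = requests.get(
        "https://api.spotify.com/v1/search",
        headers={"Authorization": f"Bearer {SPOTIFY_TOKEN}"},
        params={"q": mood, "type": "track", "limit": 1},
    )
    resp.raise_for_status()
    track = resp.json()["tracks"]["items"][0]
    name = track["name"]
    artist = track["artists"][0]["name"]
    album_image = track["album"]["images"][0]["url"]

    # Step 2: ask the YouTube Data API for the video matching "<track> <artist>".
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/search",
        params={
            "part": "snippet",
            "q": f"{name} {artist}",
            "type": "video",
            "maxResults": 1,
            "key": YOUTUBE_KEY,
        },
    )
    resp.raise_for_status()
    video_id = resp.json()["items"][0]["id"]["videoId"]

    return {"name": name, "artist": artist,
            "album_image": album_image, "video_id": video_id}
```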

The server side is built with Flask and MongoDB and deployed on Linode. It stores your selfies and music and those of your friends, and it returns the mood by forwarding each selfie to the Microsoft API. It also implements our most exciting feature, live music streaming, with the help of the Python requests module.
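
A minimal sketch of the streaming endpoint, assuming a hypothetical resolve_audio_url helper that maps a YouTube video ID to a direct audio URL; requests fetches the upstream file with stream=True, and the Flask response relays it chunk by chunk so playback can begin before the whole file arrives:

```python
from flask import Flask, Response
import requests

app = Flask(__name__)

CHUNK_SIZE = 64 * 1024  # 64 KiB chunks keep memory use flat while streaming

def resolve_audio_url(video_id: str) -> str:
    """Hypothetical helper: map a YouTube video ID to a direct audio stream URL."""
    raise NotImplementedError

@app.route("/stream/<video_id>")
def stream(video_id: str):
    upstream = requests.get(resolve_audio_url(video_id), stream=True)
    upstream.raise_for_status()

    def generate():
        # Relay the upstream audio chunk by chunk instead of buffering it all.
        for chunk in upstream.iter_content(chunk_size=CHUNK_SIZE):
            if chunk:
                yield chunk

    return Response(generate(), mimetype="audio/mpeg")
```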

Challenges I ran into

Because the main user interface is a single control button that serves three functions, it was hard to implement in React Native: we have to swap the button's callback based on the current state of the app and even the user's hand gesture. Spotify's APIs were also relatively difficult to work with, because the recommendation system expects very detailed criteria, so we had to extract and manipulate that data with carefully designed functions for our Spotify API calls. Another tough issue was live music streaming from the server. Music files are usually large, and we did not want users to experience interruptions while listening, so we implemented a streaming proxy, which was difficult.
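
For example, one way to turn a single mood label into the detailed criteria that Spotify's /v1/recommendations endpoint expects; the genre seeds and target values below are illustrative placeholders, not our tuned mapping:

```python
import requests

SPOTIFY_TOKEN = "spotify-oauth-token"  # placeholder

# Illustrative mapping from an Emotion API label to Spotify tuning parameters.
MOOD_PARAMS = {
    "happiness": {"seed_genres": "pop",      "target_valence": 0.9, "target_energy": 0.8},
    "sadness":   {"seed_genres": "acoustic", "target_valence": 0.2, "target_energy": 0.3},
    "anger":     {"seed_genres": "rock",     "target_valence": 0.3, "target_energy": 0.9},
    "neutral":   {"seed_genres": "chill",    "target_valence": 0.5, "target_energy": 0.5},
}

def recommend_for_mood(mood: str) -> dict:
    """Translate a mood label into a Spotify recommendation request."""
    params = dict(MOOD_PARAMS.get(mood, MOOD_PARAMS["neutral"]), limit=1)
    resp = requests.get(
        "https://api.spotify.com/v1/recommendations",
        headers={"Authorization": f"Bearer {SPOTIFY_TOKEN}"},
        params=params,
    )
    resp.raise_for_status()
    return resp.json()["tracks"][0]
```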

Accomplishments that I'm proud of

The music the app recommends is fairly accurate given the user's current mood. The button control interface is beautiful and clear, and it packs powerful functionality that controls the workflow of the app. And we achieved live music streaming, so users can enjoy music without interruption.

What I learned

We all learned a lot from this project as a team. We became more familiar with the React Native framework, working with the camera, and user interface design. We also gained practical experience working with and manipulating APIs. Live streaming was a new topic for everyone in the group, and we are glad we learned how it works and eventually implemented it.

What's next for Hymnia

We want Hymnia's music recommendations to be as accurate as possible so that users can fully enjoy their musical worlds. In the future, Hymnia could recommend music based not only on selfies but also on the user's geolocation, the current time, and other features. We also hope to improve Hymnia as a social app so that users can connect with their friends and see every friend's selfies, mood, and the music they are listening to.

Built With

flask, linode, microsoft-cognitive-services, mongodb, python, react-native, spotify, youtube