We were inspired by old-school music players like jukeboxes and the iPod Shuffle. Sometimes, not knowing which song is playing next can be exciting.

Process:
1. Our machine learning model organizes the MP3 files on a user's phone by the emotion most prominent in each song.
2. The user takes a picture of themselves.
3. The app determines an emotion associated with that picture (using the Microsoft Cognitive Services API).
4. The app starts playing the list of songs tagged with the emotion from the picture.
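The steps above can be sketched roughly as follows. This is a minimal illustration, not the app's actual code: the function and variable names are hypothetical, the emotion scores stand in for a response from the Microsoft Cognitive Services API, and the tags stand in for our model's output.

```python
# Hypothetical sketch of the Senti Audio playback flow.
# In the real app, `scores` would come from the Microsoft Cognitive
# Services API for the user's selfie, and `library` would hold the
# emotion tags our ML model assigned to each MP3.

def dominant_emotion(scores):
    """Pick the emotion with the highest confidence score."""
    return max(scores, key=scores.get)

def playlist_for_emotion(scores, library):
    """Return the songs whose tag matches the picture's dominant emotion."""
    emotion = dominant_emotion(scores)
    return [song for song, tag in library.items() if tag == emotion]

# Example: a tagged library and API-style scores for one selfie.
library = {"track_a.mp3": "happiness",
           "track_b.mp3": "sadness",
           "track_c.mp3": "happiness"}
scores = {"happiness": 0.91, "sadness": 0.05, "anger": 0.04}

print(playlist_for_emotion(scores, library))
```

With the sample data above, the dominant emotion is "happiness", so the two happiness-tagged tracks would be queued for playback.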

How we built it

A lot of googling and Christophe's debugging skills. Combining code snippets we found online here and there also helped.

Challenges we ran into

Android Studio is a difficult IDE to work with. The audio signal processing library was especially tough because it had poor documentation and very little adoption, and we had no prior experience with signal processing.

Accomplishments that we're proud of

Made our own training data for a neural network (on AWS). Used a complex, little-known library for audio signal analysis.

What we learned

How to use signal processing libraries. How to train a neural network. Data aggregation and analysis.
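As a flavor of what signal processing for song analysis involves, here is a minimal sketch of two classic audio features often fed to a classifier. This is pure Python for illustration only; it is an assumption about the kind of features involved, not the library or features the app actually used.

```python
# Two simple, commonly used audio features (illustrative sketch).
import math

def rms_energy(samples):
    """Root-mean-square energy: louder, more energetic audio scores higher."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs that change sign; a rough
    proxy for the noisiness/brightness of the signal."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return crossings / (len(samples) - 1)

# Toy waveform: one cycle of a sine wave sampled at 8 points.
wave = [math.sin(2 * math.pi * n / 8) for n in range(8)]
print(round(rms_energy(wave), 3))
print(round(zero_crossing_rate(wave), 3))
```

Features like these, computed per song, are what a trained model can map to an emotion label.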

What's next for Senti Audio

Build our own neural network. Better training to categorize songs.

Some food for thought:

"I want my world to be fun. No rules, no nothing. Like, no one can stop me. No one can stop me." - Justin Bieber

"I'm looking forward to influencing others in a positive way. My message is you can do anything if you just put your mind to it." - Justin Bieber, Stratford, Ontario
