This is the starting page, where you speak your emotions.
Based on your speech, it suggests songs from Spotify.
To develop an enhanced music player that can understand a user's mood and generate a matching music playlist.
What it does
The system outputs a curated playlist of songs matched to the user's emotion/mood, which it infers from the user's voice input.
How we built it
The system has three development modules: the UI module, the IBM Watson NLP API, and the Spotify API. The work was divided amongst the team members, who then collaborated on bringing the modules together.
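The Watson NLP step can be sketched roughly as follows: Watson's emotion analysis returns per-emotion scores (joy, sadness, anger, fear, disgust), and the app picks a dominant mood from them. The function name and the example scores below are illustrative assumptions, not the project's actual code.

```python
# Sketch: reduce Watson-style emotion scores to a single mood label.
# The score dictionary mimics the shape of IBM Watson's emotion output;
# dominant_mood is a hypothetical helper, not part of the Watson SDK.

def dominant_mood(emotion_scores):
    """Return the emotion with the highest score."""
    return max(emotion_scores, key=emotion_scores.get)

# Example: scores as Watson might return them for a cheerful utterance.
scores = {"joy": 0.91, "sadness": 0.03, "anger": 0.01,
          "fear": 0.02, "disgust": 0.01}
mood = dominant_mood(scores)  # "joy"
```

In the real pipeline, the transcribed speech would first be sent to the Watson API, and its emotion response would replace the hard-coded `scores` dictionary.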
Challenges we ran into
- Integration of the three modules
- Retrieving data from Spotify API
- Designing the UI
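One way the Spotify side of the pipeline can work: Spotify's recommendations endpoint accepts tunable audio features such as `target_valence` and `target_energy`, so a detected mood can be translated into request parameters. The mood-to-feature mapping below is an illustrative assumption, not the project's actual tuning.

```python
# Sketch: map a detected mood onto query parameters for Spotify's
# GET /v1/recommendations endpoint. The specific valence/energy values
# per mood are assumptions chosen for illustration.

MOOD_FEATURES = {
    "joy":     {"target_valence": 0.9, "target_energy": 0.8},
    "sadness": {"target_valence": 0.2, "target_energy": 0.3},
    "anger":   {"target_valence": 0.3, "target_energy": 0.9},
}

def recommendation_params(mood, seed_genre="pop", limit=20):
    """Build the query-parameter dict for a recommendations request."""
    params = {"seed_genres": seed_genre, "limit": limit}
    # Fall back to a neutral valence for moods we have no mapping for.
    params.update(MOOD_FEATURES.get(mood, {"target_valence": 0.5}))
    return params
```

These parameters would then be sent with an authenticated request (e.g. via a Spotify client library), which returns the track list used to build the playlist.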
Accomplishments that we're proud of
The system successfully generates the desired playlist of songs according to the user's voice input.
What we learned
- Agile development
- UI designing and implementation
- Using Watson NLP and Spotify API
What's next for aura-player
- Integration with Google home and Amazon Alexa
- Custom playlist creation
- Making the system more intelligent by learning from the user's past listening choices