I was playing with the Cozmo robot and enjoyed the way it expressed itself through sound. I knew there was an API (IBM Watson) that could take text and return a sentiment analysis of it, so I thought it would be fun to build a skill that listened to what the user said, sent the literal text of what was heard to the API, and then played the corresponding emotion.
What it does
It lets the user say whatever they want, assesses the main emotion of what they said, and plays back a robot sound (like R2D2) that expresses that emotion (anger, joy, fear, etc.).
How I built it
I collected and created a set of audio files that express the main emotions the API returns, called the API with the user's text, and played the file corresponding to the detected emotion. I also built a word cloud from what users tell Motto Botto, to show what is on the minds of the people talking to it.
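The emotion-to-sound step could look something like the sketch below. This is not the project's actual code: the shape of the score dictionary (modeled on the per-emotion scores Watson's emotion analysis returns) and the file names are assumptions for illustration.

```python
# Hypothetical mapping from an emotion label to a pre-recorded robot sound.
EMOTION_SOUNDS = {
    "anger": "sounds/anger.mp3",
    "joy": "sounds/joy.mp3",
    "fear": "sounds/fear.mp3",
    "sadness": "sounds/sadness.mp3",
    "disgust": "sounds/disgust.mp3",
}

def pick_sound(emotion_scores):
    """Return the audio file for the highest-scoring emotion."""
    top_emotion = max(emotion_scores, key=emotion_scores.get)
    return EMOTION_SOUNDS[top_emotion]

# Example scores in the assumed per-emotion shape:
scores = {"anger": 0.1, "joy": 0.7, "fear": 0.05, "sadness": 0.1, "disgust": 0.05}
print(pick_sound(scores))  # sounds/joy.mp3
```

Once the top emotion is known, the skill only has to include the matching audio file in its response.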
Challenges I ran into
Getting the robot to express the emotions.
Accomplishments that I'm proud of
Having thought of the idea in the first place.
What I learned
I learned how to ping an external API and how to use the literal slot.
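Reading the literal slot's value boils down to a dictionary lookup on the Alexa request JSON. A minimal sketch, assuming a catch-all intent; the intent and slot names ("CatchAllIntent", "EverythingSlot") are made-up examples, not the skill's real ones:

```python
# Sketch: extract what the user said from an Alexa intent request payload.
# "EverythingSlot" is a hypothetical slot name used for illustration.
def get_spoken_text(event):
    """Return the user's utterance from the slot, or None if absent."""
    slots = event["request"]["intent"].get("slots", {})
    return slots.get("EverythingSlot", {}).get("value")

# Shape of the relevant part of an Alexa intent request:
sample_event = {
    "request": {
        "intent": {
            "name": "CatchAllIntent",
            "slots": {
                "EverythingSlot": {"name": "EverythingSlot", "value": "I am so happy today"}
            },
        }
    }
}
print(get_spoken_text(sample_event))  # I am so happy today
```

The extracted text is what gets sent off to the emotion API.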
What's next for Motto Botto
A lot more variations on how Motto Botto expresses its emotions.