Inspiration

What's it Like is an Alexa skill that teaches empathy to kids between the ages of three and eight. The skill aims to teach kids to accept people's differences through a conversation with Echo the robot.

What it does

Echo asks the user, a kid, to help it learn empathy by teaching it about people and their feelings. The skill prompts the user to explain to Echo some differences between the user and the user's friends. Once the user acknowledges that there are differences, a follow-up prompt asks how it feels to be different. Echo then ends the section with a response about how, even though we have small differences, we are all largely the same. Sections cover topics such as having a unique name, different heights, ages, allergies, accents, languages, birthplace/where a person is from, different hair, etc.

How we built it

We started with a design document that listed all of the prompts (responses) and user intents. We tested the design of the game by speaking our interaction model out loud, with one person playing the role of Echo and the other playing the user. Once we had iterated on the design, we mapped out the interaction paths in a flowchart. From the flowchart, we listed just over 100 test cases that would tell us whether the skill was working as intended. After that, we scoured the internet for free sound effects we could use in the skill. Finally, we started building the skill on both the Storyline and Voice Apps platforms to see which one best fit our needs. We ended up submitting the skill from the Voice Apps platform because it offered more features.

Challenges we ran into

We originally planned to program the skill in AWS Lambda, but that proved difficult with the first app we published, so we decided to try out some tools this time around. We started with Storyline, a platform for building Alexa voice apps. Storyline was easy to work with, but it limited our ability to make the skill dynamic. One of our goals was to make the skill fun to play repeatedly, which means each playthrough needs to give the user a different experience. We didn't think Storyline could accomplish this, so we tried a different platform called Voice Apps. Voice Apps offers more features, but we ran into some challenges with its reliability. We worked extensively with the support teams from both Storyline and Voice Apps to fix bugs in their software along the way. The Voice Apps version ended up being the better skill overall, so we chose to publish with it.

Accomplishments that we're proud of

We are proud to have gotten this skill done in time for the competition. It took some long nights and a lot of troubleshooting to get this across the finish line.

What we learned

We got feedback from users that the instructions at the beginning of the skill were too long, so we cut down the text; users seem to want to skip right to the game. We also learned how to navigate the intricacies of creating intents: utterances that are too dissimilar should be separated into different intents, otherwise an intent can become too broad and trigger when it should not.
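As an illustration of that lesson, an Alexa interaction model groups sample utterances under named intents, and two narrow intents are less likely to over-trigger than one broad one. The intent names and utterances below are hypothetical, not taken from the published skill:

```json
{
  "intents": [
    {
      "name": "ExplainDifferenceIntent",
      "samples": [
        "my friend is taller than me",
        "we have different hair"
      ]
    },
    {
      "name": "ShareFeelingIntent",
      "samples": [
        "it feels okay to be different",
        "sometimes being different feels lonely"
      ]
    }
  ]
}
```

Keeping the "explain a difference" and "share a feeling" utterances in separate intents means each intent matches a tighter cluster of phrasings.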

What's next for What's it Like

As mentioned above, the skill is divided into sections. Each time a user plays through the skill, the sections are chosen in random order. There are currently six sections, and five of the six are played in each session. We have more sections planned out that just need to be programmed in, including ones on income inequality and finding it difficult to focus. In addition to the variable sections, some words are variable: certain nouns and verbs are stored as variables that swap out randomly in sentences each time a person plays. We intend to add more of these variables.
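The randomization described above can be sketched in a few lines of Python. The section names, word variants, and template sentence here are made up for illustration; the real skill configures all of this inside the Voice Apps platform:

```python
import random

# Hypothetical section names and word variants, for illustration only.
SECTIONS = ["names", "heights", "ages", "allergies", "accents", "origins"]
WORD_VARIANTS = {
    "friend": ["friend", "buddy", "pal"],
    "play": ["play", "hang out", "spend time"],
}

def pick_sections(sections, count=5):
    """Choose `count` distinct sections in random order, so each playthrough differs."""
    return random.sample(sections, count)

def fill_template(template, variants):
    """Swap each {placeholder} in the template for a randomly chosen variant word."""
    return template.format(**{key: random.choice(words) for key, words in variants.items()})

playthrough = pick_sections(SECTIONS)
line = fill_template("Tell Echo about a {friend} you like to {play} with!", WORD_VARIANTS)
```

Drawing five of six sections with `random.sample` guarantees no section repeats within a session, while the word variants vary the phrasing even when the same section comes up twice in a row across sessions.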
