Simon Says is a fun game I like to play with my family, so I thought it might be even more fun if Alexa guided the game instead of all of us taking turns. So, I set out to use APL (Alexa Presentation Language) and APLA (APL for Audio), along with Alexa Conversations, to build something unique and different from anything else the skill store has to offer.
It was important to me that the game be family friendly and entertaining both visually and aurally. If a user only has a speaker, the game creates a rich, vibrant soundscape with background music, pauses, and speech-speed changes. If they have a device with a screen, they get all of that plus visuals that add a lot of humor to the game.
What it does
There are dozens and dozens of commands that Alexa will string together to play Simon Says. The length of the game will be random, as will the commands themselves. It will, however, ensure that the same body part or action type isn't performed repetitively. For example, it won't do five actions involving your ears in a row.
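One simple way to enforce that no-repeats rule is to tag every command with a category (body part or action type) and skip any draw that matches the previous category. This is a hypothetical sketch, not the skill's actual code; the `Command` record and `pick` method are illustrative names, and the real rule may be looser than "never twice in a row."

```java
import java.util.*;

public class CommandPicker {
    // Category models the body part / action type grouping described above.
    record Command(String text, String category) {}

    // Draws `count` commands at random, rejecting any draw whose category
    // matches the previous pick so the same body part never repeats back to back.
    static List<Command> pick(List<Command> pool, int count, Random rng) {
        List<Command> round = new ArrayList<>();
        String lastCategory = null;
        while (round.size() < count) {
            Command c = pool.get(rng.nextInt(pool.size()));
            if (c.category().equals(lastCategory)) continue; // re-draw
            round.add(c);
            lastCategory = c.category();
        }
        return round;
    }

    public static void main(String[] args) {
        List<Command> pool = List.of(
            new Command("touch your ears", "ear"),
            new Command("wiggle your ears", "ear"),
            new Command("jump up and down", "jump"),
            new Command("wave your arms", "arm"));
        for (Command c : pick(pool, 5, new Random())) {
            System.out.println(c.text());
        }
    }
}
```

Note that rejection sampling like this assumes the pool spans at least two categories; with a single category it would loop forever, so a real implementation would validate the pool first.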
If you have a screen it will also sync up video content along with what Alexa is saying to list out the commands visually (though it can't be used to cheat!).
How I built it
I used a database to configure the commands and Java as the backend to handle the main logic. Each user begins the game through Alexa Conversations and is then handed off to the intent model, which loops for as long as they would like to play. After each round it asks the user whether they would like to continue, and if so it triggers a completely new experience with different prompts.
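The round loop boils down to: play a round, ask "continue?", and repeat on "yes" with freshly drawn prompts. Here is a minimal sketch of that loop with a scripted iterator standing in for the yes/no intent; the production skill routes this through the ASK SDK's request handlers rather than a plain loop, and all names here are illustrative.

```java
import java.util.*;

public class RoundLoop {
    // Stand-in for drawing a fresh, randomized command list each round.
    static String playRound(Random rng) {
        return "Round of " + (3 + rng.nextInt(5)) + " commands";
    }

    // Plays rounds until the (simulated) yes/no intent says stop,
    // returning how many rounds were played.
    static int playUntilDone(Iterator<String> answers, Random rng) {
        int rounds = 0;
        String answer;
        do {
            System.out.println(playRound(rng)); // new prompts every time
            rounds++;
            answer = answers.hasNext() ? answers.next() : "no";
        } while ("yes".equalsIgnoreCase(answer));
        return rounds;
    }

    public static void main(String[] args) {
        Iterator<String> scripted = List.of("yes", "yes", "no").iterator();
        int played = playUntilDone(scripted, new Random());
        System.out.println("Played " + played + " rounds. Thanks for playing!");
    }
}
```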
All of the music is fully licensed through Soundstripe or Soundsnap, with new tracks constantly added to keep the game feeling fresh and alive. The use of dynamic content means you'll never get the same list of actions twice, and new actions are added as I think of them (or via user suggestions!).
Challenges I ran into
I wanted the entire conversation to work with APLA and APL in sync, which took a lot of timing work to get right.
Accomplishments that I'm proud of
I was able to sync up the video with the APL and APLA simultaneously, which was a huge difficulty that I'm really happy I overcame. Timing it all together and using the APL tick was really useful, and then being able to have background music and more from APLA was icing on the cake.
Another really neat thing I was able to work into the game was prosody. When you play Simon Says, people will usually say it like this:

Slow: "Simon says"
(pause a beat)
Fast: "Jump up and down"
The intent is to trick opponents: the sudden change in talking speed catches them off guard. I recreated this effect with SSML breaks and prosody rates, slowing Alexa down, pausing, and then speeding her up when giving the command. I also timed the window users have to perform each requested action before the next prompt begins, so users don't need to give input after each and every action.
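The trick above can be sketched as a small SSML builder: a slow `<prosody>` span for "Simon says," a `<break>`, then the command at a faster rate. The `<prosody rate>` and `<break time>` tags are standard Alexa SSML, but the exact rates and pause length here are guesses, not the skill's actual values.

```java
public class SimonSsml {
    // Escape characters that are special in SSML/XML.
    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    // Slow "Simon says", a beat of silence, then the command spoken fast.
    static String simonSays(String command) {
        return "<speak>"
             + "<prosody rate=\"slow\">Simon says</prosody>"
             + "<break time=\"600ms\"/>"
             + "<prosody rate=\"fast\">" + escape(command) + "</prosody>"
             + "</speak>";
    }

    public static void main(String[] args) {
        System.out.println(simonSays("jump up and down"));
    }
}
```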
What I learned
I learned how to build Alexa Conversation models and how to tie them into an intent model that allows the user to interact in entirely new and unique ways compared to other things I have built. I also learned a lot more about APL and APLA by using a lot of fun elements to create really cool features.
What's next for Simon Says
I plan to add more social content if the game finds an audience, including competitive play around the world with little prompts. For example, you could play against people who aren't in the room with you and crown a global winner.
I also intend to add more conversation elements that let users configure the game. For example, maybe they want four seconds between commands instead of two, or would like to change the voice that delivers the dialogue. Those are features that lend themselves well to conversation design for collecting and updating user settings.