I heard about the Alexa Web API for Games a year ago, but it was in closed beta and required an application, which I skipped because I had no idea what to build. I also have no background in game development at all (though I consider myself a casual gamer). When I heard about this hackathon and learned that the Web API is now generally available, I figured it was a great time to tinker with it and learn more about it.

What it does

Ambidextrous Robot is a memory game similar to Simon. It can definitely help anyone challenge their memory (and ear). Ambid (the robot's name) shows you a series of movements, and then you have to say them back in the right order. Each round, the sequence of moves gets longer.
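The core round mechanic can be sketched in a few lines of JavaScript. This is only an illustration of the Simon-style loop, not the actual skill code, and the move names are made up:

```javascript
// Hypothetical move set; the real skill's moves may differ.
const MOVES = ["left arm up", "right arm up", "wave", "bow"];

// Each round appends one random move to the sequence the player must repeat.
function nextRound(sequence) {
  const move = MOVES[Math.floor(Math.random() * MOVES.length)];
  return [...sequence, move];
}

// The player's answer must match the full sequence, in order.
function isCorrect(sequence, answer) {
  return sequence.length === answer.length &&
    sequence.every((move, i) => move === answer[i]);
}
```

Because the whole sequence is replayed and re-answered every round, the game naturally gets harder the longer you survive.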

How I built it

As I've mentioned, I have no background in game development. Fortunately, I found a 3D model that struck me among the three.js examples. I built a single-page web app around it and wrote a JavaScript API to control it. I then integrated the Alexa JS library and found that it is not difficult to use (it is not that big, with only a few methods and event handlers available).
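For context, a Web API for Games skill launches its web app by returning an `Alexa.Presentation.HTML.Start` directive from the backend. A minimal sketch of that directive (the URL and timeout are placeholders, not the skill's real values) might look like:

```json
{
  "type": "Alexa.Presentation.HTML.Start",
  "request": {
    "uri": "https://example.com/index.html",
    "method": "GET"
  },
  "configuration": {
    "timeoutInSeconds": 300
  }
}
```

Once the page loads on the device, the web app and the skill backend can exchange messages through the Alexa JS library's handful of methods and event handlers.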

Challenges I ran into

I started with the Web API but had issues with the conversation flow I wanted. I then realized that I still didn't have a good grasp of Dialog Delegation, so I had to experiment with it. I did look at Alexa Conversations and the Controls Framework, but given that they are still in beta and not much time was left in the hackathon, I had to defer using them. I then decided to start with a "voice first" design, so I went back to basics but also had to look at APL for Audio (because SSML has a five-audio-clip limit per response). I find APLA amazing because it is available on every device without needing to enable anything under the skill's interfaces. But there was another big blocker: it does not support the dialog delegation techniques I had just learned (auto delegation and slot elicitation)! So I finally went with APL, which I had some experience using before. With simple APL, I was able to create the game logic and flow I wanted. But I had to find a way to convert the 3D model I had initially planned to use into 2D.
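The way APLA gets around the SSML clip limit is by composing audio in a document instead of in the speech string. A minimal sketch of an APLA document that mixes speech over a sound effect (the wording and audio URL are placeholders) could look like:

```json
{
  "type": "APLA",
  "version": "0.91",
  "mainTemplate": {
    "parameters": ["payload"],
    "item": {
      "type": "Mixer",
      "items": [
        { "type": "Speech", "content": "Watch my moves!" },
        { "type": "Audio", "source": "https://example.com/audio/robot-servo.mp3" }
      ]
    }
  }
}
```

The document is returned from the skill with an `Alexa.Presentation.APLA.RenderDocument` directive, and components like `Mixer` and `Sequencer` control whether clips play together or one after another.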

Accomplishments that I'm proud of

Learned a lot!

  • Had a first taste of the Web API. I can see its usefulness even for non-game skills.
  • Was able to mix audio using APLA!
  • Finally understood Dialog Delegation, and why I prefer doing it in code
  • Used VS Code for local debugging (highly recommended)
  • Learned about APL animations. Unfortunately, they were not working on my Fire TV...
  • Learned about APL transformers
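On doing dialog delegation in code: the appeal is that the skill decides, turn by turn, which slot to ask for and with what prompt, by returning `Dialog` directives instead of relying on the interaction model's auto prompts. A hedged sketch of a response that elicits a slot (the slot name and wording are made up for illustration):

```json
{
  "outputSpeech": {
    "type": "SSML",
    "ssml": "<speak>What was my first move?</speak>"
  },
  "directives": [
    { "type": "Dialog.ElicitSlot", "slotToElicit": "move" }
  ],
  "shouldEndSession": false
}
```

Swapping the directive for `Dialog.Delegate` instead hands the prompting back to Alexa's auto delegation, which is exactly the control trade-off mentioned above.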

What I learned

Same as what I mentioned above. I also learned a very important process: storyboarding, thanks to the Alexa Twitch channel! And I learned that I need to study Alexa Conversations and the Controls Framework in the future.

What's next for Ambidextrous Robot

I have so many other ideas for this skill:

  • More and better graphics and assets
  • Multiplayer support (for each move in a round, a random player will be picked to answer)
  • Speeding up the movement audio
  • Leaderboard, badges
  • Different game modes (each round, the moves will be random and not carried over)
  • More moves
  • Different characters
  • etc.
