In my kids' lunches, for the past couple of years, I've been including index cards with the next installment of a game I call "Lunch Life." Each day presents them with a card bearing a fresh description of the world and characters. On the back of the card are one to three choices they can select. Every night, while I make their lunches, I read the choices they made that day and think up the next installment. This mirrors a game I played in class with my best friend in seventh grade. For two years, we attended a tyrannical private school filled with angry rich kids, and we both felt wildly out of place there. Passing these notes back and forth, with me as "dungeon master," was one way we managed to escape.

Since then, building out Zork-style adventure games has always been my favorite way of learning a new platform. Games like that present you with a nice collection of challenges, including data structures, text parsing, and even concurrency. In this case, the inspiration also came from the Echo itself, and the idea that a natural voice interface could be just like a real interactive session with a live "dungeon master" style partner. Keeping track of the descriptions proved difficult for the kids who played, so I figured out a way to dynamically update a web page with a map as the game progressed.

Another source of inspiration for this project is the idea that the stories can be serialized. It would be so fun to add chapters weekly, and allow players to progress through a long and gradually unfolding tale as they master each weekly installment.

What it does

The Echo will gradually unfold the details of a story for each player. Events in the story play out according to the players' decisions. Meanwhile, on a companion browser, a map of the world gradually takes shape in direct response to the players' choices.

How I built it

All code was written in Sublime Text. The processing is entirely serverless (AWS Lambda), and game state is persisted in Amazon DynamoDB. A simple shortening algorithm takes a hash of the user's session ID, which is then encrypted and stored alongside the user's "shortcode." The shortcode is the key of entry for the interactive graphics that play out in a companion browser. The Alexa skill publishes messages to an AWS IoT MQTT topic, and the web interface subscribes to that topic; each player's "channel" is uniquely identified by the player's shortcode.
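The shortcode idea can be sketched roughly like this. This is a hypothetical reconstruction, not the project's actual code: the function name, hash choice, and six-character length are my assumptions (the original also encrypts the hash before storing it, which is omitted here).

```python
import hashlib

def make_shortcode(session_id: str, length: int = 6) -> str:
    """Derive a short, stable code from an Alexa session ID.

    Illustrative sketch: hash the session ID, then keep the first few
    hex characters as a human-readable shortcode the player can type
    into the companion web page.
    """
    digest = hashlib.sha256(session_id.encode("utf-8")).hexdigest()
    return digest[:length].upper()

# The same session always maps to the same shortcode, so the browser
# and the Alexa skill can agree on a channel without coordination.
code = make_shortcode("amzn1.echo-api.session.abc123")
```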

Challenges I ran into

Figuring out how to dynamically update the web browser was tough. There's a lot of sample code available out there for publishing to AWS IoT from Lambda, but most of the examples make such heavy use of intermediary frameworks that following the sample code will only enable one to reproduce exactly the project described. Luckily, I happened on the Paho MQTT library, and managed to step through the Amazon docs to the point where I understood the underlying plumbing. From there, the challenge was to create a hash that would uniquely identify the user's channel and prevent crossed streams between concurrent games.
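The per-player channel might look something like the following. The topic naming scheme (`natterbot/game/<shortcode>`) is my own invention for illustration, not the project's documented format:

```python
def channel_topic(shortcode: str) -> str:
    """Build the per-player MQTT topic name (hypothetical naming scheme).

    Each player's browser subscribes to exactly one topic, so map
    updates for one game never reach another player's screen.
    """
    if not shortcode.isalnum():
        raise ValueError("shortcode must be alphanumeric")
    return f"natterbot/game/{shortcode}"

# In the Lambda handler, game events would be published to this topic
# (e.g. via the AWS IoT data-plane publish API), and the browser would
# subscribe to the same topic with the Paho MQTT client.
```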

I also found it much easier (and cheaper) to de-couple the game logic and create a web interface for testing, rather than relying on the Echo itself or the Amazon developer testing harness. It's far simpler to inspect the console in Chrome than to repeatedly refresh the CloudWatch logs from Lambda.

Also, creating the game was a bit tedious, so I built an authoring environment.

From there, the challenge was to properly account for all permutations of user decisions and events in the game.

Examples: What if there's a pair of magical eyeglasses that changes the way some things look, but not everything? What if entering a room has an effect on the room? What if using one object has an effect on an object far away? How best to make non-player characters follow the main actor around the world?
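One way to handle cases like the eyeglasses is to key alternate descriptions off state flags. This is a minimal sketch under my own assumptions (the class, field names, and flag names are all hypothetical, not taken from the project):

```python
from dataclasses import dataclass, field

@dataclass
class GameObject:
    name: str
    description: str
    # Alternate descriptions shown only when a world-state flag is set,
    # e.g. while the player is wearing the magical eyeglasses.
    alt_descriptions: dict = field(default_factory=dict)

    def describe(self, flags: set) -> str:
        """Return the first alternate description whose flag is active."""
        for flag, text in self.alt_descriptions.items():
            if flag in flags:
                return text
        return self.description

painting = GameObject(
    "painting",
    "A dusty painting of a ship.",
    alt_descriptions={"wearing_eyeglasses": "The painting hides a glowing map."},
)

painting.describe(set())                   # ordinary view
painting.describe({"wearing_eyeglasses"})  # altered view
```

The same flag set can drive room effects and remote-object effects: an event handler toggles flags, and every description is re-derived from the flags on the next turn.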

Accomplishments that I'm proud of

The companion screen was really exciting to get working. I haven't seen that before. The world is big, so there's every chance I'm not the first to do it, but maybe I am! Also, I'm pretty sure I was the first to ever create a choose-your-own-adventure for the Echo. The bones of this game were built the month the Echo was released. I wasn't able to get it to the point where it could become public back then, but it worked! And my son's friend Carol has the distinction of being the first person ever to play a voice-activated choose-your-own-adventure game! It's something that'll never be documented (unless Amazon has a record of this skill in test mode from two years ago).

What I learned

It's a bit confusing the way that Echo deals with...uh...confusion. What you ultimately have to do is train it to respond to actual nonsense sentences (literally: combinations of random words). That way, when it encounters something it doesn't recognize, it lumps the utterance into that category.
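In the old-style interaction model, that "nonsense" training might look roughly like this: sample utterances of random words mapped to a dedicated intent. The intent name and the specific word salad below are illustrative, not the skill's real model:

```
NonsenseIntent purple monkey dishwasher
NonsenseIntent banana sideways trombone
NonsenseIntent gravel umbrella whisper nine
```

Anything the player says that doesn't match a real intent tends to fall into this bucket, which the skill can then answer with a "sorry, I didn't catch that" style reprompt.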

What's next for NatterBot

The authoring tool does everything except publish the interaction model, so I think that's probably next. I also need to refine the event-processing logic, and play around with prosody a bit to make Alexa come across as more of a storyteller. You can talk to non-player characters, so if you encounter a fairy, Alexa might adopt the high, squeaky voice of a fairy. Or, if you enter a dark cave, Alexa might start to whisper. I think that'll add a lot of life to the stories.
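Alexa responses support SSML, so the fairy voice and the cave whisper could be sketched along these lines (the exact wording and pitch value are mine; `amazon:effect name="whispered"` and `prosody` are standard Alexa SSML tags):

```xml
<speak>
  You step into the dark cave.
  <amazon:effect name="whispered">Something moves in the shadows.</amazon:effect>
  A fairy flits into view and giggles.
  <prosody pitch="+30%" rate="fast">Follow me, traveler!</prosody>
</speak>
```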

I also want to include more audio effects, and build that mechanism into the authoring tool.
