We wanted to highlight the awesome capabilities of the new Entity Sensing and Motion APIs, and we thought: what better place to do it than in our flagship skill!
Pointless is a game people love to play every day, and with these motion and sensing features added, their device can follow them as they carry on with their daily routine.
What it does
Pointless is a popular game show in the UK; the aim of the game is to score the fewest points possible. To do this, you need to give the most obscure answer possible.
Users are played a question, for example: "Name a currency of a country bordering China". Background bed music plays, and they might shout out: "Alexa, rupee", and have the answer revealed to them.
The skill provides a daily challenge for regular players; each daily challenge has three questions, all based on the same category.
If you're a subscriber, you also have access to multiplayer mode as well as challenge mode, where you can play through the entire Pointless question set. You can also compete on our global leaderboard and play the "final round"!
How we built it
Pointless was built using the ASK SDK for Node.js, running on an AWS Lambda function. We store our audio and visual assets in S3 buckets and persistent user data in DynamoDB. For our daily leaderboards, we used Amazon GameOn.
Pointless uses APL to render the different views, and in some sequences, such as the countdown, we use APL commands.
The Pointless countdown is one of the main features of the game. We built it using a sequence of "SetPage" commands that execute in parallel with "SpeakItem" commands, which play a certain duration of the Pointless countdown audio depending on how many points you scored for your question.
If you get the answer wrong, we use the ControlMedia play command to show a losing animation video.
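The countdown described above can be sketched as a list of APL commands. This is a minimal sketch, not the skill's real code: the component ids ("countdownPager", "countdownAudio0", "losingVideo") and the number of steps are hypothetical.

```javascript
// Sketch of the Pointless countdown as an APL command sequence.
// Component ids and step counts are illustrative assumptions.

// One countdown step: flip the Pager to the next "score" frame while
// the matching slice of the countdown audio speaks, in parallel.
function buildCountdownStep(pageIndex) {
  return {
    type: 'Parallel',
    commands: [
      {
        type: 'SetPage',                        // advance to the next frame
        componentId: 'countdownPager',
        position: 'absolute',
        value: pageIndex
      },
      {
        type: 'SpeakItem',                      // play that frame's audio portion
        componentId: `countdownAudio${pageIndex}`
      }
    ]
  };
}

// A wrong answer ends with the losing animation video.
const losingCommand = {
  type: 'ControlMedia',
  componentId: 'losingVideo',
  command: 'play'
};

// The number of steps depends on the points scored; three is just a sample.
const countdown = [...Array(3).keys()].map(buildCountdownStep).concat(losingCommand);
```

The commands array would then be sent to the device via an `ExecuteCommands` directive.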
We recently added some Node 12 features to improve the latency of the skill, namely by using Promise.allSettled in places where there are lots of API calls.
Challenges I ran into
Pointless poses unique problems with answer recognition. This is because the user inputs are potentially infinite, since we give people open-ended questions with no pre-defined options.
In testing, we noticed users saying answers that weren't right, and because those answers weren't in our interaction model, they would easily get misheard and misread back to the user. Users would get frustrated that their answer wasn't being read back to them properly, even though it was wrong!
For example, if the question was "Name any planet in our universe" and the user kept shouting out "Pluto" (not in our model), the skill would not understand and would come up with some variation: "I heard: Luton, is that your final answer?"
To get around this, we combed through every question in our set and came up with eligible incorrect answers: answers that are wrong, but could understandably be given by players.
In testing, this strengthening of the interaction model decreased the chance of Alexa mishearing.
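In an ASK interaction model, answers like these live as values of a custom slot type. A minimal sketch of what one strengthened slot type might look like — the type name `PlanetAnswerType` and the specific values are hypothetical, not taken from the skill:

```json
{
  "name": "PlanetAnswerType",
  "values": [
    { "name": { "value": "mercury" } },
    { "name": { "value": "mars" } },
    { "name": { "value": "pluto" } },
    { "name": { "value": "the moon" } }
  ]
}
```

Here "pluto" and "the moon" are the eligible incorrect answers: wrong, but plausible enough that the model should recognise them rather than guess at "Luton".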
Accomplishments that we're proud of
We're proud of the fact we managed to incorporate the new ScreenImpactCenter choreo feature; this was difficult due to limitations of the simulator in the developer console.
We had to add another APL command, of type SmartMotion:PlayNamedChoreo, into our existing Pointless countdown sequence. This was initially tricky due to the number of APL "pages" that make up the sequence; navigating through these large blocks of JSON can always be cumbersome.
We added this command to the end of the losing video animation, so now the screen does a rapid side-to-side motion; it's like the device is shaking its head at your answer!
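The choreo command itself is small; appended after the losing-video command, the addition looks roughly like this. The `SmartMotion:` prefix assumes the extension was requested under that name in the APL document's "extensions" list:

```json
{
  "type": "SmartMotion:PlayNamedChoreo",
  "name": "ScreenImpactCenter"
}
```

The hard part wasn't the command — it was finding the right place for it inside the large nested command sequence.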
What we learned
We learned a lot about approaching updates in large infrastructures.
The Pointless codebase was already quite big, and we didn't want to crowbar these updates into it, since that could cause the codebase to become increasingly hard to maintain.
The main issue was that each APL view references a predefined JSON file. This was inconvenient, since we needed to add the relevant "settings" and "onMount" keys in every file for Entity Sensing and Motion to work. Initially, we just copied and pasted the block of JSON into the 20+ APL JSON files.
This blunder made testing tricky, and we couldn't be sure whether we had inadvertently introduced some human error.
We decided to refactor our APL rendering functions, so that when a view is rendered, it defaults to having the "settings" and "onMount" keys and properties in it.
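The refactor can be sketched as a small merge step at render time, instead of boilerplate pasted into 20+ files. This is a sketch under stated assumptions: the extension names, URIs, settings keys, and the `onMount` command below are illustrative guesses, not the skill's exact values.

```javascript
// Inject the Entity Sensing / Motion boilerplate once, at render time,
// rather than duplicating it across every stored APL JSON file.
// All extension names/URIs and the onMount command are assumptions.
const MOTION_DEFAULTS = {
  extensions: [
    { name: 'SmartMotion', uri: 'alexaext:smartmotion:10' },
    { name: 'EntitySensing', uri: 'alexaext:entitysensing:10' }
  ],
  settings: {
    SmartMotion: { deviceStateName: 'DeviceState' },
    EntitySensing: { entitySensingStateName: 'EntitySensingState' }
  },
  onMount: [{ type: 'SmartMotion:FollowPrimaryUser' }]
};

// Shallow-merge the defaults into a stored APL document; keys the
// document already defines win, so per-view overrides still work.
function withMotionDefaults(aplDocument) {
  return { ...MOTION_DEFAULTS, ...aplDocument };
}

// Example: a stored view that knows nothing about motion...
const view = { type: 'APL', version: '2023.2', mainTemplate: { items: [] } };
// ...picks up the "settings" and "onMount" keys when rendered.
const rendered = withMotionDefaults(view);
```

With this in place, a single change to the defaults propagates to every view, which is exactly what testing the copy-pasted version lacked.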
What's next for Pointless
We're looking forward to seeing more Motion API choreos to implement in Pointless. We'd like a new choreo that we could use for a "correct answer", so the motion features wouldn't be shown exclusively to users who answer incorrectly!