History

Restaurant Finder was one of the first Alexa skills I created, back in 2016. The unique aspect of this skill is its ability to filter results based not just on type of cuisine but also on qualitative aspects like "good," "expensive," or "open." The skill had low but consistent usage. For this contest, I decided to review customer interaction with the skill to see how I could improve it. I looked at reviews that had been left on Amazon, and I integrated my skill with Dashbot, an analytics platform that let me see how customers were actually using the skill. After reviewing many customer sessions, I identified the following areas to address:

  • The skill required you to repeat your entire search to filter results, as opposed to just being able to say "good" after you had requested Chinese restaurants
  • There was a clunky feature that asked users to say their ZIP code so the skill could remember their location for future searches
  • The skill had been ported for Canadian users but did not work - it hit an error after being launched
  • There were no cards or other interaction with the companion Alexa app or a screen display
  • Interactions were not fluid - sessions would end after the user viewed their first restaurant, or if the skill couldn't understand the customer's input
  • Results were canned with no randomization of responses - for example, the response to a LaunchRequest was always "you can say find a cheap Chinese restaurant in Seattle."

Conversation Flow

To tackle the problems with conversation and with guiding the customer through my skill, I drew out an explicit conversation flow and added states to my Lambda function via the alexa-sdk. The conversation flows through a few different states:

  • Empty: This is the state when the user first launches the skill, before they have done any searches
  • RESULTS: In this state, the user has done a search. The attributes.lastSearch and attributes.lastResponse fields are filled with the search performed and the restaurants found
  • LIST: The user can read results five at a time; while they are reading results, they are in this state. They can move forward and backward through the result list. Note that if there are five or fewer results that match the search terms, the user goes directly into the LIST state after doing a search. attributes.lastResponse.read keeps track of where they are in the list. The full list is displayed on the device screen, if it supports the Display directive
  • DETAILS: Once the user has found a restaurant that they want to hear more about, they enter this state. attributes.lastResponse.details keeps track of which restaurant in the list is currently being detailed. The user can go back to the LIST state from this point, or can do a new search. Details are presented to the user via speech and a companion card, including a hero image.
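As a rough sketch of the flow above: the state names and the attributes.lastResponse shape come from this description, but the helper functions are illustrative assumptions, not the skill's actual alexa-sdk handlers.

```javascript
// Conversation states from the write-up. (In alexa-sdk these would be
// registered via Alexa.CreateStateHandler; here they are plain values.)
const STATES = { EMPTY: '', RESULTS: 'RESULTS', LIST: 'LIST', DETAILS: 'DETAILS' };
const PAGE_SIZE = 5; // results are read five at a time

// After a search, store the results and pick the next state. Short result
// lists (five or fewer) skip RESULTS and drop straight into LIST.
function stateAfterSearch(attributes, search, restaurants) {
  attributes.lastSearch = search;
  attributes.lastResponse = { restaurants, read: 0, details: null };
  return restaurants.length <= PAGE_SIZE ? STATES.LIST : STATES.RESULTS;
}

// Read the next page of the list; attributes.lastResponse.read tracks
// how far into the results the user has gotten.
function readNextPage(attributes) {
  const { restaurants, read } = attributes.lastResponse;
  const page = restaurants.slice(read, read + PAGE_SIZE);
  attributes.lastResponse.read = read + page.length;
  return page;
}

// Entering DETAILS for one restaurant remembers its index so the user
// can return to the same spot in the LIST afterwards.
function showDetails(attributes, index) {
  attributes.lastResponse.details = index;
  return attributes.lastResponse.restaurants[index];
}
```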

Other Updates

  • I integrated with Alexa's built-in Device Location API. If the customer doesn't specify a location, they are told that they can either allow the skill access to their current location or they can say the location that they want to search in.
  • The skill uses different localized strings for Canadian users, which not only fixed the launch error but also provides local examples like Toronto or Vancouver when offering help
  • Help is now topical based on the above-mentioned state that the user is in
  • I sprinkled randomized responses and prompts throughout the skill's dialogue. There are now 240 different responses that the skill can utter in reply to a LaunchRequest
  • The skill integrates with the Google Maps API to convert a postal code to a city name for customers that perform a ZIP code search
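The ZIP-to-city conversion can be sketched roughly as follows. The request URL follows the public Google Maps Geocoding API; the extractCity helper and the trimmed sampleResponse are illustrative assumptions, not the skill's actual code or a real API reply.

```javascript
// Build a Google Maps Geocoding API request for a postal code lookup.
// The API key is a placeholder; the skill's real key and exact
// parameters are not shown in the write-up.
function geocodeUrl(postalCode, apiKey) {
  const params = new URLSearchParams({
    components: `postal_code:${postalCode}`,
    key: apiKey,
  });
  return `https://maps.googleapis.com/maps/api/geocode/json?${params}`;
}

// Pull the city out of a geocoding response: the address component
// tagged with the "locality" type holds the city name.
function extractCity(geocodeResponse) {
  for (const result of geocodeResponse.results || []) {
    const locality = (result.address_components || [])
      .find((component) => component.types.includes('locality'));
    if (locality) return locality.long_name;
  }
  return null;
}

// Trimmed-down example of the response shape for a Seattle ZIP code.
const sampleResponse = {
  status: 'OK',
  results: [{
    address_components: [
      { long_name: '98101', short_name: '98101', types: ['postal_code'] },
      { long_name: 'Seattle', short_name: 'Seattle', types: ['locality', 'political'] },
      { long_name: 'Washington', short_name: 'WA', types: ['administrative_area_level_1', 'political'] },
    ],
  }],
};
```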