The massive number of titles on the market makes entertainment discovery extremely time-consuming and overwhelming. itcher solves this problem with instant, personalised movie, TV show, music, book and game recommendations, generated by leveraging ratings and reviews from its user community.

What it does

The itcher Skill provides a voice interface for itcher, seamlessly integrating with all other supported platforms (Android, iOS, web) and allowing users to access their own personalised book and movie recommendations directly via Alexa.

How it was built

The itcher Skill is developed in ES6 JavaScript on Node.js, and leverages the existing itcher API (mostly written in PHP) to generate user recommendations and fetch titles, descriptions, reviews, etc.
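As a minimal sketch of the Node.js side (the field names are assumptions for illustration, not the actual itcher API schema), a Skill handler essentially wraps a recommendation fetched from the API into the JSON envelope Alexa expects back:

```javascript
// Hypothetical sketch: turn a recommendation object (as the itcher API might
// return it) into the response format the Alexa Skills Kit expects.
function buildRecommendationResponse(rec) {
  const speech = `How about ${rec.title}? ${rec.description}`;
  return {
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text: speech },
      // Keep the session open so the user can ask for another title.
      shouldEndSession: false,
    },
  };
}
```

The real Skill plugs functions like this into the Alexa request/response cycle; the sketch only shows the shape of the translation step.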

Challenges we ran into

The main challenges we encountered while developing the itcher Skill were:

  1. Time constraints: unlike a traditional GUI, where visual feedback lets the user know what is going on while results are being generated, a voice interface requires very quick responses to keep the conversation alive and avoid user frustration. As a result, we had to squeeze every bit of speed out of the recommendation engine to make responses as fast as possible.
  2. Lack of input contexts on Alexa: compared to other, more recent platforms (e.g., API.AI), Alexa provides a more basic infrastructure on which to build conversations. Specifically, to support the conversational flow we had in mind, the itcher Skill needed distinct stages, so that it could offer different options to the user depending on the conversation state and react to user input accordingly.
  3. Recommendation filtering by genre: due to the non-visual nature of the voice interface, we couldn't present every available genre without having the Skill read out a very long and boring list. To overcome this, we implemented a mixed mechanism that tells the user the two genres most likely to contain her best recommendations, but can also fuzzy-match the available categories against free user input.
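The fuzzy-matching step in point 3 can be sketched roughly as follows (genre names and the distance threshold are illustrative assumptions, not the Skill's actual values): pick the available genre with the smallest edit distance from what the user said, and reject the match if even the best candidate is too far off.

```javascript
// Classic Levenshtein edit distance via dynamic programming.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                    // deletion
        dp[i][j - 1] + 1,                                    // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)   // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Match free user input against the available genres; return null when even
// the closest genre is too distant (threshold here is an assumption).
function matchGenre(input, genres) {
  let best = null;
  let bestDist = Infinity;
  for (const g of genres) {
    const d = levenshtein(input.toLowerCase(), g.toLowerCase());
    if (d < bestDist) {
      bestDist = d;
      best = g;
    }
  }
  return bestDist <= Math.ceil(best.length / 3) ? best : null;
}
```

This tolerates the small transcription errors Alexa introduces (e.g. "sciense fiction") while still refusing input that matches no category at all.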

Accomplishments we are proud of

The final version of the itcher Skill overcomes many of Alexa's current NLP limitations (lack of context, relatively frequent input misrecognitions) by implementing a significant amount of internal logic that keeps track of conversational state in order to:

  1. drive the conversation (with relevant options at each stage),
  2. provide guidance to the user (as well as offering context-specific answers to 'help' requests),
  3. suitably re-route misrecognised intents whenever possible.
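The state tracking and re-routing described above could look something like this minimal sketch (state names, intent names and prompts are hypothetical, not the Skill's actual ones): a table maps each conversation stage to the intents it accepts, and anything unexpected falls back to a state-aware reprompt rather than a generic error.

```javascript
// Hypothetical stage table: for each conversation state, the intents that
// make sense there and what to say in response.
const STAGES = {
  CHOOSING_TYPE: {
    BookIntent: () => 'Here is a book you might like...',
    MovieIntent: () => 'Here is a movie you might like...',
    'AMAZON.HelpIntent': () => 'Say "books" or "movies" to get started.',
  },
  HEARING_RECOMMENDATION: {
    'AMAZON.YesIntent': () => 'Great, adding it to your list.',
    'AMAZON.HelpIntent': () => 'Say "yes" to save this title, or "next" for another one.',
  },
};

// Route an incoming intent according to the current conversation state
// (which a real Skill would keep in Alexa session attributes).
function routeIntent(state, intentName) {
  const handlers = STAGES[state] || {};
  const handler = handlers[intentName];
  if (handler) return handler();
  // Unexpected (possibly misrecognised) intents get re-routed to a
  // context-specific reprompt instead of failing outright.
  const help = handlers['AMAZON.HelpIntent'];
  return `Sorry, I did not get that. ${help ? help() : 'Please try again.'}`;
}
```

Because the reprompt is looked up in the same stage table, 'help' requests and misrecognitions both get answers that are relevant to where the user actually is in the conversation.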

What we learned

In the process of developing the itcher Skill, we learnt a great deal about VUI development and design best practices. Moreover, as a result of tailoring the voice interface to provide the best possible user experience, we ended up rethinking and improving parts of the itcher GUI as well (on Android, iOS and web).

What's next for itcher

We have plenty of ideas for expanding and improving the itcher VUI. We will start by adding TV show recommendations to the Skill, followed later by a voice-friendly mechanism for using the recommendation filters already available in the itcher app (e.g., to receive only movie recommendations available on Amazon Prime, Netflix, etc.).
