More people were downloading our UX Companion iOS and Android app than we had ever seen before. Clearly, there was huge demand for our app's mission: distilling the complex terms, theories and techniques of user experience (UX) into a dictionary-style app that digital professionals can access anywhere.

But our mission was far from over. At our award-winning creative agency Cyber-Duck, we're constantly asking ourselves how we can make UX more accessible for all. What else could we do to help users consume our content faster, with less friction? Were there other technologies we could leverage so that even more users could learn from our app?

We thought so. At our internal hackathon last summer, one of our goals was to learn more about the challenges of designing a screen-less user experience. The result of our intense 48-hour effort? A voice-activated Amazon Alexa Skill for our most successful product ever, UX Companion.

Why Alexa?

Amazon Echo and Echo Dot were big hits last Christmas. The voice-enabled speaker acts as an intelligent personal assistant and an automation hub for your home. Alexa, its voice-based persona, can be taught new skills to assist you with all sorts of tasks, from telling you the time and checking your calendar to controlling your lights at home and reading audiobooks.

This is what sparked our idea for UX Companion. Instead of searching through the app for a term… wouldn't it be easier just to ask Alexa to tell you all about it?

UX Companion on the Amazon Echo Dot

UX Companion: The Alexa Experience!

UX Companion is our glossary-based app for UX tools, terminology and theories. Since we launched it in 2014, it has been downloaded over 50,000 times in over 100 countries. Available on both iOS and Android, it has evolved into a hub that UX practitioners, designers and marketers turn to when they want to understand more about UX.

But we’re keen to keep expanding the UX Companion experience for our community. Two new terms are added every month. With the growth of voice search, we asked ourselves: how could we use voice-based interaction to improve the experience of our app? The answer was speed!

Instead of opening the app on your device, searching or scrolling the glossary for a term and reading through it, you can simply ask your question, hands-free, within range of an Alexa-enabled device. This means you can carry on with whatever you're doing whilst listening to Alexa tell you all about it. A designer's dream!

Challenges of Screen-less UX

One of the biggest challenges of designing an application with no UI is the removal of that all-important context that screens often provide. Instead, design becomes based on emotion and language.

When we wrote and curated each article for UX Companion, we did so on the understanding that it would be read by humans on a desktop or mobile screen. We hadn't designed it to be experienced without one. Therefore, we had to analyse the cadence of the way Alexa spoke and tweak words or phrases that didn’t quite sound right when read aloud.

We did this using Alexa's voice user interface (VUI). Alexa prompts the user to ask a question, for example:

  • “What is user experience?”
  • “Can you define cognitive load?”
  • “Tell me about eye tracking.”

We decided to keep responses brief and easily consumable. First, Alexa reads out a high-level summary of the term. Then she asks whether the user would like to hear about the topic in more depth, or move on to their next question.
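To make that flow concrete, here is a rough sketch of how a "summary first, detail on request" response could be expressed with the Alexa Skills Kit SDK v2 for Node.js. The intent name, slot name and glossary lookup are illustrative assumptions, not our production code:

```javascript
// Illustrative sketch only (not the actual UX Companion source): the
// "summary first, detail on request" flow with the Alexa Skills Kit SDK v2.
const Alexa = require('ask-sdk-core');

// Minimal stand-in for the glossary data the real app bundles
const glossary = {
  'cognitive load': {
    summary: 'Cognitive load is the amount of mental effort required to use a product.',
    detail: 'In more depth: ...'
  }
};

const DefineTermIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'DefineTermIntent';
  },
  handle(handlerInput) {
    // The slot value spoken by the user, e.g. "cognitive load"
    const term = Alexa.getSlotValue(handlerInput.requestEnvelope, 'term');
    const entry = glossary[term];

    if (!entry) {
      return handlerInput.responseBuilder
        .speak(`I don't know about ${term} yet. Try asking about another UX term.`)
        .reprompt('Which UX term would you like to hear about?')
        .getResponse();
    }

    // Remember the current term so a follow-up "yes" can fetch the detail
    const attributes = handlerInput.attributesManager.getSessionAttributes();
    attributes.currentTerm = term;
    handlerInput.attributesManager.setSessionAttributes(attributes);

    // High-level summary first, then offer more depth
    return handlerInput.responseBuilder
      .speak(`${entry.summary} Would you like to hear more about ${term}?`)
      .reprompt(`Would you like to hear more about ${term}, or ask about another term?`)
      .getResponse();
  }
};
```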

Speech Synthesis Markup Language (SSML) allowed us to target specific words and phrases and make sure they were pronounced correctly using phonetics. At the same time as ensuring the user understood Alexa, we had to make sure Alexa was flexible enough to understand the user and their varied utterances. For example, one user may refer to “User Experience” and another to its abbreviation, “UX”. We needed to make sure that if a user mentioned either version, Alexa recognised it and triggered the right intent.
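As an illustration of both ideas, the snippets below show a phonetic hint written in SSML and a fragment of an Alexa interaction model where “UX” is declared as a synonym of “user experience”. The intent, slot and type names are hypothetical; only the markup and schema follow Amazon's documented formats.

```xml
<!-- SSML sketch: a phonetic hint so Alexa says "U X" rather than guessing -->
<speak>
  <phoneme alphabet="ipa" ph="juː ɛks">UX</phoneme> stands for user experience.
</speak>
```

```json
{
  "interactionModel": {
    "languageModel": {
      "intents": [
        {
          "name": "DefineTermIntent",
          "slots": [{ "name": "term", "type": "UX_TERM" }],
          "samples": [
            "what is {term}",
            "can you define {term}",
            "tell me about {term}"
          ]
        }
      ],
      "types": [
        {
          "name": "UX_TERM",
          "values": [
            { "name": { "value": "user experience", "synonyms": ["UX"] } },
            { "name": { "value": "cognitive load" } },
            { "name": { "value": "eye tracking" } }
          ]
        }
      ]
    }
  }
}
```

With synonyms declared like this, Alexa's entity resolution can map “UX” back to the canonical value, so the skill only has to handle “user experience”.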

Overall, we discovered that the nuance with which language is spoken is one of the biggest challenges of designing without the context of a screen. On a screen, this challenge could easily be solved with good UX; without a UI, we had to think much more deeply about how we teach Alexa to understand the user.

Development

The UX Companion skill was developed using AWS Lambda and the Alexa Skills Kit SDK for Node.js. We built a simple model–view–controller (MVC) structure on top of this to improve readability and maintainability.
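As a rough sketch of what that structure can look like (the file layout and names here are illustrative, not our actual source), the Lambda entry point wires the controllers into the SDK, while the models and views live in their own modules:

```javascript
// Illustrative layout only; the real file structure may differ.
//
//   controllers/  – request handlers that route intents to actions
//   models/       – the glossary data and lookups
//   views/        – spoken phrasing and SSML templates
//
const Alexa = require('ask-sdk-core');
const speech = require('./views/speech'); // view: builds what Alexa says
const { DefineTermIntentHandler, YesAfterSummaryHandler } = require('./controllers/handlers'); // controllers

const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak(speech.welcome())           // e.g. "Welcome to UX Companion. Ask me about a UX term."
      .reprompt(speech.welcomeReprompt())
      .getResponse();
  }
};

// Lambda entry point: the SDK routes each request to the first handler whose canHandle() returns true
exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(LaunchRequestHandler, DefineTermIntentHandler, YesAfterSummaryHandler)
  .lambda();
```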

We defined the logic for the user journey through several states. This allowed us to manage the distinct behaviours required from Alexa at different stages of the application, even though users speak similar commands throughout (one way to express this is sketched below). Our vision is to switch the data source to a public API in the future.
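The sketch below shows one way to express that state-based routing with session attributes: the same spoken “yes” resolves differently depending on where the user is in the conversation. The state names and glossary module are assumptions for illustration, and the actual skill may structure its states differently:

```javascript
// Illustrative sketch: the same "yes" answer behaves differently depending
// on the state stored in session attributes. Names are assumptions.
const Alexa = require('ask-sdk-core');
const glossary = require('./models/glossary'); // assumed model module

const STATES = {
  MENU: 'MENU',                 // waiting for the user to ask about a term
  TERM_SUMMARY: 'TERM_SUMMARY'  // a summary was just read out; more depth on offer
};

const YesAfterSummaryHandler = {
  canHandle(handlerInput) {
    const { state } = handlerInput.attributesManager.getSessionAttributes();
    return state === STATES.TERM_SUMMARY
      && Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.YesIntent';
  },
  handle(handlerInput) {
    const attributes = handlerInput.attributesManager.getSessionAttributes();
    const entry = glossary[attributes.currentTerm]; // set when the summary was read

    // Drop back to the menu state once the full description has been spoken
    attributes.state = STATES.MENU;
    handlerInput.attributesManager.setSessionAttributes(attributes);

    return handlerInput.responseBuilder
      .speak(`${entry.detail} What would you like to learn about next?`)
      .reprompt('You can ask me about another UX term, or say stop to finish.')
      .getResponse();
  }
};
```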

The future

It's an exciting time! We strongly believe that designers, developers and marketers should be exploring how they can use both voice and motion to engage with their customers, if they aren't already.
