We wanted to bring a voice assistant to the Pebble smartwatches we wear all day and actually use it in everyday life. Amazon recently opened the Alexa Voice Service API for use with third-party and open-source devices, so we decided to bring Alexa to the watch as an app.

Upon opening the app, a microphone prompt asks you to speak your question, which Pebble's built-in dictation captures and transcribes to text. That text is relayed through the paired smartphone to an AWS server where our backend is hosted.
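
As a rough sketch of that hand-off, assuming a small Flask service on the AWS side (the route, field names, and pipeline stub below are illustrative, not necessarily what we used):

```python
# Minimal sketch of the backend entry point, assuming a small Flask app on AWS.
# Route and field names are placeholders for illustration.
from flask import Flask, request, jsonify

app = Flask(__name__)

def handle_question(text):
    # Placeholder for the text -> speech -> Alexa -> speech -> text pipeline
    # sketched further below.
    return "Pipeline not wired up in this sketch."

@app.route("/ask", methods=["POST"])
def ask():
    # The phone app POSTs the watch's dictated question as plain text.
    question = request.get_json(force=True).get("text", "")
    answer = handle_question(question)
    # The phone relays this string back to the watchface for display.
    return jsonify({"response": answer})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```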

Alexa's API, however, only accepts audio, while Pebble only supplies text, so we needed a converter to bridge the two platforms. We used IBM Watson (hosted on Bluemix) to read the text aloud and then sent the resulting audio file to the Alexa Voice Service.
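
A minimal sketch of that conversion hop, assuming the Bluemix-era Watson Text to Speech REST endpoint; the credentials, the AVS URL, and the request layout below are simplified placeholders rather than our exact implementation:

```python
# Sketch of turning Pebble's text into audio for Alexa.
# Credentials, the AVS endpoint, and the multipart layout are placeholders.
import requests

WATSON_TTS_URL = "https://stream.watsonplatform.net/text-to-speech/api/v1/synthesize"
WATSON_USER, WATSON_PASS = "username", "password"  # Bluemix service credentials

def text_to_speech(text):
    """Ask Watson to read the question aloud and return WAV bytes."""
    resp = requests.post(
        WATSON_TTS_URL,
        auth=(WATSON_USER, WATSON_PASS),
        params={"accept": "audio/wav"},
        json={"text": text},
    )
    resp.raise_for_status()
    return resp.content

def send_to_alexa(audio_bytes, access_token):
    """Forward the synthesized audio to the Alexa Voice Service.
    AVS_URL and the upload format are simplified stand-ins; the real
    speechrecognizer request also carries a JSON metadata part."""
    AVS_URL = "https://access-alexa-na.amazon.com/v1/avs/speechrecognizer/recognize"
    resp = requests.post(
        AVS_URL,
        headers={"Authorization": "Bearer " + access_token},
        files={"audio": ("request.wav", audio_bytes, "audio/wav")},
    )
    resp.raise_for_status()
    return resp.content  # Alexa's spoken answer, as audio
```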

Alexa's audio response is also sent back through IBM Watson, where it is converted to a supported audio format and then transcribed into a text string. Each question and answer passes through several conversion steps, yet the reply appears on the Pebble watchface seamlessly, with no need for an actual speaker.
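
The return trip could look roughly like this, again assuming the Bluemix-era Watson Speech to Text REST API and that Alexa's reply has already been transcoded to WAV (for example with ffmpeg), which is glossed over here:

```python
# Sketch of turning Alexa's spoken reply back into text for the watchface.
# Credentials are placeholders; the reply is assumed to already be WAV.
import requests

WATSON_STT_URL = "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"
WATSON_USER, WATSON_PASS = "username", "password"  # Bluemix service credentials

def speech_to_text(wav_bytes):
    """Send Alexa's answer audio to Watson and return the transcript string."""
    resp = requests.post(
        WATSON_STT_URL,
        auth=(WATSON_USER, WATSON_PASS),
        headers={"Content-Type": "audio/wav"},
        data=wav_bytes,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    # Join the best alternative from each recognized segment.
    return " ".join(
        r["alternatives"][0]["transcript"].strip()
        for r in results if r.get("alternatives")
    )
```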

Other notable features include the ability to log in and link the user's Amazon account, providing the same personalized experience Alexa is known for. The Pebble is recognized as one of the user's Alexa devices and can be configured through the Alexa phone and web apps. It can even use skills and tell jokes, just like Alexa normally does.

Since we cracked a workaround for Alexa's audio-only interface, we next want to use it to build a text-messaging-based assistant. Such an assistant would work without an internet connection on the user's end, even on phones that are not internet enabled.
