I entered the first version of this skill in the Amazon Alexa Skill Challenge. It could account-link the skill to the voicemail system of a Deutsche Telekom landline access in Germany. It announced the number of voice messages, and a user could query an individual message: the skill made the metadata announcement and played the message with the audio player. Then the session ended, because of a technical limitation of the audio player. So there was a natural demand to query all messages at once and not have to restart the skill after hearing each message.

What it does

On devices with a screen, the skill now also renders a touch interface for controlling the voicemail. To stay in line with that, it uses the video component and APL commands to play a voicemail message without leaving the UI by switching to the audio player. Besides selecting a message by touch, you can select it by its ordinal number; this is just the non-screen behavior adapted to APL. Additionally, APL command chaining enables me to play all voicemail messages WITH their Alexa voice announcements, by inserting speak items in between. This was not possible before, because the audio player does not allow intercepting the elements of a playlist.
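
The "play all" chain can be sketched as an APL Sequential command that alternates SpeakItem (for the Alexa announcement) with PlayMedia (for the recording on the hidden video component). This is a minimal illustration, not the skill's actual code: the component IDs and URLs are placeholders, and the SpeakItem targets would additionally need a speech transformer in the data source.

```json
{
  "type": "Sequential",
  "commands": [
    { "type": "SpeakItem", "componentId": "announcement1" },
    { "type": "PlayMedia", "componentId": "hiddenVideo",
      "audioTrack": "foreground",
      "source": "https://example.com/voicemail/message1.mp3" },
    { "type": "SpeakItem", "componentId": "announcement2" },
    { "type": "PlayMedia", "componentId": "hiddenVideo",
      "audioTrack": "foreground",
      "source": "https://example.com/voicemail/message2.mp3" }
  ]
}
```

With `audioTrack` set to `foreground`, each PlayMedia blocks the sequence until the recording finishes, so the next announcement only starts afterwards.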

How I built it

The APL document includes a hidden video component, which is started with the URL of a voicemail message. For a consistent UI, I swap the image showing a play button for an image showing a pause button, and I hook into the video component's events to switch it back when playback ends (for this, a global index is stored in the document).
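
A minimal sketch of what such a hidden video component could look like. The component IDs are placeholders, and the icon swap is illustrated here via opacity toggling of two stacked images rather than the skill's actual image switch:

```json
{
  "type": "Video",
  "id": "voicemailPlayer",
  "width": 0,
  "height": 0,
  "audioTrack": "foreground",
  "source": "${payload.voicemail.messageUrl}",
  "onEnd": [
    { "type": "SetValue", "componentId": "pauseButton",
      "property": "opacity", "value": 0 },
    { "type": "SetValue", "componentId": "playButton",
      "property": "opacity", "value": 1 }
  ]
}
```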

Challenges I ran into

Hooking into the video component breaks the sequence of the custom command for playing all messages, because of the commands attached to the video component's events. Therefore I added a second hidden video component for "play all", and I switch the images inside the command sequence itself, without any commands hooked to events.
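
The idea can be sketched roughly like this: the second video component carries no event handlers, and the icon updates are chained inline in the sequence instead (all IDs and URLs are placeholders of my own):

```json
{
  "type": "Sequential",
  "commands": [
    { "type": "SetValue", "componentId": "playButton",
      "property": "opacity", "value": 0 },
    { "type": "PlayMedia", "componentId": "playAllVideo",
      "audioTrack": "foreground",
      "source": "https://example.com/voicemail/message1.mp3" },
    { "type": "SetValue", "componentId": "playButton",
      "property": "opacity", "value": 1 }
  ]
}
```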

I also ran into an APL-A bug on pure speakers (without a screen). I built the feature with an alternative APL-A implementation as well, but that code only runs on screen devices (where I do not need it) and fails on pure speakers. So I can't make this feature available on pure speakers (see "What I learned").

Accomplishments that I'm proud of

The pre-existing functionality based on the audio player is partly replaced on screen devices, which improves the behavior there, while the old functionality is still available on pure speakers.

What I learned

How to use APL commands and transformers, and their limitations. On Echo devices without a screen in particular, it is not possible to replace the feature with APL-A. First, the 240-second limit would mean counting the lengths of the messages, making bundles, and asking at the end of each bundle whether the next pile should be played. Furthermore, there seems to be a bug in APL-A that breaks playback on real devices without screens when an externally hosted MP3 is followed by the next announcement. The APL-A code works in the simulator, in the APL-A editor, and on real devices with screens (tested on Echo Show 5 and Echo Show 2), but on real devices without a screen (tested on Echo 1st gen, Echo Studio, Echo Dot 1st gen, and Echo Dot with Clock 3rd gen) it ends in an infinitely spinning blue circle :-(
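
For reference, the alternative APL-A implementation follows this pattern: a Sequencer alternating Speech and Audio components. The texts and URLs below are placeholders; it is exactly the step from an externally hosted Audio source to the following Speech item that hangs on screenless devices:

```json
{
  "type": "APLA",
  "version": "0.91",
  "mainTemplate": {
    "item": {
      "type": "Sequencer",
      "items": [
        { "type": "Speech", "content": "First message, yesterday at 2 pm." },
        { "type": "Audio", "source": "https://example.com/voicemail/message1.mp3" },
        { "type": "Speech", "content": "Second message, today at 9 am." },
        { "type": "Audio", "source": "https://example.com/voicemail/message2.mp3" }
      ]
    }
  }
}
```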

What's next for SprachBox:

The skill already supports personalization for different voicemail accounts by matching them to the user's voice. It would be great to support the account linking directly via app-linking with my iOS app, which supports the same use cases.

Built With
