Animal Categories is a skill that introduces the general animal categories, with a short explanation and quality images of a few animals belonging to each category. It is a voice-first skill that can serve as an educational reference on the Echo Dot, Echo Show, Echo Spot, and other medium to large Alexa-enabled devices. Animal categories are best explained with images: images paired with short, concise descriptions make it easy for kids to understand the content and take interest in it. With APL and its multimodal abilities, Alexa can display images and text and provide more detail about each animal category, with examples, than is possible with voice alone.
Animal Categories presents the various categories with examples and images. Upon selecting a category from Mammals, Fishes, Reptiles, Amphibians, Birds, and Bugs, the skill not only describes that category but also gives details about the habitat, characteristics, and feeding habits of particular animals within it, accompanied by quality images. The images in the main category view play the animal's sound on touch or click. The skill also provides hints prompting the user to select the next animal or category, and shows two slides per animal that describe the animal alongside quality images and the necessary hints.
The skill is built on Amazon AWS, with the images stored in S3 buckets. The Lambda functions that retrieve images, route user utterances to the right intents, and implement the skill logic are written in Node.js. The skill uses APL documents and their multimodal abilities to display content optimally on small, medium, and large Alexa-enabled devices.
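As a rough sketch of how such a Lambda might work (the bucket name, key layout, datasource shape, and function names here are all hypothetical, not the skill's actual code), a handler can build the S3 URL for a category's image and package it into an APL RenderDocument directive:

```javascript
// Hypothetical S3 bucket and key layout -- illustrative only.
const BUCKET_URL = 'https://my-animal-skill-assets.s3.amazonaws.com';

// Build the public S3 URL for a category image, e.g. 'Mammals' -> .../images/mammals.png
function imageUrl(category) {
  return `${BUCKET_URL}/images/${category.toLowerCase()}.png`;
}

// Build an APL RenderDocument directive for a category screen.
// The datasource shape is this sketch's own convention; a real skill would
// ship a full APL document alongside the Lambda code.
function buildCategoryDirective(category, description) {
  return {
    type: 'Alexa.Presentation.APL.RenderDocument',
    token: 'categoryToken',
    document: { type: 'APL', version: '1.8', mainTemplate: { items: [] } },
    datasources: {
      categoryData: {
        title: category,
        text: description,
        imageUrl: imageUrl(category),
      },
    },
  };
}
```

An intent handler would pass the result to `handlerInput.responseBuilder.addDirective(...)` after checking that the requesting device actually supports the APL interface, so voice-only devices still get a spoken response.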
Though designing the skill in an optimal manner was quite a challenge, APL made it more convenient to display images, prompt the user with hints, and produce SSML-enabled speech and other text. Getting TouchWrapper to work inside the skill was also a challenge; after hours of going through and changing the code, I made it work. The skill remains primarily voice-enabled, so TouchWrapper is used comparatively sparingly. Studying the APL documentation and its features helped me design a skill rich in images and text. Knowing the audio, video, and image capabilities of APL documents, I can expand Animal Categories to include short videos explaining the habitat, sub-species, and characteristics of each animal in a category, and add more animals to each category.
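To show the pattern behind the touch-to-play-sound images, here is a minimal sketch of a TouchWrapper wrapping an Image and firing a SendEvent on press (expressed as a JavaScript object since APL documents are JSON; component IDs and the `'playSound'` argument are illustrative, not the skill's actual document):

```javascript
// Sketch of an APL component: an Image wrapped in a TouchWrapper that
// sends a UserEvent back to the skill's Lambda when pressed.
// The id scheme and the 'playSound' argument name are this sketch's own.
function touchableAnimalImage(animalName, imageSrc) {
  return {
    type: 'TouchWrapper',
    id: `${animalName}Touch`,
    onPress: {
      type: 'SendEvent',
      arguments: ['playSound', animalName],
    },
    item: {
      type: 'Image',
      source: imageSrc,
      width: '100%',
      height: '100%',
      scale: 'best-fit',
    },
  };
}
```

On the Lambda side, a handler for `Alexa.Presentation.APL.UserEvent` requests would read the event's `arguments` array to decide which animal's sound to play in its SSML response.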