I had the idea to build a skill for learning English and playing with my kids: guessing the name of an animal from its sound.

What it does

The skill asks the user to choose between jungle or farm animals. It then plays a sound and the user has to guess the corresponding animal (lion, gorilla, wolf, monkey...). If the user doesn't find the animal, the skill offers a tip. If the user finds it, the skill spells the name (to help learn the word), displays an image, plays a smart motion choreography, and gives a short explanation about the animal. At the end, the skill proposes to keep playing with jungle or farm animals, or to quit.
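As a rough illustration of this game loop, here is a minimal Python sketch of how a guess could be checked against a small JSON-style knowledge base. The layout, the field names (sound, tip, fact), the example soundbank URIs, and the check_guess helper are all hypothetical, not the skill's actual code.

```python
# Hypothetical knowledge base layout: in the real skill this would be loaded
# from the JSON knowledge-base file, with a "jungle" and a "farm" section.
SAMPLE_KB = {
    "jungle": {
        "lion": {
            # Soundbank URIs are illustrative; check the Alexa sound library for exact names.
            "sound": "soundbank://soundlibrary/animals/amzn_sfx_lion_roar_02",
            "tip": "People call it the king of the jungle.",
            "fact": "A lion's roar can be heard several kilometres away."
        },
        "monkey": {
            "sound": "soundbank://soundlibrary/animals/amzn_sfx_monkey_calls_01",
            "tip": "It loves climbing trees and eating bananas.",
            "fact": "Monkeys use facial expressions to communicate."
        }
    }
}

def check_guess(answer, guess, entry):
    """Return the reply: spelling plus a fun fact if correct, a tip otherwise."""
    if guess.strip().lower() == answer:
        spelling = ", ".join(answer.upper())          # e.g. "L, I, O, N"
        return f"Correct! {answer.title()} is spelled {spelling}. {entry['fact']}"
    return f"Not quite. Here is a tip: {entry['tip']}"

if __name__ == "__main__":
    answer, entry = "lion", SAMPLE_KB["jungle"]["lion"]
    print(check_guess(answer, "Lion", entry))   # right guess -> spelling and fact
    print(check_guess(answer, "tiger", entry))  # wrong guess -> tip
```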

How I built it

I built the skill using Alexa Conversations for the dialog, then used APLA for the audio, leveraging the Alexa sound library for the animal sounds. I added APL for displaying images and for the smart motion API. The Lambda is fully developed in Python, and the knowledge base is stored in JSON.
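To give a concrete picture, here is a minimal sketch (not the project's actual documents) of how the Lambda could hold an APLA audio response as a plain Python dict: an Audio component pointing at an Alexa sound library URI, followed by a Speech component asking the question. The datasource field names and the example soundbank URI are assumptions for illustration.

```python
import json

# Hypothetical APLA document: play the animal sound from the Alexa sound
# library, then ask the question. The skill's real documents may differ.
APLA_AUDIO_DOC = {
    "type": "APLA",
    "version": "0.91",
    "mainTemplate": {
        "parameters": ["payload"],
        "item": {
            "type": "Sequencer",
            "items": [
                # The sound URI comes from the JSON knowledge base via the datasource.
                {"type": "Audio", "source": "${payload.animal.sound}"},
                {"type": "Speech", "content": "Which animal makes this sound?"}
            ]
        }
    }
}

def build_datasource(entry):
    """Build the APLA datasource from one knowledge-base entry (illustrative layout)."""
    return {"payload": {"animal": {"sound": entry["sound"]}}}

if __name__ == "__main__":
    # Example soundbank URI; check the Alexa sound library for the exact name.
    entry = {"sound": "soundbank://soundlibrary/animals/amzn_sfx_lion_roar_02"}
    print(json.dumps({"document": APLA_AUDIO_DOC,
                      "datasources": build_datasource(entry)}, indent=2))
```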

Challenges I ran into

I started learning skill development a few months ago and had to learn everything to build this skill. Understanding APL development and the use of the smart motion API was not easy... The Slack community has been very helpful.

Accomplishments that I'm proud of

Getting to grips with APL, APLA, and the smart motion API has been a real achievement in my learning journey.

What I learned

Everything! From the dialog to the audio and display parts.

What's next for animal finder

I would like to add new animals and more user interactions.

Built With

Alexa Conversations, APL, APLA, smart motion API, AWS Lambda, Python, JSON
