Two years ago I developed a home fitness mobile app. My original idea was to create a voice-based user experience, so that users could enjoy their workout without tapping the smartphone screen during each exercise. The final product was pretty good and my few users liked it, but the mobile fitness market was too challenging and I was working alone, so I decided to give up the project. A few months ago I discovered the Amazon Echo, and it was love at first sight, as I am a big fan of conversational technologies. I thought it was the perfect platform for my app, so I decided to develop the HiFit skill for Alexa.

What it does

HiFit turns Alexa into a smart personal trainer, providing a bodyweight training plan that users can follow at home, without any special equipment. HiFit's workouts are designed to tone the entire body and are suitable for everyone: men or women, young or middle-aged. With version 2.0 it is also possible to choose a workout for a single muscle group, like abs or glutes.

How I built it

I am a Python ninja, having started writing Python code when I was a kid, so I wanted to use it to develop HiFit. I found John Wheeler's great Flask-Ask project, and it was perfect for the task. After playing with the Alexa Skills Kit for a bit, I moved on to designing the conversational interface. I immediately noticed that it would be quite complex, so I prototyped it with a free online diagramming tool. I used a fully serverless architecture, combining AWS Lambda to host the code, Amazon DynamoDB to store training sessions, and Amazon S3 to host JSON-encoded workouts and media files. Seven days later I had a working skill, but the workouts were still missing. I am not a qualified personal trainer and cannot create them myself, so I contacted a functional training expert who agreed to provide the workouts for the skill.
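To give an idea of how JSON-encoded workouts can drive a voice skill, here is a minimal sketch. The field names and exercises are my own illustration, not HiFit's real schema; in production the JSON document would be fetched from S3 rather than defined inline.

```python
import json

# Hypothetical example of a JSON-encoded workout like the ones stored
# on S3. The schema below is invented for illustration only.
WORKOUT_JSON = """
{
  "name": "Full body beginner",
  "exercises": [
    {"name": "squats", "reps": 12},
    {"name": "push ups", "reps": 10},
    {"name": "plank", "seconds": 30}
  ]
}
"""

workout = json.loads(WORKOUT_JSON)

def exercise_speech(exercise):
    """Turn one exercise entry into the sentence Alexa would speak."""
    if "reps" in exercise:
        return "Do {} {}.".format(exercise["reps"], exercise["name"])
    return "Hold a {} for {} seconds.".format(
        exercise["name"], exercise["seconds"])

# Build the spoken line for each exercise in the plan.
lines = [exercise_speech(e) for e in workout["exercises"]]
```

Keeping workouts as plain JSON on S3 means new training plans can be published without touching the Lambda code at all.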

Challenges I ran into

The development part was pretty easy, as the Skills Kit is well documented and John Wheeler made it even simpler with Flask-Ask. The most difficult part was designing the Voice User Interface (VUI). I already had experience with conversational user interfaces (CUIs), as my portfolio includes bots for Telegram and Facebook Messenger, but I found many differences between using voice rather than text as output. Unlike a chat, in a vocal interaction the user hears a message only once (maybe twice, using Alexa's reprompt). Therefore the message must be clearly understandable, particularly if it requires a response from the user.
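Because the user may hear a message only once, the reprompt should repeat just the actionable question rather than the whole explanation. A minimal sketch of the Alexa response shape that carries both (roughly what Flask-Ask's `statement()`/`question()` helpers emit; the speech texts here are invented examples):

```python
import json

def build_response(speech_text, end_session=True, reprompt_text=None):
    """Assemble a minimal Alexa-style response body by hand."""
    response = {
        "outputSpeech": {"type": "PlainText", "text": speech_text},
        "shouldEndSession": end_session,
    }
    if reprompt_text is not None:
        # The reprompt is spoken only if the user stays silent, so it
        # should contain just the question that needs an answer.
        response["reprompt"] = {
            "outputSpeech": {"type": "PlainText", "text": reprompt_text}
        }
    return {"version": "1.0", "response": response}

resp = build_response(
    "Next up: push ups. Say ready when you want to start.",
    end_session=False,
    reprompt_text="Are you ready for push ups?")
print(json.dumps(resp, indent=2))
```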

Accomplishments that I'm proud of

Since the first week after release I noticed a high rate of completed workouts, much higher than what I was used to seeing with my previous fitness app and bot. Being aware that hundreds of people were using something I created to take care of their health gave me enough motivation to carry on with the project.

What I learned

I learned how to use a serverless architecture with AWS Lambda and DynamoDB, reducing the system administration work to zero. This is very important for those who, like me, work alone. I also learned how to design a clear and consistent VUI, avoiding bottlenecks and always trying to make it feel natural, creating the illusion of a real conversation.
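In practice, going serverless means the whole skill boils down to a single function that Lambda invokes once per request, with no server to administer. A rough sketch of such an entry point (a hypothetical handler with invented responses, not HiFit's actual code):

```python
# Minimal sketch of an AWS Lambda entry point for an Alexa skill.
# Lambda calls this function with the Alexa request as a plain dict.
def lambda_handler(event, context):
    request_type = event.get("request", {}).get("type", "")
    if request_type == "LaunchRequest":
        text = "Welcome back! Ready to start today's workout?"
        end_session = False
    elif request_type == "SessionEndedRequest":
        text = "Good job, see you tomorrow!"
        end_session = True
    else:
        # Intent dispatch and DynamoDB session lookups would go here.
        text = "Sorry, I did not understand that."
        end_session = False
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

# Simulate Lambda invoking the handler with a launch request.
result = lambda_handler({"request": {"type": "LaunchRequest"}}, None)
```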

What's next for HiFit

A good training program should be paired with a good diet, but many people don't have the knowledge to put one together. That is why one of my future goals is to add short, concise nutritional tips at the end of each training session. Moreover, last year I developed an adaptive workout algorithm for my app, which creates training plans that perfectly fit the needs of each specific user. My aim is to improve it and make it the heart of HiFit. I am also working on a new skill, building on the experience and the work I already did for HiFit. I named it HiYoga and, as the name suggests, it will turn Alexa into a yoga trainer.
