We were inspired by the pose estimation model in alwaysAI's API and saw a lot of potential in tracking movements.
Our app allows users to attempt yoga poses and compares their forms with those of professionals. It then tracks their progression to help them improve.
We built it using the alwaysAI pose estimation API to compare a person in a live video feed against a person in a reference picture.
It was difficult to work with and access our data, as most of the processing was done on a Raspberry Pi.
We are proud that we were able to get a working Python server and design an algorithm to compare the skeleton of a human in a live video frame to someone in a picture.
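A simple way to sketch that comparison is to measure joint angles, which are invariant to the person's position and scale in the frame, and score how closely the live pose's angles match the reference photo's. This is a minimal illustration, not our exact algorithm: the keypoint names and the `pose_similarity` scoring function below are hypothetical stand-ins for whatever keypoints the pose estimation model returns.

```python
import math

# Hypothetical joint definitions: each joint angle is formed by three
# named keypoints (the middle one is the vertex). Real keypoint names
# depend on the pose estimation model in use.
JOINTS = {
    "left_elbow": ("left_shoulder", "left_elbow", "left_wrist"),
    "right_elbow": ("right_shoulder", "right_elbow", "right_wrist"),
    "left_knee": ("left_hip", "left_knee", "left_ankle"),
    "right_knee": ("right_hip", "right_knee", "right_ankle"),
}

def angle_at(a, b, c):
    """Angle in degrees at point b, formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def pose_similarity(live, reference):
    """Score two poses 0..1 by average joint-angle agreement.

    Each pose is a dict mapping keypoint name -> (x, y). Joints whose
    keypoints are missing from either pose are skipped.
    """
    diffs = []
    for a, b, c in JOINTS.values():
        if all(k in live for k in (a, b, c)) and \
           all(k in reference for k in (a, b, c)):
            diffs.append(abs(angle_at(live[a], live[b], live[c]) -
                             angle_at(reference[a], reference[b], reference[c])))
    if not diffs:
        return 0.0
    # Map the mean angular error (0..180 degrees) onto a 0..1 score.
    return 1.0 - (sum(diffs) / len(diffs)) / 180.0
```

For example, a perfectly straight live arm compared against a reference arm bent at 90 degrees differs by 90 degrees at the elbow, which this sketch scores as 0.5; an identical pose scores 1.0.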
We learned a lot about networks, working with APIs, and using the terminal to run multiple scripts as one cohesive program.
We hope to fix our remaining bugs and move onto better hardware that processes frames at a faster rate. In addition, we want to extend the yoga experience into a guided tutorial rather than just showing the user poses to follow.