Inspiration
The idea of turning activities we used to do elsewhere into something that can be comfortably done at home has become extremely relevant due to recent events. With people staying indoors, health has also become a major concern. On top of that, a big reason people skip healthy habits like exercising regularly and cooking at home is inconvenience. AutoPT addresses both problems by using machine learning to make the experience as smooth as possible.
What it does
There are two parts to this. AutoPT Exercise is an online personal trainer that lets you browse a library of exercises and uses pose recognition to give you real-time feedback on them; only a webcam is needed to use the website. On top of the pose data, the website also implements a basic gesture recognition system so you can do things like skip an exercise without walking back to the computer (a rough sketch of this idea is shown below). The exercise library is generated automatically from YouTube videos, making it easy to expand. AutoPT Exercise aims to help people improve their exercises through more accurate form and better pacing, and to guide beginners through exercises they might otherwise never have learned.
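As an illustration of how a gesture layer can sit on top of pose keypoints, here is a minimal sketch. It assumes COCO-style keypoint indices and a simple "hold your wrist above your nose for several frames" rule; the actual gesture rules in AutoPT may differ.

```python
# Hypothetical sketch of a pose-based "skip exercise" gesture.
# Assumes keypoints arrive as (x, y, confidence) triples using COCO-style
# indices (0 = nose, 4 = right wrist); AutoPT's real rules may differ.
NOSE, RIGHT_WRIST = 0, 4
HOLD_FRAMES = 15  # roughly half a second at 30 fps

class SkipGestureDetector:
    def __init__(self, hold_frames=HOLD_FRAMES, min_conf=0.3):
        self.hold_frames = hold_frames
        self.min_conf = min_conf
        self.count = 0

    def update(self, keypoints):
        """keypoints: list of (x, y, confidence); returns True once the
        wrist has stayed above the nose long enough to count as 'skip'."""
        nose, wrist = keypoints[NOSE], keypoints[RIGHT_WRIST]
        visible = nose[2] > self.min_conf and wrist[2] > self.min_conf
        # Image y grows downward, so "above" means a smaller y value.
        if visible and wrist[1] < nose[1]:
            self.count += 1
        else:
            self.count = 0
        return self.count >= self.hold_frames
```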
AutoPT Nutrition encourages people to make healthy meals at home by automatically generating nutritional information and recipes for their ingredients. The experience is designed to be extremely simple: all the user needs to do is hold up a food item, or its barcode, to their webcam and the website seamlessly recognizes it. Nutritional content is shown as a percentage of the recommended daily amount, along with an array of recipes that use those ingredients.
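To give a sense of the recipe lookup step, here is a minimal sketch using Spoonacular's search-by-ingredients endpoint. The endpoint and parameter names reflect my understanding of the public Spoonacular API, the API key and ingredient names are placeholders, and the daily-value percentages come from separate Spoonacular endpoints not shown here.

```python
# Minimal sketch: look up recipes for recognized ingredients via Spoonacular.
# The API key is a placeholder; endpoint and parameter names should be
# double-checked against the Spoonacular docs.
import requests

SPOONACULAR_KEY = "YOUR_API_KEY"  # placeholder

def recipes_for(ingredients, count=5):
    """ingredients: list of strings recognized from the webcam, e.g. ['apple']."""
    resp = requests.get(
        "https://api.spoonacular.com/recipes/findByIngredients",
        params={
            "ingredients": ",".join(ingredients),
            "number": count,
            "apiKey": SPOONACULAR_KEY,
        },
        timeout=10,
    )
    resp.raise_for_status()
    # Each result includes at least a recipe id and title.
    return [(r["id"], r["title"]) for r in resp.json()]

if __name__ == "__main__":
    for recipe_id, title in recipes_for(["apple", "oats"]):
        print(recipe_id, title)
```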
How I built it
For AutoPT Exercise, the pose recognition uses the multi-person pose estimation model described in https://arxiv.org/pdf/1611.08050.pdf, run through OpenPose. The scoring heuristic is cosine similarity over normalized keypoint vectors, which makes the comparison more robust to the user's position and scale in the frame. The exercise library preprocessing is also done with OpenPose, run on a separate system. For AutoPT Nutrition, the recipes and nutritional information come from the Spoonacular API. The underlying architecture of the website is built with Flask and Socket.IO.
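To make the scoring heuristic concrete, here is a minimal sketch of cosine-similarity pose comparison. The centering and L2 normalization steps are assumptions about a typical pose-matching pipeline rather than AutoPT's exact code.

```python
# Minimal sketch of scoring a user pose against a reference pose with
# cosine similarity. Centering and normalization are assumptions about
# a typical pipeline, not necessarily AutoPT's exact implementation.
import numpy as np

def pose_similarity(user_kpts, ref_kpts):
    """user_kpts, ref_kpts: arrays of shape (num_keypoints, 2) holding (x, y)
    coordinates from OpenPose. Returns a score in [-1, 1]; 1 = identical pose."""
    def normalize(kpts):
        v = np.asarray(kpts, dtype=float)
        v = v - v.mean(axis=0)          # center to remove translation
        v = v.flatten()
        norm = np.linalg.norm(v)
        return v / norm if norm > 0 else v

    u, r = normalize(user_kpts), normalize(ref_kpts)
    return float(np.dot(u, r))          # cosine similarity of unit vectors
```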
Challenges I ran into
Barcode scanning with just a webcam ended up being more difficult than I expected, and I had to experiment with image preprocessing techniques before the decoding algorithm worked reliably. Integrating pose recognition with the rest of the website was also a challenge, as I had to build a basic web streaming service that ties together the machine learning and image preprocessing steps.
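For the streaming piece, here is a minimal Flask + Socket.IO sketch: the browser emits webcam frames as base64-encoded JPEGs, the server decodes them, runs whatever processing is needed, and emits feedback back to the client. The event names and payload format are illustrative choices, not AutoPT's actual protocol.

```python
# Minimal sketch of a Flask + Socket.IO service that receives webcam frames,
# runs server-side processing, and pushes feedback back to the browser.
# Event names ('frame', 'feedback') and the base64-JPEG payload are
# illustrative, not necessarily what AutoPT uses.
import base64

import cv2
import numpy as np
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

def process_frame(frame):
    # Placeholder for the real work (pose estimation, barcode decoding, ...).
    return {"height": frame.shape[0], "width": frame.shape[1]}

@socketio.on("frame")
def handle_frame(data):
    """data: base64-encoded JPEG sent from the browser's webcam."""
    jpeg = base64.b64decode(data)
    frame = cv2.imdecode(np.frombuffer(jpeg, dtype=np.uint8), cv2.IMREAD_COLOR)
    if frame is not None:
        emit("feedback", process_frame(frame))

if __name__ == "__main__":
    socketio.run(app, host="0.0.0.0", port=5000)
```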
Accomplishments that I'm proud of
I'm happy with how the nutrition system turned out; the image/barcode recognition and the Spoonacular API came together into a really cohesive system.
What I learned
Through the development of the pose and food scanner systems, I strengthened my web development skills with Flask and Socket.IO.
What's next for AutoPT
I would like to improve the performance of the system, probably by revamping the web streaming service. I would also like to refine the pose heuristics so that the feedback is more accurate and fine-tuned.