Inspiration

During COVID-19, our ability to exercise and monitor our own workouts was (and still is) significantly hampered. Over the past year, many people spent the majority of their time indoors and away from the gym. With nearby gyms and fitness centers closed, we lost access to personal trainers who could motivate us and monitor our workouts and diet. To address this problem, we set out to build software that lets users design their own exercises and track their progress.

What it does

Our application mimics a fitness trainer, monitoring a person as they perform a workout of their choice. The AI analyzes the individual's posture throughout the exercise and keeps a live count of repetitions. All information, such as reps and sets, is displayed to the user through a user-friendly GUI.
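
Conceptually, the rep counter only needs to watch the user move between the two phases of an exercise (for example, arms up versus arms down in a jumping jack). The sketch below illustrates that idea in isolation; the phase labels and the function name are placeholders, not our exact implementation.

```python
# Minimal sketch of rep counting from per-frame phase labels (illustrative only).
# Assumes upstream code classifies each frame as the "up" or "down" phase of an
# exercise; the phase names are hypothetical.

def count_reps(frame_phases):
    """Count one repetition each time the user returns from "down" to "up"."""
    reps = 0
    previous = None
    for phase in frame_phases:  # e.g. ["up", "up", "down", "down", "up", ...]
        if previous == "down" and phase == "up":
            reps += 1
        previous = phase
    return reps

print(count_reps(["up", "up", "down", "down", "up", "down", "up"]))  # -> 2
```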

How we built it

Our entire project was built in Python. We built multiple windows using PyQt5 and Qt Designer and linked them together. In the backend, we used a publicly available GitHub repository containing a PyTorch implementation of the PoseNet model, which allows for real-time pose estimation. On top of that, we built a custom algorithm to extract meaningful information from the pose coordinates, letting us compare the user's pose to reference exercise poses in real time. For training data, we recorded ourselves doing jumping jacks, squats, and weight curls in many variations to account for differences in workout form. The algorithm then verifies that the user's pose is an acceptable form of the exercise based on the training data provided.
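
As a rough illustration of the GUI side, the sketch below shows one common way to link multiple PyQt5 screens through a QStackedWidget; the window contents and names are placeholders rather than our actual Qt Designer files.

```python
# Minimal sketch of linking multiple PyQt5 screens with a QStackedWidget.
import sys
from PyQt5.QtWidgets import (QApplication, QMainWindow, QPushButton,
                             QStackedWidget, QVBoxLayout, QWidget)

class Page(QWidget):
    """A placeholder screen with a single button that switches screens."""
    def __init__(self, label, on_click):
        super().__init__()
        button = QPushButton(label)
        button.clicked.connect(on_click)
        layout = QVBoxLayout(self)
        layout.addWidget(button)

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.stack = QStackedWidget()
        self.stack.addWidget(Page("Go to workout screen",
                                  lambda: self.stack.setCurrentIndex(1)))
        self.stack.addWidget(Page("Back to home screen",
                                  lambda: self.stack.setCurrentIndex(0)))
        self.setCentralWidget(self.stack)

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec_())
```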

Challenges we ran into

Throughout our project, we faced many obstacles. We had difficulty finding a good algorithm for real-time pose estimation in Python. Initially, we used a pose model directly through OpenCV, but it proved too slow and there was no way for us to run it on our GPUs. We then found a GitHub repository containing a PyTorch implementation of the PoseNet algorithm, which allows for real-time pose tracking. We also faced difficulties training our model: with minimal training data, we needed a few-shot method that would still perform well. In the end, we chose to calculate normalized pose embeddings and use distance as a measure of pose accuracy.
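
To make the embedding idea concrete, the sketch below shows one way a normalized pose embedding and its distance score could look. The keypoint indices, the hip-centred normalization, and the torso-length scaling are assumptions for illustration, not our exact feature set.

```python
# Sketch of a normalized pose embedding compared against recorded reference poses.
# Keypoints come from a pose estimator as (x, y) pairs; centring on the hips and
# scaling by torso length makes embeddings comparable across body sizes and
# camera distances. Keypoint indices below are placeholders.
import numpy as np

LEFT_HIP, RIGHT_HIP, LEFT_SHOULDER = 11, 12, 5  # hypothetical joint indices

def embed(keypoints):
    """keypoints: (num_joints, 2) array of (x, y) image coordinates."""
    kp = np.asarray(keypoints, dtype=float)
    hip_centre = (kp[LEFT_HIP] + kp[RIGHT_HIP]) / 2.0
    centred = kp - hip_centre                      # translation invariance
    torso = np.linalg.norm(kp[LEFT_SHOULDER] - hip_centre)
    return (centred / max(torso, 1e-6)).ravel()    # scale invariance

def pose_distance(keypoints, reference_embeddings):
    """Smallest distance to any recorded reference pose for an exercise."""
    query = embed(keypoints)
    return min(np.linalg.norm(query - ref) for ref in reference_embeddings)
```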

Accomplishments that we're proud of

Initially, we used a built-in OpenCV feature to detect the body keypoints used to determine poses in workouts. However, the OpenCV module ran at only 0.9 frames per second, far too slow to detect any movement patterns. This prompted us to search for a faster model, preferably one that could use the graphics card for faster training and predictions. After searching for about five hours, we found an implementation of PoseNet that we could run with NVIDIA's CUDA, and our program ran significantly faster. We are also proud of the accuracy of our pose-embedding algorithm, which indicates that the features we picked out are meaningful.
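
For reference, the sketch below shows the standard PyTorch pattern for moving inference onto the GPU when CUDA is available; `model` and `frame_tensor` are placeholders for the PoseNet network from the repository and a preprocessed video frame.

```python
# Standard PyTorch pattern for GPU inference when CUDA is available (sketch).
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def prepare(model):
    """Move the network to the GPU (if present) and switch to inference mode."""
    return model.to(device).eval()

def estimate_pose(model, frame_tensor):
    """Run one forward pass without tracking gradients."""
    with torch.no_grad():
        return model(frame_tensor.to(device))
```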

What we learned

We learned about PyQt5, a framework for building effective GUIs in Python. We also learned about the various machine learning models that perform pose estimation and the different convolutional architectures behind them.

What's next for Virtual Workout

We plan to add support for tracking and counting other exercises such as high knees, jump rope, and more. Eventually, we also hope to include a calorie counter that estimates the calories burned during each workout, calculated from the user's height and weight.
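
One possible way to estimate this (an assumption on our part, since the feature is not built yet) is a MET-based calculation from body weight and workout duration; the sketch below uses rough, illustrative MET values.

```python
# Sketch of a MET-based calorie estimate (illustrative values, not final design).
MET_VALUES = {"jumping jacks": 8.0, "squats": 5.0, "weight curls": 3.5}

def estimate_calories(exercise, weight_kg, minutes):
    """Calories burned per minute ~= MET * 3.5 * weight(kg) / 200."""
    met = MET_VALUES[exercise]
    return met * 3.5 * weight_kg / 200.0 * minutes

print(round(estimate_calories("squats", 70, 10), 1))  # ~61 kcal for a 70 kg user
```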

Built With
Python, PyQt5, Qt Designer, PyTorch, OpenCV, PoseNet, CUDA
