Inspiration

Most gym-goers, especially beginners, don't know whether they're performing their exercises correctly. For marginalized communities without access to coaches or personal trainers, this can be a real problem: not only might you miss out on the full benefit of an exercise, there's also a good chance you'll get injured. Take our team member Mohit, for example: he once hurt his shoulder doing pushups due to improper elbow positioning, simply because he didn't know the correct form. Beginners who don't know proper form may also feel too intimidated to go to a gym at all. We wanted to create an app that automatically summarizes your reps, giving a detailed breakdown of each one using a custom AI wireframe model, helping users improve their health and wellness and reach their personal fitness goals.

What it does

Our app, RepRadar, features a responsive frontend that adapts to any screen layout and lets you choose between four exercises: Pushups, Jumping Jacks, Squats, and Lunges. Once an exercise starts, the app streams video continuously over a WebSocket connection. The backend receives the WebSocket packets, processes each frame live with our custom cv2 wireframe model, and sends JSON results back over the same connection. The frontend then displays a detailed breakdown of overall performance, rep count, and per-rep information (score and additional notes).
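For illustration, a single result payload from the backend might look roughly like the JSON below (the field names here are a hypothetical sketch for this writeup, not our exact schema):

```json
{
  "exercise": "pushups",
  "rep_count": 4,
  "overall_score": 85,
  "reps": [
    {"rep": 4, "score": 83, "note": "Try to go lower"}
  ]
}
```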

How we built it

The frontend was built using React, with a responsive design accessible on all devices. It connects to a WebSocket server hosted on a custom domain and sends a continuous video stream, broken up into Base64-encoded images sent at 10 FPS. The frontend also sends a JSON packet containing the exercise type selected by the user, which determines which cv2 algorithm to run. On the backend, OpenCV (cv2) decodes the incoming Base64 images from the client, converts them to RGB, and renders the wireframe overlay. MediaPipe handles pose detection, body landmarks, and joint angle tracking/calculation. Our backend algorithm then uses those angle calculations to determine the rep count, compute a score, and generate notes.
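As a rough sketch of the per-frame geometry (simplified for this writeup; the helper names are ours, and the real backend also runs cv2.imdecode and draws the wireframe):

```python
import base64
import math

def decode_frame(b64_jpeg: str) -> bytes:
    """Decode one Base64 frame from the WebSocket into raw image bytes.
    In the full pipeline these bytes then go through cv2.imdecode and
    cv2.cvtColor (BGR -> RGB) before being handed to MediaPipe."""
    return base64.b64decode(b64_jpeg)

def joint_angle(a, b, c) -> float:
    """Angle in degrees at joint b formed by points a-b-c (e.g.
    shoulder-elbow-wrist), where each point is an (x, y) pair such as
    normalized MediaPipe landmark coordinates."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360.0 - ang if ang > 180.0 else ang

def left_elbow_angle(landmarks) -> float:
    """Pull shoulder/elbow/wrist out of a MediaPipe pose result, where
    `landmarks` is results.pose_landmarks.landmark from mp.solutions.pose."""
    import mediapipe as mp  # only needed when running the full pipeline
    P = mp.solutions.pose.PoseLandmark
    s, e, w = landmarks[P.LEFT_SHOULDER], landmarks[P.LEFT_ELBOW], landmarks[P.LEFT_WRIST]
    return joint_angle((s.x, s.y), (e.x, e.y), (w.x, w.y))
```

A straight arm gives roughly 180° at the elbow and the bottom of a pushup gives a much smaller angle, which is what the grading logic keys off.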

The algorithm runs continuously, grading each individual rep and generating notes, which are sent back to the frontend. The frontend then displays the results for each rep along with a summary of the entire exercise.
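The per-rep logic can be sketched as a small state machine with hysteresis thresholds (an illustrative simplification; the real thresholds, scoring, and notes differ per exercise, which is exactly what made tuning tricky):

```python
class RepCounter:
    """Simplified rep counter keyed off one joint angle (e.g. the elbow
    for pushups). A rep only counts after the angle crosses both the
    "down" and "up" thresholds, so noisy angle estimates near a single
    threshold can't double-count a rep. Default values are illustrative."""

    def __init__(self, down_thresh=90.0, up_thresh=160.0, ideal_angle=70.0):
        self.down_thresh = down_thresh  # angle that marks the bottom of a rep
        self.up_thresh = up_thresh      # angle that marks returning to the top
        self.ideal_angle = ideal_angle  # depth that earns a perfect score
        self.stage = "up"
        self.reps = []                  # one dict per completed rep
        self._min_angle = 180.0         # deepest point seen in the current rep

    def update(self, angle):
        """Feed one angle sample; returns a rep summary when a rep completes."""
        self._min_angle = min(self._min_angle, angle)
        if self.stage == "up" and angle <= self.down_thresh:
            self.stage = "down"
        elif self.stage == "down" and angle >= self.up_thresh:
            self.stage = "up"
            depth = (self.up_thresh - self._min_angle) / (self.up_thresh - self.ideal_angle)
            score = round(100 * max(0.0, min(1.0, depth)))
            note = "Good depth" if score >= 90 else "Try to go lower"
            rep = {"rep": len(self.reps) + 1, "score": score, "note": note}
            self.reps.append(rep)
            self._min_angle = 180.0
            return rep
        return None
```

The hysteresis gap between the two thresholds is what keeps jitter in the pose estimate from registering phantom reps; a movement that never reaches the down threshold simply isn't counted.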

Challenges we ran into

As this was our team's first hackathon, we had to overcome some obstacles when initially getting started with our project. Additionally, while we chose our app idea because it's something we are all genuinely passionate about, we had never used WebSockets before, nor made our own cv2 wireframe model.

We ran into difficulty connecting our frontend to our WebSocket server, which led to us hosting the server on a custom domain and using wss://. Connecting via a raw IP and ws:// wasn't possible: our frontend was served over https://, and browsers block insecure ws:// connections from https:// pages as mixed content.

We also ran into difficulties tuning our model's parameters: depending on the camera angle, the thresholds for a complete/incomplete exercise would produce inaccurate % scores and rep counts.

Accomplishments that we're proud of

We're proud that we were able to connect everything so that the AI detection works accurately. In particular, we were very happy to host the WebSocket server and correctly send a continuous video stream to and from it, especially since we had never worked with WebSockets before.

Most of all, we're proud that at our team's first hackathon we were able to create a fully functioning app, from frontend to web server to cv2 model, and have it work on all devices for a multitude of exercises. Over the last day we've made the model fairly robust, with accurate detections even from different angles.

What we learned

In terms of technologies, we learned a lot more about React frontend development, WebSocket servers, Flask, and how to make and fine-tune OpenCV/MediaPipe wireframe models.

We also learned good time management. We had never worked under a 24-hour deadline before, especially while also having to learn and implement new technologies, so making sure we were all on the same page was crucial. We each worked on different tasks but knew what the others were doing, so when it came time to combine everything, despite some initial difficulty, we were able to successfully integrate it all.

What's next for AI Exercise Form Grader

We're looking to expand the app by adding more exercises and by improving grading accuracy across a multitude of lighting conditions, lower resolutions, and more camera angles. Accuracy could be further improved by adding a 5-10s demo video for each exercise showing the ideal setup for capturing accurate results, including lighting conditions, angle, and distance.

We're also looking to turn this into a complete app available on the App Store and Google Play, and to add a history feature so users can see their long-term fitness form trajectory. This would involve hosting our model in the cloud, likely through a service like AWS or Vercel. Our end goal is for no one to feel intimidated by a lack of knowledge when starting their health and wellness journey. With our app, people can confidently improve their form.
