Inspiration

While brainstorming for the project, we wanted to address issues brought on by quarantine. Since many of us sit in a chair for most of the day, we first considered using computer vision to correct sitting posture. We then shifted from passive sitting posture to active posture correction during exercise, such as weightlifting, because we wanted a more interesting and challenging problem.

What it does

Our program uses a neural network to detect key points on the human body. From these points, we isolate the shoulder and elbow, then use trigonometry to calculate the angle between the user's upper arm and the y-axis of the image. The accuracy score is inversely related to this angle: the greater the angle, the lower the score. This makes sense because if your elbow drifts too far forward or too far back, your bicep-curl form is worse.
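
As a concrete illustration, here is a simplified sketch of that calculation, assuming the model returns the shoulder and elbow keypoints as (x, y) pixel coordinates; the 45-degree cutoff and the linear score mapping are illustrative choices, not fixed parts of the design:

```python
import math

def form_accuracy(shoulder, elbow, max_angle=45.0):
    """Score bicep-curl form from shoulder and elbow keypoints.

    shoulder, elbow: (x, y) pixel coordinates from the pose model.
    Returns a score in [0, 100]: 100 when the upper arm is vertical,
    falling linearly to 0 as the arm tilts max_angle degrees off vertical.
    """
    dx = elbow[0] - shoulder[0]
    dy = elbow[1] - shoulder[1]  # image y grows downward
    # Angle between the upper arm and the vertical (y) axis of the image.
    angle = math.degrees(math.atan2(abs(dx), abs(dy)))
    # Inverse relationship: a larger angle yields a lower accuracy score.
    return max(0.0, 100.0 * (1 - angle / max_angle))

print(form_accuracy(shoulder=(320, 180), elbow=(335, 300)))  # near-vertical arm, high score
```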

How we built it

For our front-end development, we used Wix to build the website and Snap! to make an interactive animation; if you look closely at the animation, you can see the resulting accuracy score. For our back-end development, we used OpenCV for image processing and PyCharm to run our deep learning model.
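
For reference, a minimal sketch of that back end, assuming an OpenPose-style Caffe model loaded through OpenCV's DNN module (the file names below are placeholders for the downloaded model files):

```python
import cv2

# Load an OpenPose-style pose-estimation model with OpenCV's DNN module.
net = cv2.dnn.readNetFromCaffe("pose_deploy.prototxt", "pose_iter_440000.caffemodel")

frame = cv2.imread("curl.jpg")
h, w = frame.shape[:2]

# Preprocess: scale pixels to [0, 1] and resize to the network's input size.
blob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (368, 368), (0, 0, 0),
                             swapRB=False, crop=False)
net.setInput(blob)
output = net.forward()  # shape: (1, num_keypoints, H, W) confidence heatmaps

keypoints = []
for i in range(output.shape[1]):
    heatmap = output[0, i, :, :]
    _, conf, _, point = cv2.minMaxLoc(heatmap)  # peak of each heatmap
    # Map heatmap coordinates back to the original image size.
    x = int(w * point[0] / output.shape[3])
    y = int(h * point[1] / output.shape[2])
    keypoints.append((x, y) if conf > 0.1 else None)
```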

Challenges we ran into

One of our original ideas was to use Adobe Animate for the interactive portion of the website. Due to technical difficulties, only one member of our team was able to run the software, and since none of us had experience with the program, development was slow. We also had trouble embedding it in our website. Eventually, we switched to Snap!, which we were familiar with and which was much easier to integrate into the site.

One of the biggest issues was getting the neural network to work in PyCharm. While there was clear documentation of how to run the model to find the posture landmarks, downloading the model was difficult because the download script required “sudo chmod” in the command line. Unfortunately, that is not possible on a Windows laptop, so we spent hours researching a way to download the models. Eventually, we came across the Cygwin tool, and after a few more hours of tinkering, we were able to successfully locate the posture landmarks.

After our program worked on still images, the next challenge was making it work on videos. Since our program took around 4.5 seconds to process a single image, we knew we would have to run a script that selected specific frames to account for this processing time. We wrote a program that could do this, and in theory it seemed great. However, when we tested it on a video, it did not work at all. After 3-4 hours of deliberation and thorough testing, the culprit turned out to be simple: the video file was in a format our tools could not read. Once we converted it, the script worked.
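
A minimal sketch of that frame-selection logic, assuming OpenCV's VideoCapture (the file name is a placeholder); note that cap.read() fails silently when the container or codec is unsupported, which is exactly the failure mode that cost us those hours:

```python
import cv2

PROCESS_TIME = 4.5  # seconds our pipeline needs per frame

cap = cv2.VideoCapture("curl.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
step = max(1, int(fps * PROCESS_TIME))  # analyze one frame per ~4.5 s of video

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:  # end of video, or an unreadable/unsupported file
        break
    if frame_idx % step == 0:
        pass  # run the pose model and scoring on this frame
    frame_idx += 1
cap.release()
```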

Accomplishments that we're proud of

We are proud of integrating a deep learning model into our code, creating an engaging and interactive website, and building a useful product with visible benefits.

What we learned

While working on our hack, we learned how to run deep learning models from a Python script, how to detect key landmarks in an image, and how to interpret them. In addition, we learned how to create an engaging website and how to write HTML to embed interactive models.

What's next for ExerHelp

Our current limitations are processing time, the scope of supported exercises, and our experience with web design. We would therefore like to use Amazon Web Services for faster processing; research, test, and implement more exercises; and learn how to connect our Python code to our website.

Built With

OpenCV, Python, Snap!, Wix
