Inspiration

Since the pandemic, millions of people worldwide have turned to online alternatives to replace public fitness facilities and other physical activities. At-home exercise has become widely popular, but there is no easy way to tell whether people are performing the exercises correctly, or whether they notice the potentially damaging bad habits they may have developed. Left unnoticed, those habits can keep straining and damaging their bodies. That is why we created Yudo.

What it does

Yudo is an exercise web app that uses TensorFlow pose detection and a custom-developed exercise-detection algorithm to help users improve their form while doing various exercises.

Once you open the web app and select your desired workout, Yudo provides a quick exercise demo video. The closer your form matches the demo, the higher your accuracy score. After you complete an exercise, Yudo provides feedback generated via ChatGPT to help you identify and correct the discrepancies in your form.

How we built it

We first developed the connection between TensorFlow and the live webcam stream via BlazePose and JSON. Each video frame is sent to TensorFlow, which returns a JSON object of node (keypoint) coordinates; we use those coordinates to draw the nodes onto a 2D canvas that updates every frame and is projected on top of the video element. The continuous flow of JSON data from TensorFlow also let us build our own data sets of what different plank forms look like. From those data sets we took the relative positions of the relevant nodes and derived mathematical formulas that score how closely a user's pose matches them.
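Below is a minimal sketch of what that detection-and-drawing loop can look like, assuming the TensorFlow.js pose-detection package with the BlazePose model; the function name `startPoseLoop`, the confidence cutoff, and the styling are illustrative rather than our exact code.

```typescript
// Minimal sketch: run BlazePose on a webcam <video> and draw the returned
// keypoints onto a <canvas> overlaid on the video, once per animation frame.
import * as poseDetection from '@tensorflow-models/pose-detection';
import '@tensorflow/tfjs-backend-webgl';

async function startPoseLoop(video: HTMLVideoElement, canvas: HTMLCanvasElement) {
  const ctx = canvas.getContext('2d')!;
  const detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.BlazePose,
    { runtime: 'tfjs', modelType: 'full' },
  );

  const render = async () => {
    // estimatePoses returns JSON-like keypoint objects: { x, y, score, name }
    const poses = await detector.estimatePoses(video);
    ctx.clearRect(0, 0, canvas.width, canvas.height);

    for (const kp of poses[0]?.keypoints ?? []) {
      if ((kp.score ?? 0) < 0.5) continue; // skip low-confidence nodes
      ctx.beginPath();
      ctx.arc(kp.x, kp.y, 5, 0, 2 * Math.PI);
      ctx.fillStyle = '#00e0ff';
      ctx.fill();
    }
    requestAnimationFrame(render);
  };
  requestAnimationFrame(render);
}
```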

After a discussion with Sean, an MLH member, we decided to integrate OpenAI into our project by having it provide feedback based on the quality of your plank form. We used the ExpressJS back end to handle requests to the AI-response endpoint, and Axios to send data back and forth between the front end and the back end. During development we also used nodemon, a tool that automatically restarts the server whenever the code changes.
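A minimal sketch of that endpoint, assuming the `express` and `openai` npm packages; the `/api/feedback` route, prompt wording, and model choice are illustrative, not necessarily what we shipped.

```typescript
// Illustrative Express back end: receive the exercise result from the front end
// and ask OpenAI for short feedback on the user's form.
import express from 'express';
import OpenAI from 'openai';

const app = express();
app.use(express.json());
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

app.post('/api/feedback', async (req, res) => {
  const { exercise, accuracy } = req.body; // sent from the React front end via Axios
  const completion = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: [{
      role: 'user',
      content: `Give short form-improvement feedback for a ${exercise} that scored ${accuracy}% accuracy.`,
    }],
  });
  res.json({ feedback: completion.choices[0].message.content });
});

app.listen(3001);
```

On the front end, a matching Axios call such as `axios.post('/api/feedback', { exercise: 'plank', accuracy: 87 })` fetches the generated feedback once the exercise ends.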

The front end was designed in Figma and Procreate, which gave us a framework to base our React components on. Since it was our first time using React and TensorFlow, it took a lot of trial and error to get the CSS and HTML elements to work with our React components.
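For illustration, here is roughly how a React component can stack the keypoint canvas over the live video; the component name, inline styles, and dimensions are ours, not the exact ones in Yudo.

```typescript
// Illustrative React component: the canvas is absolutely positioned so the
// drawn nodes sit directly on top of the webcam video element.
import { useEffect, useRef } from 'react';

export function WorkoutCamera() {
  const videoRef = useRef<HTMLVideoElement>(null);
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useEffect(() => {
    navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
      if (videoRef.current) videoRef.current.srcObject = stream;
      // startPoseLoop(videoRef.current!, canvasRef.current!) would be kicked off here
    });
  }, []);

  return (
    <div style={{ position: 'relative' }}>
      <video ref={videoRef} autoPlay playsInline width={640} height={480} />
      <canvas
        ref={canvasRef}
        width={640}
        height={480}
        style={{ position: 'absolute', top: 0, left: 0 }}
      />
    </div>
  );
}
```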

Challenges we ran into

  • Learning and implementing TensorFlow AI and React for the first time during the hackathon
  • Creating a mathematical algorithm that accurately measures the user's form while performing a specific exercise (see the sketch after this list)
  • Making visual elements appear and move smoothly on a live video feed
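To make the form-measurement bullet concrete, here is a rough sketch of the kind of formula we mean for the plank: score the body line by the angle at the hip. The keypoint names follow BlazePose, and the 180° target and 30° tolerance are illustrative values, not our tuned ones.

```typescript
type Point = { x: number; y: number };

// Angle (in degrees) at vertex b formed by the points a–b–c.
function jointAngle(a: Point, b: Point, c: Point): number {
  const ab = Math.atan2(a.y - b.y, a.x - b.x);
  const cb = Math.atan2(c.y - b.y, c.x - b.x);
  const deg = Math.abs((ab - cb) * (180 / Math.PI));
  return deg > 180 ? 360 - deg : deg;
}

// Map the shoulder–hip–ankle angle to a 0–100 accuracy score:
// 180° means a perfectly straight body line.
function plankScore(shoulder: Point, hip: Point, ankle: Point): number {
  const angle = jointAngle(shoulder, hip, ankle);
  const deviation = Math.abs(180 - angle); // degrees away from straight
  const tolerance = 30;                    // illustrative cutoff
  return Math.max(0, 100 * (1 - deviation / tolerance));
}
```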

Accomplishments that we're proud of

  • This is only our second hackathon for everyone except Darryl
  • Efficient and even work distribution among all team members
  • Creation of our own data set to accurately model a specific exercise
  • A visually appealing, mathematically accurate, working application!

What we learned

  • How to use TensorFlow AI and React
  • Practical applications of mathematics in computer science algorithms

What's next for Yudo

  • Implementation of more exercises
  • Faster and more accurate live video feed and accuracy score calculations
  • Provide live feedback for the duration of the exercise
  • Integrate a database for users to save their accuracy scores and track their progress

Built With

axios, blazepose, express.js, figma, openai, procreate, react, tensorflow.js
