Table 22 - Puppeteers
Team:
- Stephen Covell - machine learning software creation and research
- James Frost - integration between the machine learning, the HTTP API, and the camera feed
- Richard Wheatcroft - branding, pitch deck, research, and plank prop!
Inspiration
We are implementing new technology for pose recognition and gesture control to support a range of use cases. The posture recognition and checking was inspired by experiences at the gym, where it can be hard to know whether you are performing an exercise or lift correctly.
The main aim is to boost users' confidence, lower the chance of injury, and improve users' ability to exercise well and lift more.
Other use cases: the NHS (reducing emissions and helping the NHS go carbon neutral) and sign language.
What it does
Puppeteer explores how posture recognition and correction can be incorporated into new apps and use cases. We also support gesture control: for example, controlling a PowerPoint presentation by using your head or hand to move to the next slide, go back to the previous slide, or highlight part of a slide.
How we built it
We used computer vision, via MediaPipe and OpenCV, to track a person's posture. Taking the coordinates of the joints, we trained a model to recognise a user's posture and gestures. After a long period of model evaluation and improvement, we arrived at the models we have today. Part of this was deciding which parts of the data were relevant and handling the many errors that can occur.
To predict each gesture, the algorithm runs the landmark coordinates through the machine learning model, which attempts to classify the gesture.
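The pipeline above can be sketched as follows. This is a minimal illustration only: the gesture names, joint coordinates, and "trained" centroids are made up, and a simple nearest-centroid rule stands in for the project's actual model.

```python
import math

def flatten(landmarks):
    """Turn a list of (x, y) joint coordinates into one flat feature vector."""
    return [coord for point in landmarks for coord in point]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(landmarks, centroids):
    """Return the gesture whose stored centroid is closest to this frame."""
    features = flatten(landmarks)
    return min(centroids, key=lambda gesture: distance(features, centroids[gesture]))

# Toy "trained" centroids for two gestures (illustrative values only).
centroids = {
    "swipe_left":  [0.1, 0.5, 0.2, 0.5],
    "swipe_right": [0.9, 0.5, 0.8, 0.5],
}

frame = [(0.85, 0.5), (0.75, 0.5)]   # two joints near the right of the frame
print(classify(frame, centroids))    # -> swipe_right
```

In the real system, each frame's MediaPipe landmarks would replace the hand-written `frame` above.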
When the trained model is sufficiently confident in an action, it sends a POST request to a server, which triggers an action such as advancing a slide show or informing the user that their posture isn't correct.
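A confidence-gated POST of this kind could look like the sketch below. The endpoint URL, JSON field names, and threshold value are assumptions for illustration, not the project's actual API.

```python
import json
import urllib.request

CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off; the real value would be tuned

def build_event(gesture, confidence):
    """Return the JSON payload to POST, or None if the model isn't confident."""
    if confidence < CONFIDENCE_THRESHOLD:
        return None
    return json.dumps({"action": gesture, "confidence": confidence}).encode()

def send_event(payload, url="http://localhost:8000/gesture"):
    """POST the event to the listening server (e.g. the slide controller)."""
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

payload = build_event("next_slide", 0.93)
if payload is not None:
    pass  # send_event(payload) would fire the slide-show action
```

Gating on confidence before sending keeps noisy, low-certainty frames from spamming the server with spurious actions.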
The machine learning model had roughly 2,000 parameters to learn, building relationships such as the one between the elbow and the forearm. In addition, for each frame of the video, the model also tries to build a relationship between each part of the body and the movement.
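One concrete example of such a relationship is the angle at a joint. The sketch below derives an elbow angle from three (x, y) landmarks; the coordinates are made up, and this is an illustration of the idea rather than the project's actual feature set.

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (in degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# Hypothetical landmarks: upper arm vertical, forearm horizontal.
shoulder, elbow, wrist = (0.0, 1.0), (0.0, 0.0), (1.0, 0.0)
print(round(joint_angle(shoulder, elbow, wrist)))  # -> 90
```

A feature like this is useful for posture checking because the angle is invariant to where the person stands in the frame.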
Challenges we ran into
- The machine learning model was hard to train.
- Doing many planks for training is tiring!
- Developing a companion app was unrealistic within the 24 hours, so we scrapped it (but would have built it with more time).
- Classifying gestures is harder than expected.
- Managing time across five subprojects (web app, HTTP client, gesture recognition, head and hand swiping, pose recognition).
- Forming a group, as none of us knew each other, and then deciding on an idea together.
Accomplishments that we're proud of
We formed a team, met and worked with cool people, and combined many different technologies into a complete product.
What we learned
We learned how to train and use a machine learning model and how to integrate different pieces of software together.
What's next for Puppeteer
Expanding the number and types of gestures and postures that it recognises and corrects.
Look into the social impact, and build a more marketable "story" around Puppeteer.
Use Cases
Posture correction / Correct pose recognition:
- Exercising and lifting weights in the gym
- In sports, such as improving your golf swing
- Dancing to improve the execution of your moves