Motion's Creation: Bringing Precision to Yoga
Inspiration
While exploring potential ideas, MoveNet caught our attention with its ability to accurately track human movement. We recognized an opportunity to put wellness within reach for everyone through real-time feedback. By breaking down multiple barriers, Motion lets new segments of users receive personalized training feedback without the cost of hiring a personal trainer. Future updates to the application will ensure that anyone can benefit, regardless of language, income, age, or location.
Implementation
We integrated a camera feed into our platform that captures joint movements using MoveNet. To understand and analyze these movements, we use TensorFlow and PyTorch in our backend. Our approach involves two primary steps:
- Pose Prediction: Training a machine learning model to identify which pose a user is attempting.
- Pose Correction: Training a subsequent model to detect inaccuracies in the user's pose.
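The pose-prediction step could be sketched roughly as follows. This is a hypothetical illustration, not Motion's actual trained model: the pose labels, layer sizes, and the use of a small PyTorch feed-forward network over MoveNet's 17 keypoints (x, y, confidence per joint, so 51 inputs) are all assumptions.

```python
import torch
import torch.nn as nn

# Illustrative pose labels; the real application's set of poses is assumed.
POSES = ["downward_dog", "warrior_ii", "tree"]

class PoseClassifier(nn.Module):
    """Tiny feed-forward classifier over flattened MoveNet keypoints."""

    def __init__(self, num_poses: int):
        super().__init__()
        # 17 keypoints x (x, y, confidence) = 51 input features
        self.net = nn.Sequential(
            nn.Linear(51, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, num_poses),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = PoseClassifier(len(POSES))
keypoints = torch.rand(1, 51)  # stand-in for one MoveNet detection
logits = model(keypoints)
predicted = POSES[int(logits.argmax(dim=1))]
```

A real version would be trained on labeled keypoint sequences; the correction model could follow the same pattern, taking the predicted pose plus keypoints and flagging which joints deviate from the reference form.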
If a user's pose is deemed incorrect, our system uses OpenAI's GPT API to generate unique and personalized feedback, guiding them towards the correct form.
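The feedback step might look like the sketch below. The prompt wording, the model name, and the joint-error format are assumptions for illustration, not the project's actual prompts.

```python
def build_feedback_prompt(pose: str, errors: list[str]) -> str:
    """Turn detected form issues into a short prompt for the language model."""
    issues = "; ".join(errors)
    return (
        f"The user is attempting the yoga pose {pose}. "
        f"Detected form issues: {issues}. "
        "Give one short, encouraging cue to correct their form."
    )

def get_feedback(pose: str, errors: list[str]) -> str:
    # Imported here so the prompt helper above works without the SDK installed.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "user", "content": build_feedback_prompt(pose, errors)}
        ],
    )
    return resp.choices[0].message.content
```

Keeping prompt construction separate from the API call makes the feedback wording easy to test and iterate on without spending tokens.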
Challenges & Insights
Gathering diverse and representative training data posed a significant challenge. Individuals have varying arm lengths, stand at different distances from the camera, and face it at different orientations, and we wanted our system to handle all of them. Although MoveNet reliably captures joint data in diverse scenarios, training our initial model revealed a need for broader data. This realization led us to consider the many ways users might interact with our application, ensuring our model had a rich learning environment.
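One common way to reduce the variation described above is to normalize keypoints before training: centering them on the hip midpoint and scaling by torso length makes the features roughly invariant to the user's distance from the camera and body size. This is a generic preprocessing sketch (the index constants follow MoveNet's documented 17-keypoint ordering), not necessarily the approach Motion shipped.

```python
import numpy as np

# Joint indices in MoveNet's 17-keypoint output ordering.
LEFT_SHOULDER, RIGHT_SHOULDER, LEFT_HIP, RIGHT_HIP = 5, 6, 11, 12

def normalize_keypoints(kp: np.ndarray) -> np.ndarray:
    """Normalize a (17, 2) array of (x, y) image coordinates.

    Centers the skeleton on the hip midpoint and scales by torso length,
    so the same pose produces similar features regardless of how far the
    user stands from the camera.
    """
    hip_center = (kp[LEFT_HIP] + kp[RIGHT_HIP]) / 2
    shoulder_center = (kp[LEFT_SHOULDER] + kp[RIGHT_SHOULDER]) / 2
    torso = np.linalg.norm(shoulder_center - hip_center)
    return (kp - hip_center) / max(torso, 1e-6)
```

With this in place, a skeleton detected close to the camera and the same skeleton detected far away normalize to the same feature vector, which eases the burden on the training data.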
Built With
- javascript
- movenet
- neuralnetworks
- openai
- python
- pytorch
- react
- tensorflow
