Inspiration
It started with a simple observation: most people have no idea their body is slowly losing range of motion until something breaks down. A physiotherapist charges hundreds of dollars to do what is, at its core, a geometric measurement — comparing joint angles against population norms. We asked: why can't a browser do that?

The deeper inspiration came from the clinical literature. Studies like Gill et al. (2020) and the CDC's Joint ROM Study revealed something striking — the textbook "normal" values taught in medical schools (180° for shoulder abduction, 120° for hip flexion) are idealised maximums that most healthy adults never reach. Real normative data is age-stratified, sex-adjusted, and heavily context-dependent. No consumer health tool was using it. We decided to build one that did.
What it does
Our Mobility Assessment is a browser-based range-of-motion tool that uses a standard webcam (no wearables, no clinic, no special hardware) to measure three clinically significant joints:
- Shoulder abduction — raising the arm laterally to overhead
- Hip flexion — lifting the knee toward the chest
- Lumbar forward flexion — bending forward at the lower back
For each joint, the user performs a guided movement in front of their camera. The app extracts the joint angle in real time and classifies the result as good, average, or needs improvement. Finally, it generates an overall report and an AI-generated stretching routine personalised to the user's specific deficit profile.
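The classification step boils down to comparing a measured angle against per-joint thresholds. Here is a minimal sketch of how that might look in TypeScript; the threshold numbers are purely illustrative (real norms are age- and sex-stratified, as noted above):

```typescript
// Illustrative classification sketch. The threshold values below are
// placeholders, NOT the app's actual normative tables.
type Rating = "good" | "average" | "needs improvement";
type Joint = "shoulderAbduction" | "hipFlexion" | "lumbarFlexion";

interface JointNorm {
  good: number;    // degrees at or above which we rate "good"
  average: number; // degrees at or above which we rate "average"
}

// Hypothetical per-joint thresholds, in degrees.
const NORMS: Record<Joint, JointNorm> = {
  shoulderAbduction: { good: 150, average: 120 },
  hipFlexion: { good: 100, average: 80 },
  lumbarFlexion: { good: 60, average: 40 },
};

function classify(joint: Joint, angleDeg: number): Rating {
  const norm = NORMS[joint];
  if (angleDeg >= norm.good) return "good";
  if (angleDeg >= norm.average) return "average";
  return "needs improvement";
}
```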
How we built it
We connected the device's camera to the BlazePose model via TensorFlow.js's pose-detection library, which returns a set of body landmarks for each video frame. For each exercise, we filter those landmarks down to the joints that matter, compute the angle between the relevant limb segments, and assign that angle a category (good, average, or needs improvement) based on normative data from the clinical literature.
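Each measurement reduces to the angle at a vertex joint between two limb segments. A minimal sketch of that geometry (the 2D simplification and the keypoint shape are assumptions; BlazePose also provides a z coordinate):

```typescript
// Sketch of the core angle computation over BlazePose-style keypoints.
interface Keypoint {
  x: number;
  y: number;
}

// Angle in degrees at vertex B, formed by segments B->A and B->C.
function jointAngle(a: Keypoint, b: Keypoint, c: Keypoint): number {
  const v1 = { x: a.x - b.x, y: a.y - b.y };
  const v2 = { x: c.x - b.x, y: c.y - b.y };
  const dot = v1.x * v2.x + v1.y * v2.y;
  const mag = Math.hypot(v1.x, v1.y) * Math.hypot(v2.x, v2.y);
  // Clamp to [-1, 1] to guard against floating-point drift before acos.
  const cos = Math.min(1, Math.max(-1, dot / mag));
  return (Math.acos(cos) * 180) / Math.PI;
}

// e.g. shoulder abduction ≈ jointAngle(hip, shoulder, elbow)
```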
Challenges we ran into
Pose detection can be finicky: the detected landmark positions do not always behave the way we expect. Persisting user data without a database also turned out to be harder than we anticipated. Finally, different sources disagree on what a healthy range of motion is, so classifying the measured angles proved more subjective than we expected.
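For database-free persistence, one approach is the browser's localStorage. A minimal sketch of what that might look like; the storage key and record shape here are hypothetical, not our exact schema:

```typescript
// Hypothetical sketch of client-side persistence via localStorage.
interface AssessmentRecord {
  timestamp: number;
  joint: string;
  angleDeg: number;
  rating: string;
}

const STORAGE_KEY = "mobility-assessments"; // hypothetical key

function loadAssessments(): AssessmentRecord[] {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as AssessmentRecord[]) : [];
}

function saveAssessment(record: AssessmentRecord): void {
  const history = loadAssessments();
  history.push(record);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(history));
}
```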
What we learned
Pose estimation is remarkably capable, and surprisingly fragile. MediaPipe handles most body positions gracefully, but occlusion, camera angle, and clothing all degrade landmark confidence significantly. We learned to validate landmark visibility scores before accepting a reading, and to reject frames where confidence fell below a threshold of 0.7.

LLM output quality is highly prompt-sensitive. Early versions of our stretching-routine prompts produced generic, one-size-fits-all routines; only once the prompt included the user's specific deficit profile did the model produce a genuinely personalised program.
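The frame-gating logic described above is simple to express: accept a reading only when every landmark the measurement depends on clears the visibility threshold. A sketch, assuming a keypoint shape with a 0–1 confidence score:

```typescript
// Reject frames where any required landmark's visibility score
// falls below 0.7, per the lesson above. Keypoint shape is an assumption.
interface ScoredKeypoint {
  x: number;
  y: number;
  score?: number; // landmark visibility/confidence in [0, 1]
}

const MIN_VISIBILITY = 0.7;

function frameIsReliable(required: ScoredKeypoint[]): boolean {
  return required.every((kp) => (kp.score ?? 0) >= MIN_VISIBILITY);
}
```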
Built With
- blazepose
- groq
- next.js
- react
- typescript