StrainSense

Inspiration

We were inspired by a problem that feels small at first and then slowly takes over people’s lives: bad posture, tight hips, forward head posture, unstable knees, and movement habits that most people can feel but cannot see or correct.

Students, desk workers, gym-goers, and active people all run into the same wall. They know something feels off, but they don’t know:

  • what the issue actually is
  • what it may lead to
  • whether they’re making it worse while trying to fix it

That gap felt deeply human.

Pain doesn’t arrive as a single dramatic moment — it accumulates over time:

$$ \text{Strain} \approx \sum (\text{small misalignments} \times \text{repetition}) $$

We wanted to build something that could catch those signals early, translate them into plain language, and give people a clear next step before strain becomes injury.


What it does

StrainSense guides the user through a short camera-based movement assessment, including stance and movement checks, then analyzes their body in real time using pose landmarks.

It detects posture and movement patterns such as:

  • rounded shoulders
  • forward head posture
  • anterior pelvic tilt
  • knee valgus
  • lateral asymmetry
  • thoracic kyphosis
  • neck flexion

Output

After the scan, StrainSense generates:

  • a body strain map
  • annotated evidence frames
  • severity and confidence labels
  • personalized drill recommendations
  • a population comparison view
  • a weekly exercise plan

Live coaching

Users can enter a live coaching mode where StrainSense:

  • watches movement in real time
  • overlays alignment visuals
  • surfaces high-priority correction cues
  • warns about unsafe form
  • speaks coaching cues aloud
  • shows looping demo videos for drills

Core system loop

scan → analyze → prescribe → coach
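
The loop above can be sketched as a small phase state machine. This is an illustrative sketch, not the app’s actual code; the phase names mirror the loop and `nextPhase` is a hypothetical helper:

```typescript
// The four phases of the core system loop.
type Phase = "scan" | "analyze" | "prescribe" | "coach";

const ORDER: Phase[] = ["scan", "analyze", "prescribe", "coach"];

// Advance to the next phase, wrapping back to "scan" after "coach"
// so the user can re-assess after a coaching session.
function nextPhase(current: Phase): Phase {
  const i = ORDER.indexOf(current);
  return ORDER[(i + 1) % ORDER.length];
}
```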

How we built it

StrainSense is a browser-based application built with:

  • React
  • Vite
  • TypeScript
  • MediaPipe Pose

We use getUserMedia for live webcam input and a guided assessment pipeline that handles:

  • camera setup
  • visibility checks
  • step-by-step capture
  • frame selection
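
A visibility check like the one in this pipeline can be a simple gate over per-landmark confidence. MediaPipe Pose reports an optional `visibility` score per landmark; the threshold and function name below are illustrative, not the app’s actual values:

```typescript
// Minimal landmark shape following MediaPipe Pose output.
interface Landmark {
  x: number;
  y: number;
  visibility?: number; // 0..1 confidence that the point is visible
}

// Returns true only when every landmark clears the visibility threshold,
// i.e. the user is fully in frame and capture can proceed.
function allVisible(landmarks: Landmark[], threshold = 0.5): boolean {
  return landmarks.every((lm) => (lm.visibility ?? 0) >= threshold);
}
```

Gating capture on this check keeps low-confidence frames out of the analysis entirely, which is cheaper than trying to repair them downstream.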

Metrics engine

We transform pose landmarks into biomechanical signals:

$$ \text{metrics} = f(\text{pose landmarks}) $$

Examples include:

  • shoulder height differences
  • head-forward offset
  • thoracic angle
  • knee tracking deviation
  • asymmetry scores
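
Two of the metrics above can be sketched directly from normalized landmarks. The landmark indices follow MediaPipe Pose (11 = left shoulder, 12 = right shoulder, 7 = left ear), but the metric definitions themselves are simplified illustrations, not StrainSense’s exact formulas:

```typescript
interface Point {
  x: number;
  y: number;
}

// Vertical gap between the shoulders, as a fraction of image height.
// A persistently non-zero value is one signal of lateral asymmetry.
function shoulderHeightDiff(landmarks: Point[]): number {
  return Math.abs(landmarks[11].y - landmarks[12].y);
}

// Horizontal ear-over-shoulder offset in a side-on view: a rough proxy
// for forward head posture (larger = head further in front of the shoulder).
function headForwardOffset(landmarks: Point[]): number {
  return Math.abs(landmarks[7].x - landmarks[11].x);
}
```

Real thresholds would need calibration for camera angle and body proportions; the point is that each signal reduces to a small, testable function of landmark geometry.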

Challenges

Reliability

Real-world input is unstable:

  • lighting changes
  • camera angle
  • occlusion
  • landmark jitter
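
One standard way to damp landmark jitter is an exponential moving average across frames. The sketch below shows the idea; it is not necessarily the filter StrainSense ships, and the `alpha` value is illustrative:

```typescript
interface Point {
  x: number;
  y: number;
}

// Blend each new landmark with its smoothed predecessor.
// alpha in (0, 1]: higher = more responsive, lower = smoother.
function smoothLandmarks(
  prev: Point[] | null,
  next: Point[],
  alpha = 0.3
): Point[] {
  if (!prev) return next; // first frame: nothing to blend with
  return next.map((p, i) => ({
    x: alpha * p.x + (1 - alpha) * prev[i].x,
    y: alpha * p.y + (1 - alpha) * prev[i].y,
  }));
}
```

The trade-off is latency: heavier smoothing steadies the overlay but makes real-time coaching cues lag the user’s actual movement.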

Interpretation

Turning pose data into meaningful, safe feedback required careful design.


Real-time coaching

Balancing responsiveness with clarity across visual overlays and voice feedback was difficult.


Accomplishments

  • fully working end-to-end system
  • real-time movement coaching
  • intuitive user experience
  • ethical considerations built into the product

What we learned

The biggest lesson was that users only act on feedback they trust:

$$ \text{trust} = \text{clarity} + \text{transparency} + \text{consistency} $$


What’s next

Clinical validation

Work with physiotherapists and movement experts

Progress tracking

Compare metrics across sessions to show measurable change over time:

$$ \Delta \text{metrics} = \text{metrics}_{t+1} - \text{metrics}_t $$
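
The equation above maps directly onto a per-metric diff. This is a hypothetical sketch; the metric names and the zero-baseline fallback for new metrics are assumptions:

```typescript
// A session's metrics keyed by name, e.g. { headForward: 0.08 }.
type Metrics = Record<string, number>;

// Δmetrics = metrics_{t+1} − metrics_t, computed per metric.
// Metrics absent from the earlier session are treated as a zero baseline.
function metricsDelta(prev: Metrics, next: Metrics): Metrics {
  const delta: Metrics = {};
  for (const key of Object.keys(next)) {
    delta[key] = next[key] - (prev[key] ?? 0);
  }
  return delta;
}
```

A negative delta on a strain metric (e.g. head-forward offset shrinking week over week) is the improvement signal a progress view would surface.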

Coaching improvements

  • adaptive cueing
  • drill expansion
  • mobile support

Hackathon fit

Impact

Targets a widespread physical health issue

Technical execution

Real-time CV + coaching system

Ethics

No diagnosis claims, no default storage

Presentation

Highly visual and demo-friendly
