Inspiration
Home physical therapy has a dropout rate of roughly 50%, largely because patients exercise without feedback, lose motivation, and lack clinical-grade oversight. Traditional PT costs $150–250 per session with limited insurance coverage, creating a gap where patients try to self-manage recovery after surgery or injury, often incorrectly.
Opennote Planning: https://easy-berry-9319.opennote.space/ccd3cbc6-a266-49da-a244-2a2ef53830d0
We wanted to replicate the "eyes-on" supervision a therapist provides by building a real-time computer vision system that tracks joint kinematics, detects compensation patterns, and delivers clinical feedback, all from a standard webcam.
What it does
KinetIQ is a webcam-based rehabilitation supervision platform with two user roles:
Patient App
- Real-time pose estimation tracking 33 body landmarks at 30fps via MediaPipe PoseLandmarker
- Exercise-specific kinematic analysis: joint angle calculation using 3-point inverse kinematics (atan2-based angle computation)
- Bilateral asymmetry index: compares left vs. right limb ROM to detect compensatory patterns
- Fatigue detection: variance analysis over a sliding window of landmark velocity
- Form correction: error classification (overextension, knee valgus, trunk lean, pelvic drop) with timestamped logging
- AI-generated session summaries and error coaching via LLM integration
Therapist Dashboard
- Cohort view with longitudinal asymmetry trend charts (Recharts)
- Session-by-session rep quality breakdown (correct / compensated / incomplete stacked bar)
- Clinical annotation system with flagging for review
- Exercise prescription management with target ROM and frequency
- Automated progression triggers (e.g., "advance phase when asymmetry < 10% for 3 sessions")
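A progression trigger like the example above can be sketched as a simple predicate over recent sessions (the names and shapes here are ours for illustration, not the production schema):

```typescript
// Minimal shape of a stored session result (hypothetical)
interface SessionResult {
  asymmetryPct: number; // bilateral asymmetry index for the session
}

// Advance the rehab phase only when the last `requiredStreak` sessions
// all stayed under the asymmetry threshold.
function shouldAdvancePhase(
  sessions: SessionResult[],
  thresholdPct = 10,
  requiredStreak = 3,
): boolean {
  const recent = sessions.slice(-requiredStreak);
  return (
    recent.length === requiredStreak &&
    recent.every((s) => s.asymmetryPct < thresholdPct)
  );
}
```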
How we built it
Frontend Stack
- React 18 + Vite + TypeScript: Component architecture with SPA routing via React Router v6
- Tailwind CSS + shadcn/ui: Utility-first styling with Radix UI primitives for accessible components
- Framer Motion: Choreographed animations for session flow and metric transitions
- Recharts: Time-series visualization for asymmetry trends, quality breakdown, fatigue onset
Computer Vision Pipeline
- MediaPipe PoseLandmarker (`@mediapipe/tasks-vision`): 33-landmark pose detection using the `pose_landmarker_heavy` model with GPU delegate
Landmark processing:
- Exponential Moving Average (EMA) smoothing with α=0.3 to reduce jitter
- Per-exercise primary landmark highlighting (e.g., `[11, 12, 13, 14, 15, 16, 23, 24]` for shoulder exercises)
- Real-time angle arc rendering on canvas with degree labels
- Rep counting state machine: Phase-based detection (up/down) with angle thresholds per exercise
- Fatigue detection: Variance calculation over 30-frame window of joint angles — triggers when σ² > 120
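The smoothing and fatigue logic above can be sketched as two small helpers (constants are from the writeup; function names are ours):

```typescript
const ALPHA = 0.3;            // EMA smoothing factor
const WINDOW = 30;            // frames per variance window
const FATIGUE_VARIANCE = 120; // σ² threshold (deg²)

// Exponential moving average: smoothed = α·current + (1 − α)·previous
function ema(prev: number, current: number, alpha = ALPHA): number {
  return alpha * current + (1 - alpha) * prev;
}

// Population variance over the most recent WINDOW joint-angle samples;
// erratic angles (high variance) are treated as a fatigue signal.
function fatigueTriggered(angles: number[]): boolean {
  if (angles.length < WINDOW) return false;
  const win = angles.slice(-WINDOW);
  const mean = win.reduce((a, b) => a + b, 0) / win.length;
  const variance =
    win.reduce((sum, x) => sum + (x - mean) ** 2, 0) / win.length;
  return variance > FATIGUE_VARIANCE;
}
```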
Joint Angle Calculation
```javascript
// Interior angle at vertex b formed by points a–b–c
// (e.g. shoulder–elbow–wrist for elbow flexion)
function calculateAngle(a, b, c) {
  // Signed angle between rays b→c and b→a
  const radians = Math.atan2(c.y - b.y, c.x - b.x) - Math.atan2(a.y - b.y, a.x - b.x);
  let angle = Math.abs((radians * 180) / Math.PI);
  // Normalize to the interior angle in [0°, 180°]
  if (angle > 180) angle = 360 - angle;
  return angle;
}
```
Bilateral Asymmetry Index
```javascript
asymmetryIndex = (Math.abs(rightAngle - leftAngle) / ((rightAngle + leftAngle) / 2)) * 100;
```
This mirrors the limb symmetry measures clinicians commonly use in return-to-activity clearance decisions (target: <10% asymmetry).
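As a standalone helper (a sketch; the function name is ours), the index above works out like this:

```typescript
// Percent difference between limbs, relative to their mean ROM
function asymmetryIndex(rightAngle: number, leftAngle: number): number {
  const mean = (rightAngle + leftAngle) / 2;
  return (Math.abs(rightAngle - leftAngle) / mean) * 100;
}
// e.g. 105° vs. 95° ROM yields 10%, right at the clearance boundary
```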
Backend Architecture
- Supabase (PostgreSQL): 9 tables, including `session_results`, `pain_check_ins`, `clinical_annotations`, `exercise_prescriptions`, `progression_triggers`, `therapist_patients`, and `user_roles`
- Row-Level Security (RLS): a `has_role()` SECURITY DEFINER function enforces role-based access
- Supabase Auth: JWT-based authentication with a role enum (`patient | therapist`)
Edge Functions (Deno)
| Function | Purpose | AI Model |
|---|---|---|
| session-insight | Post-session clinical summary | Qwen2.5-7B via Featherless |
| error-coaching | Real-time form correction explanation | Qwen2.5-7B |
| daily-tip | Personalized coaching based on session history | Qwen2.5-7B |
| progress-analysis | Longitudinal trajectory analysis | Qwen2.5-7B |
| exercise-recommendation | Next-exercise prescription based on history | Qwen2.5-7B |
| correct-form-image | AI-generated illustration of correct form | Gemini 2.5 Flash (Lovable AI) |
Session Recording
- Canvas-based recording via MediaRecorder API
- Format: WebM VP9 @ 800kbps
- Blob storage with URL generation for playback
Challenges we ran into
1. Landmark drift at extreme ROM
At high joint angles (>120°), MediaPipe landmarks occasionally jumped. Solved with EMA smoothing and bilateral validation — if one limb's angle is physiologically implausible, we fall back to the other.
2. Rep counting reliability
Simple threshold crossing caused double-counting on slow reps. We implemented a state machine with cooldown frames (`lastErrorFrameRef.current`) and hysteresis zones.
3. Exercise-specific error detection
Each exercise has different failure modes:
- Shoulder lateral raise → trunk lean
- Squat → knee valgus
- Step-up → pelvic drop
Built a per-exercise error classification system with anatomical landmark checks:
```javascript
// Pelvic drop detection for step-ups: compare normalized hip heights
const pelvicDrop = Math.abs(lHip.y - rHip.y);
if (pelvicDrop > 0.04) triggerError("Hip drop — engage your glutes");
```
4. LLM response consistency
Open-source LLMs sometimes returned malformed JSON. We implemented fallback parsing with graceful degradation to raw-text coaching.
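The fallback parsing can be sketched as a tolerant parser (names are ours; the deployed Edge Functions differ in detail): try strict JSON first, then extract an embedded `{...}` block from a chatty response, else degrade to raw text.

```typescript
interface CoachingResult {
  structured: Record<string, unknown> | null; // parsed JSON, if any
  rawText: string;                            // always available as fallback
}

function parseCoachingResponse(raw: string): CoachingResult {
  try {
    return { structured: JSON.parse(raw), rawText: raw };
  } catch {
    // LLM wrapped the JSON in prose: grab the first {...} span
    const match = raw.match(/\{[\s\S]*\}/);
    if (match) {
      try {
        return { structured: JSON.parse(match[0]), rawText: raw };
      } catch {
        /* fall through to raw-text coaching */
      }
    }
    return { structured: null, rawText: raw };
  }
}
```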
5. Therapist adoption
Therapists need clinical-grade data, not consumer metrics.
Added:
- asymmetry trend charts
- compensation pattern logging
- progression triggers mirroring clinical decision-making
Accomplishments that we're proud of
- Sub-degree precision on joint angles via EMA smoothing at 30fps
- 7 deployed Edge Functions providing real-time AI coaching without client-side model inference
- Bilateral asymmetry tracking — the metric orthopedic surgeons actually use for clearance
- Dual-role architecture with RLS-protected data separation between patients and therapists
- Zero-config AI integration via Lovable AI gateway (no API keys required for core features)
- Session recording with canvas capture for therapist review
What we learned
- Asymmetry > raw ROM: Therapists care less about absolute angles and more about left-right symmetry
- Fatigue detection from velocity variance is more predictive than rep count alone
- Landmark visibility scores matter: MediaPipe returns confidence per landmark — we filter below 0.3
- EMA with α=0.3 is the sweet spot between responsiveness and jitter reduction
- Structured JSON prompts with strict schemas prevent LLM hallucination in clinical context
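The visibility filter is a one-liner in practice. A sketch, where `Landmark` mirrors MediaPipe's per-landmark fields (the 0.3 threshold is from the writeup):

```typescript
interface Landmark {
  x: number;
  y: number;
  visibility: number; // per-landmark confidence from MediaPipe
}

// Keep only landmarks confident enough to feed into angle math
function usableLandmarks(landmarks: Landmark[], minVisibility = 0.3): Landmark[] {
  return landmarks.filter((lm) => lm.visibility >= minVisibility);
}
```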
What's next for KinetIQ
- 3D pose estimation: Current 2D pipeline struggles with overhead/lateral exercises — exploring MediaPipe Holistic
- Wearable IMU integration: Ground-truth validation for angle measurements
- Predictive pain modeling: Currently reactive — want to predict flare-ups from fatigue/asymmetry trends
- Multi-clinic portal: HIPAA-compliant therapist management across organizations
- Mobile PWA: Offline-capable session capture with sync-on-reconnect
- Therapist video annotation: Overlay landmarks on recorded sessions for patient education
Built With
- deno
- featherless
- framer
- lovablecloud
- lucide
- mediapipe
- opennote
- plpgsql
- postcss
- radix
- react
- shadcn
- supabase
- tailwind
- typescript
- vite
- zod

