About Rom-Com

Inspiration

  • 795,000 Americans suffer a stroke each year.
  • 70% lose use of their arm.
  • Of those who survive, only 31% access outpatient rehabilitation.

The barrier to stroke & traumatic brain injury (TBI) rehab isn't motivation, it's access. Even the cheapest existing rehabilitation systems are prohibitively expensive, and patients face 6-12 month waitlists at clinics.

We realized the infrastructure to build at-home stroke/TBI rehab already exists in webcams -- that infrastructure became Rom-Com.

What We Learned

Stroke and TBI rehab has a much deeper clinical infrastructure than we expected...

  • Fugl-Meyer Assessment for Upper Extremity (FMA-UE)
  • Range of Motion (ROM) normalization
  • Systemic gaps in U.S. outpatient access

Even beyond clinical infrastructure, the technical architecture is incredibly nuanced...

  • EMA smoothing stabilizes noisy joint data
  • Mapping normalized ROM values to three.js visual feedback is harder than it sounds
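The EMA smoothing mentioned above can be sketched in a few lines (the smoothing constant here is illustrative, not our actual tuning):

```python
class EMASmoother:
    """Exponential moving average filter for noisy joint-angle streams.

    alpha close to 1 tracks the raw signal closely; closer to 0 smooths
    more aggressively at the cost of added lag.
    """

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self.value = None

    def update(self, sample: float) -> float:
        if self.value is None:
            self.value = sample  # seed the filter with the first sample
        else:
            self.value = self.alpha * sample + (1 - self.alpha) * self.value
        return self.value
```

Each joint gets its own smoother instance, so one jittery landmark doesn't contaminate the others.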

We also learned that non-technical contributors can drive REAL impact through agentic coding tools, research depth, and sponsor/mentor engagement. Recognizing these individual strengths, and the depth of the problem we were attacking, from day ONE is what made this work.

How We Built Rom-Com

We used MediaPipe to process a webcam stream in real time, extracting 33 body landmarks and 21 landmarks per hand (@ 25-30 fps).

The landmarks feed into a Random Forest classifier trained on labeled movement data, with ROM normalization calibrating each session to the user's individual baseline so the system adapts to ANY level of mobility. EMA smoothing and hysteresis then clean the signal before it's used for scoring.
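A minimal sketch of the ROM normalization and hysteresis steps described above (function names and thresholds are illustrative, not the production values):

```python
def normalize_rom(angle: float, baseline_min: float, baseline_max: float) -> float:
    """Map a raw joint angle onto [0, 1] relative to the user's calibrated range."""
    span = baseline_max - baseline_min
    if span <= 0:
        return 0.0
    return min(1.0, max(0.0, (angle - baseline_min) / span))


class HysteresisGate:
    """Debounce rep detection: enter the 'extended' state above the high
    threshold, leave it only below the low threshold, so jitter near a
    single cutoff can't double-count repetitions."""

    def __init__(self, low: float = 0.4, high: float = 0.7):
        self.low, self.high = low, high
        self.extended = False

    def update(self, norm_rom: float) -> bool:
        """Return True exactly once per crossing into the extended state."""
        if not self.extended and norm_rom >= self.high:
            self.extended = True
            return True
        if self.extended and norm_rom <= self.low:
            self.extended = False
        return False
```

Because normalization happens against each patient's own baseline, a partial extension for one user can count the same as a full extension for another.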

We used FastAPI to handle WebSocket connections that stream live joint data to the frontend and to expose REST endpoints for session management and FMA-UE score generation.

On the frontend, React and Three.js render real-time 3D exercise scenes driven by the WebSocket feed, with visual feedback that maps directly to the patient's normalized ROM values. An Arduino Uno with a buzzer and LED runs in parallel, translating movement milestones into multimodal feedback.
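The movement-milestone feedback reduces to a small mapping before anything is written to the serial port (the command bytes and thresholds here are made up for illustration; on the device side the Arduino sketch would switch on the byte to drive the buzzer and LED):

```python
def feedback_command(norm_rom: float) -> bytes:
    """Map a normalized ROM value to a one-byte command for the Arduino.

    b'L' lights the LED at 50% of the calibrated range, b'B' adds the
    buzzer at full range, b'N' means no feedback. (Illustrative values.)
    """
    if norm_rom >= 1.0:
        return b"B"
    if norm_rom >= 0.5:
        return b"L"
    return b"N"

# With pyserial, each frame's command would then be written as e.g.:
#   import serial
#   port = serial.Serial("/dev/ttyACM0", 9600)  # hypothetical port name
#   port.write(feedback_command(value))
```

Keeping the mapping pure makes it trivial to test without hardware attached.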

At the top of the patient-facing stack, HeyGen's LiveAvatar serves as a real-time AI companion that delivers spoken guidance and positive reinforcement as the session runs. Photon's iMessage API closes the loop between sessions, sending AI-generated daily reminders personalized to each patient's last session and streak. [MongoDB -- to be added.]

Challenges

  • Hardware Constraints ...sourcing Arduino components with limited supply led to creative workarounds
  • Real-time system complexity ...coordinating WebSocket, live pose estimation, and 3D visual feedback
  • ML reliability ...class imbalance in training data and noisy webcam input (led to SIGNIFICANT tuning)
  • Hackathon Format ...a 36-hour event is fundamentally different from the standard 24 hours, which changed how we scoped and paced our build plan

Built With

