Inspiration

CeMoQu (Cerebellum Motion Quantitative) was inspired by a real-world story about a patient with ataxia who experienced worsening symptoms while waiting for a follow-up clinical appointment that could not be scheduled in time. This highlighted a critical gap in neurological care: patients with progressive conditions often go weeks or months without objective reassessment, even though subtle changes in motor function may occur much earlier.

This made me realize that a key limitation in current clinical practice is not the lack of diagnostic tools, but the lack of frequent, accessible, and objective monitoring between visits. CeMoQu was created to address this gap by digitizing a core component of the SARA neurological assessment and making it usable from anywhere with a webcam.

What it does

CeMoQu is a zero-install, browser-native clinical decision-support tool that automates the SARA (Scale for the Assessment and Rating of Ataxia) sitting balance test using on-device AI.

From a standard webcam, it:

  • Runs Google BlazePose (a state-of-the-art CNN) entirely in-browser via WebAssembly — no data ever leaves the device
  • Tracks the bilateral shoulder midpoint as a trunk position proxy at ~30 FPS
  • Converts pixel-level motion into calibrated centimeter-scale sway metrics (max displacement, RMS, sway duty cycle, inter-frame velocity)
  • Applies a deterministic, evidence-based SARA scoring engine (0–4) modeled directly on the original 2006 Schmitz-Hübsch publication
  • Generates an AI-powered clinical narrative interpreting the biomechanical findings
  • Exports research-grade data: per-frame CSV, sway trajectory PNG, and score history charts

In under 10 seconds of assessment time, CeMoQu produces a structured clinical score with a fully auditable decision trace — making it repeatable, explainable, and accessible outside the clinic.

How I built it

CeMoQu is a fully client-side web application — no server, no database, no deployment pipeline.

ML Inference Layer
The core is MediaPipe BlazePose, a two-stage CNN that detects 33 anatomical landmarks with sub-pixel precision. It's compiled to WebAssembly and optionally GPU-accelerated via WebGL, running natively inside the browser at real-time frame rates.

Biomechanical Feature Extraction
Raw landmark streams are processed into clinically meaningful postural metrics. The shoulder midpoint M = (L₁₁ + L₁₂) / 2 — the centroid of landmarks 11 and 12 — is transformed from normalized image coordinates to real-world centimeters using a pixels-per-centimeter (PPC) calibration derived from the patient's shoulder width. From this calibrated trace, the pipeline extracts:

$$\text{Max sway} = \max(|p_i - \mu|), \quad \text{Duty cycle} = \frac{\text{frames with } |p_i - \mu| > 1\text{cm}}{N}$$
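In code, these reductions are a handful of array operations over the calibrated midpoint trace. A minimal sketch, using the landmark indices and the 1 cm duty-cycle threshold from the formula above (function names are illustrative, not CeMoQu's actual module API):

```javascript
// Shoulder midpoint from BlazePose landmarks 11 (left shoulder) and
// 12 (right shoulder), converted from normalized image coordinates to
// centimeters via a pixels-per-centimeter (ppc) calibration factor.
function midpointCm(landmarks, frameWidthPx, ppc) {
  const mx = (landmarks[11].x + landmarks[12].x) / 2; // normalized [0, 1]
  return (mx * frameWidthPx) / ppc;
}

// Sway metrics over a calibrated 1-D midpoint trace (centimeters):
// max displacement, RMS deviation, and the duty cycle from the formula
// above (fraction of frames deviating more than 1 cm from the mean).
function swayMetrics(trace, dutyThresholdCm = 1) {
  const n = trace.length;
  const mean = trace.reduce((s, p) => s + p, 0) / n;
  const dev = trace.map(p => Math.abs(p - mean)); // |p_i - mu|
  return {
    maxSway: Math.max(...dev),
    rms: Math.sqrt(dev.reduce((s, d) => s + d * d, 0) / n),
    dutyCycle: dev.filter(d => d > dutyThresholdCm).length / n,
  };
}
```

Keeping these as pure functions over plain arrays means the same code runs per-frame in the live loop and offline over an exported trace.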

SARA Scoring Engine
Rather than threshold-based approximations, the scoring logic is a deterministic decision tree faithfully implementing the clinical rubric — distinguishing intermittent vs. constant sway, detecting support events via single-frame velocity spikes (≥ 8 cm of displacement between consecutive frames), and handling tracking loss gracefully.
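The decision tree reduces to a small pure function. The ≥ 8 cm single-frame spike rule comes from the description above; the remaining numeric cutoffs below are illustrative placeholders, not CeMoQu's actual values, and the 0–4 levels paraphrase the SARA Item 3 rubric (intermittent sway, constant sway, intermittent support, constant support):

```javascript
// Flag support events: a single-frame displacement jump of >= 8 cm is
// treated as the patient reaching for support.
function countSupportEvents(trace, spikeCm = 8) {
  let events = 0;
  for (let i = 1; i < trace.length; i++) {
    if (Math.abs(trace[i] - trace[i - 1]) >= spikeCm) events++;
  }
  return events;
}

// Deterministic SARA Item 3 scorer (0-4). Returns null on tracking loss
// rather than guessing. Cutoff values here are illustrative.
function scoreSitting({ dutyCycle, maxSwayCm, supportEvents, trackingLost }) {
  if (trackingLost) return null;                    // degrade gracefully
  if (supportEvents >= 2) return 4;                 // constant support
  if (supportEvents === 1) return 3;                // intermittent support
  if (dutyCycle > 0.5) return 2;                    // constant sway
  if (dutyCycle > 0.05 || maxSwayCm > 1) return 1;  // intermittent sway
  return 0;                                         // normal sitting balance
}
```

Because every branch is explicit, the score comes with a human-readable decision trace: which threshold fired, and on what value.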

AI Clinical Summary
Claude generates a structured neurological interpretation of the extracted metrics, bridging the gap between raw sway data and clinical meaning.

Visualization & Export
Chart.js renders real-time sway trajectories and longitudinal score history. Blob API enables zero-server CSV and PNG export for research workflows.

Challenges I ran into

Turning pixels into centimeters. Webcam pixel displacement is device- and distance-dependent. Designing a robust calibration system using anatomical shoulder width as a physical reference — and making it accurate enough for clinical-grade measurement — required significant iteration.
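The core of that calibration can be sketched as follows. The ~38 cm default biacromial width is an assumption for illustration; in practice the patient's measured shoulder width would be entered:

```javascript
// Derive pixels-per-centimeter (PPC) from the detected shoulder width.
// landmarks[11]/[12] are the BlazePose shoulders in normalized x.
// shoulderWidthCm defaults to an assumed adult average (~38 cm); a
// measured value should be supplied whenever available.
function calibratePpc(landmarks, frameWidthPx, shoulderWidthCm = 38) {
  const px = Math.abs(landmarks[11].x - landmarks[12].x) * frameWidthPx;
  return px / shoulderWidthCm; // pixels per centimeter
}

// With the factor, any pixel displacement converts to centimeters:
const pxToCm = (pixels, ppc) => pixels / ppc;
```

Because the reference object (the shoulders) is the same structure being tracked, the calibration automatically adapts to camera distance, though it still assumes the patient stays roughly frontal to the lens.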

Encoding clinical judgment in code. SARA scoring isn't a simple lookup table. It relies on qualitative distinctions (intermittent vs. constant, needing support vs. not) that required careful translation into deterministic, auditable logic without losing clinical fidelity.

Real-time constraints in the browser. Coordinating ML inference, signal processing, scoring, and rendering — all within a single requestAnimationFrame loop — while maintaining smooth 30 FPS performance required careful architectural separation across the JS module system.
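One way to keep such a loop smooth is to make each tick a pure sequence of stages, so the requestAnimationFrame callback only schedules work and never owns logic. A minimal sketch of that separation (stage names are illustrative, not CeMoQu's actual modules):

```javascript
// Compose per-frame stages (inference output -> feature extraction ->
// accumulation) into one tick; each stage takes and returns frame state.
function makeTick(stages) {
  return frameState => stages.reduce((state, stage) => stage(state), frameState);
}

// Illustrative stages sharing a plain state object:
const extract = state => ({ ...state, xCm: state.rawXPx / state.ppc });
const accumulate = state => {
  state.trace.push(state.xCm); // running sway trace for scoring/plotting
  return state;
};

const tick = makeTick([extract, accumulate]);

// In the browser, the loop itself then stays trivial, e.g.:
//   requestAnimationFrame(function loop() {
//     tick(currentFrameState());          // hypothetical frame source
//     requestAnimationFrame(loop);
//   });
```

Keeping stages pure also makes them unit-testable outside the browser, which matters when the rendering budget per frame is about 33 ms.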

Accomplishments that I'm proud of

  • A faithful, explainable implementation of SARA Item 3 scoring — not a simplified proxy
  • Full on-device ML inference with zero network calls, preserving patient privacy by design
  • A calibrated biomechanical pipeline that converts webcam motion into clinically meaningful physical units
  • Research-grade data export making outputs usable in validation and longitudinal studies
  • A clean, modular codebase (8 JS modules with clear separation of concerns) that could serve as a foundation for additional SARA subtests

What I learned

Through building CeMoQu, I learned how to combine computer vision, clinical reasoning, and generative AI into a unified system.

I gained experience in:

  • Working with real-time pose estimation models for human movement tracking
  • Designing calibration methods to convert pixel data into physical measurements
  • Translating medical scoring systems into deterministic algorithms
  • Using AI not just for prediction, but for generating meaningful clinical interpretation
  • Building low-latency, fully client-side applications with real-time performance constraints

Most importantly, I learned how AI systems can be designed to support clinical decision-making rather than replace it, by making data more interpretable and accessible.

What's next for CeMoQu — Digital SARA Assessment for Ataxia

  • Expand SARA coverage — gait, stance, finger-nose, and heel-shin subtests
  • Longitudinal tracking dashboard for visualizing progression over weeks and months
  • Clinician portal for remote patient monitoring with secure score sharing
  • Validation study comparing CeMoQu outputs against in-clinic SARA assessments
  • Improved calibration for varied environments (lighting, camera angles, distances)

Ultimately, my goal is to enable continuous, accessible neurological monitoring outside the clinic, helping clinicians detect progression earlier and improve care for patients with ataxia.
