Inspiration
Musculoskeletal conditions affect roughly 1 in 2 adults in the United States (about 127 million people), and over 50 million Americans seek physical therapy every year to recover from them. Yet most patients leave appointments with only a vague sense of which muscles are actually affected, and little to no guidance on what movements are safe at home. We built DeepMesh to change that: giving patients a clear, visual, and interactive perspective into their own anatomy, along with real-time guardrails to protect them during recovery.
What it does
- **3D UNet Segmentation:** A deep learning model trained on CT scans automatically predicts segmentation labels from raw scans, with no manual annotation required.
- **Interactive muscle viewer:** Takes a labeled segmentation file and renders each muscle region as an interactive 3D mesh, explorable directly in the browser.
- **Real-time ROM tracking:** A camera-based injury-prevention feature tracks range of motion live and warns patients when a movement could aggravate a pre-existing injury.
How we built it
3D Muscle Segmentation Pipeline
We trained a 3D UNet in PyTorch on labeled CT volumes to classify each voxel into its corresponding anatomical structure: biceps, triceps, and humerus. The model outputs a labeled volume that feeds directly into the mesh-generation layer. To ensure clean boundaries without scan artifacts, we apply a Gaussian filter via scipy before segmentation, preserving clinically relevant structure while suppressing noise. From the labeled volume, we generate smooth, watertight meshes using marching cubes from scikit-image, with nibabel handling NIfTI I/O. The resulting geometry is serialized as flat vertex and face arrays, ready to stream directly to the browser with no intermediate file format. Because muscle mesh data can be dense, we optimized the pipeline to prioritize surface geometry over volumetric data, expose only the labeled regions the patient cares about, and apply Laplacian smoothing to produce clean, readable shapes without overwhelming the renderer.
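The core of the mesh step can be sketched as below. This is a minimal illustration, not our exact implementation: the label IDs and function names are hypothetical, and the labeled volume stands in for the 3D UNet's argmax output.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import marching_cubes

# Hypothetical label map; the real model's label IDs may differ.
LABELS = {1: "biceps", 2: "triceps", 3: "humerus"}

def denoise(scan, sigma=1.0):
    """Gaussian-smooth the raw scan before segmentation to suppress artifacts."""
    return gaussian_filter(scan.astype(np.float32), sigma=sigma)

def labels_to_meshes(labeled):
    """Extract one surface mesh per labeled structure.

    `labeled` is a (D, H, W) integer volume, e.g. the 3D UNet's argmax output.
    Returns flat vertex/face arrays ready to serialize for the browser.
    """
    meshes = {}
    for label_id, name in LABELS.items():
        mask = (labeled == label_id).astype(np.float32)
        if mask.max() == 0:  # structure absent from this scan
            continue
        # marching_cubes extracts the iso-surface at 0.5 of the binary mask.
        verts, faces, _, _ = marching_cubes(mask, level=0.5)
        meshes[name] = {"vertices": verts.tolist(), "faces": faces.tolist()}
    return meshes
```

Keeping the output as plain nested lists is what lets the API hand the geometry straight to the frontend as JSON, with no intermediate mesh file format.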
Interactive Muscle Viewer
The frontend is built with standard HTML, CSS, and JavaScript, keeping the interface lightweight and accessible. Mesh data returned from the API is passed directly into Plotly.js, which renders each muscle group as an independent 3D surface. Patients can toggle individual muscles on and off to isolate the tissues relevant to their recovery.
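The shape of the payload the viewer consumes can be sketched in Python (the browser side is plain Plotly.js, but the conversion is the same): Plotly's `mesh3d` trace expects per-axis vertex coordinates `x`/`y`/`z` and per-corner triangle indices `i`/`j`/`k`. The helper name here is illustrative.

```python
def to_plotly_mesh3d(vertices, faces, name):
    """Convert flat vertex/face arrays into the dict shape a Plotly mesh3d
    trace expects: coordinate arrays x/y/z and triangle-corner indices i/j/k."""
    xs, ys, zs = zip(*vertices)   # transpose [[x, y, z], ...] into three axes
    i, j, k = zip(*faces)         # transpose [[a, b, c], ...] into three corners
    return {
        "type": "mesh3d", "name": name,
        "x": list(xs), "y": list(ys), "z": list(zs),
        "i": list(i), "j": list(j), "k": list(k),
    }
```

Emitting one trace per muscle group is what makes the per-muscle visibility toggles trivial: each trace can be shown or hidden independently.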
Real-Time Range-of-Motion Tracker
Live camera frames are captured via the Canvas API and sent to the backend at approximately 10 fps. MediaPipe Pose extracts joint landmarks, from which we compute shoulder flexion, abduction, elbow flexion, and rotational angles in 3D space. Each angle is checked against personalized ROM thresholds derived from the patient's own scan geometry. When movement approaches a danger threshold, the system triggers an immediate visual warning in the browser overlay; no therapist required.
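The angle check reduces to standard vector geometry on three landmarks per joint. A minimal sketch (function names and the warning margin are illustrative, not our exact thresholds):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint `b` in degrees, formed by 3D landmarks a-b-c,
    e.g. shoulder-elbow-wrist for elbow flexion."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def rom_warning(angle, safe_max, margin=10.0):
    """Classify a measured angle against a personalized safe limit:
    'danger' past the limit, 'caution' within the margin, else 'ok'."""
    if angle > safe_max:
        return "danger"
    if angle > safe_max - margin:
        return "caution"
    return "ok"
```

In the live loop, the landmarks come from each MediaPipe Pose result and `safe_max` is the per-patient threshold derived from their scan geometry.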
Backend & Deployment
The API is built with FastAPI, chosen for its async support and automatic schema validation via Pydantic. The entire system is containerized with Docker for reproducible, one-command deployment.
Challenges we ran into
Labeled CT scan datasets for musculoskeletal structures are scarce, which constrained the size and diversity of our training set. We also had to carefully balance mesh resolution against browser performance: rendering high-fidelity meshes for every muscle group simultaneously pushes the limits of real-time rendering. On the ROM tracking side, ensuring robust joint detection across different body types, camera angles, and lighting conditions required significant tuning.
Accomplishments that we're proud of
- End-to-end pipeline from raw CT scan to interactive 3D muscle visualization, fully automated via our 3D UNet model
- Per-muscle-group mesh rendering that patients can explore and isolate directly in the browser
- Real-time, camera-based range-of-motion warnings that work without specialized hardware
- Clean, containerized architecture that makes the full stack reproducible and deployable
What we learned
Building for patients, not clinicians, forced us to constantly ask whether each feature was genuinely useful or just technically interesting. We learned how to process volumetric medical data into clean, artifact-free meshes, how to optimize 3D rendering for the web, and how to make pose estimation robust enough for a real-world home rehab setting.
What's next for DeepMesh
- Expand the segmentation model to additional body regions beyond the lower and upper extremities
- Introduce 4D reconstruction by incorporating time-series scan data to visualize muscle movement over time
- Add personalized rehab progress tracking, logging ROM measurements over weeks so patients can see their recovery arc
- Explore integration with wearable sensors for more precise joint-angle measurements in real-world settings