Inspiration:
We wanted to make medical annotations actually useful during live imaging. In ultrasound, laparoscopy, and echocardiography, both the anatomy and the camera move, so static drawings quickly become misleading. HoloRay's challenge pushed us to build motion-tracked annotations that stay anchored to the anatomy, not the screen.
What it does:
HoloView lets clinicians organize medical imagery, add annotations and notes, and watch those annotations move with the underlying anatomy in real time. As frames stream in, the system tracks the annotated region and updates positions so marks stay aligned even during motion, occlusion, or camera shifts.
How we built it:
We treat the video as a frame stream and run a tracking pipeline on the regions clinicians annotate. The core uses feature matching and/or optical flow to estimate motion between frames, then updates annotation positions accordingly. We wrap that in a lightweight backend and a simple web UI so doctors can create, edit, and review annotations alongside their notes.
Challenges we ran into:
Real-time performance vs. stability was the biggest trade-off: we had to keep latency low while avoiding jitter and drift. Occlusions and out-of-frame motion were tricky, since re-identifying the same anatomy when it re-entered the frame required careful tuning. We also had to make the system general across different modalities with very different textures and noise profiles.
Accomplishments that we're proud of:
We achieved stable, motion-tracked annotations that remain anchored through common probe and camera movements. The pipeline runs close to the source FPS and is robust enough to handle typical occlusions. We also integrated annotations with clinicians' notes and image organization, so the system feels like a real tool, not just a demo.
What we learned:
Small tracking errors accumulate quickly; stabilization and confidence gating matter as much as raw tracking accuracy. Medical imagery demands modality-aware tuning. And "real-time" is a product decision as much as a technical one: users feel the difference even at small latencies.
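The stabilization-and-gating lesson above can be sketched in a few lines: exponentially smooth each annotation's tracked position, and freeze it when tracker confidence drops rather than letting a bad measurement drag the mark away. The class name, smoothing weight, and confidence threshold below are illustrative assumptions, not HoloView's actual values.

```python
import numpy as np

class SmoothedAnchor:
    """Keep an annotation position steady: exponentially smooth incoming
    tracked positions, and hold the last good position when tracking
    confidence is low (illustrative sketch; thresholds are assumptions)."""

    def __init__(self, xy, alpha=0.4, min_conf=0.5):
        self.xy = np.asarray(xy, dtype=float)
        self.alpha = alpha        # weight given to each new measurement
        self.min_conf = min_conf  # below this, the update is gated out

    def update(self, measured_xy, conf):
        if conf < self.min_conf:
            return self.xy        # gate: ignore low-confidence measurements
        measured = np.asarray(measured_xy, dtype=float)
        # Exponential moving average damps frame-to-frame jitter.
        self.xy = (1.0 - self.alpha) * self.xy + self.alpha * measured
        return self.xy
```

The gate is what prevents a single bad frame (occlusion, motion blur) from yanking the annotation off the anatomy, while the moving average suppresses the pixel-level jitter that raw trackers produce.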
What's next for HoloView:
We plan to improve recovery after out-of-frame motion, add multi-modal presets, and support collaborative sessions over WebRTC. We also want to introduce model-assisted re-identification of anatomy and richer annotation types tailored to different clinical workflows.