Inspiration
We were inspired by how disconnected medical data can feel for patients and even clinicians. MRI scans, surgical video, and clinical knowledge all exist in silos, making it hard to build an intuitive, patient-centered understanding of what’s actually happening. We wanted to bridge that gap by making medical data interactive, explainable, and accessible in real time.
What it does
SurgeAI is a patient-centered surgical intelligence platform that converts MRI scans into interactive 3D anatomical models and augments live laparoscopic video with AI-powered overlays. It also includes a multimodal, search-driven AI agent that can analyze imaging and answer natural-language questions, providing clear, contextual explanations for both patients and clinicians. The system connects preoperative understanding with intraoperative guidance in a single unified experience.
How we built it
We built a pipeline that converts MRI data into 3D volumes, performs segmentation, and renders interactive models for exploration. For intraoperative support, we process live laparoscopic video and overlay AI-generated highlights, anatomical cues, and explanations in real time. On top of this, we integrated a multimodal AI agent capable of querying across images, 3D data, and text to deliver contextual insights through natural language interaction.
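The first stage of the pipeline can be sketched roughly as follows: stack 2D MRI slices into a 3D volume, then segment it by intensity. This is a deliberately simplified, stdlib-only Python illustration of the idea; the function names, synthetic data, and threshold-based segmentation are our assumptions here, while the real system relies on proper medical-imaging tooling and learned segmentation models.

```python
# Minimal sketch: stack 2D MRI-like slices into a 3D volume and
# segment it with a simple intensity threshold. Purely illustrative;
# a real pipeline would use DICOM/NIfTI readers and trained models.

def stack_slices(slices):
    """Stack equally sized 2D slices (lists of rows) into a volume[z][y][x]."""
    h, w = len(slices[0]), len(slices[0][0])
    assert all(len(s) == h and all(len(r) == w for r in s) for s in slices)
    return slices

def segment(volume, threshold):
    """Return a binary mask: 1 where intensity exceeds the threshold."""
    return [[[1 if v > threshold else 0 for v in row] for row in sl]
            for sl in volume]

def voxel_count(mask):
    """Count segmented voxels, e.g. to estimate a structure's volume."""
    return sum(v for sl in mask for row in sl for v in row)

# Two tiny synthetic 4x4 "slices", each with a bright 2x2 region.
slice_a = [[0, 0, 0, 0],
           [0, 9, 9, 0],
           [0, 9, 9, 0],
           [0, 0, 0, 0]]
slice_b = [[0, 0, 0, 0],
           [0, 8, 8, 0],
           [0, 8, 8, 0],
           [0, 0, 0, 0]]

volume = stack_slices([slice_a, slice_b])
mask = segment(volume, threshold=5)
print(voxel_count(mask))  # 8 bright voxels across both slices
```

The segmented mask is what then gets turned into a renderable 3D surface for interactive exploration.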
Challenges we ran into
One major challenge was handling medical imaging data correctly, including aligning slices and generating accurate 3D reconstructions. Real-time processing for surgical video required balancing latency with meaningful AI output. Another challenge was ensuring that multimodal understanding across 2D images, 3D volumes, and text remained coherent and useful rather than overwhelming.
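One strategy for the latency-versus-output trade-off mentioned above can be sketched as a depth-1 frame buffer: when AI inference is slower than the camera, older frames are silently dropped so overlays always reflect the most recent view. The frame values and the `analyze` stub below are illustrative assumptions, not the production code.

```python
from collections import deque

# Sketch of a latency-balancing strategy: a depth-1 buffer drops stale
# frames so slow inference always runs on the newest camera frame.
latest = deque(maxlen=1)

def camera_feed(frames):
    """Producer: push each incoming frame; maxlen=1 discards older ones."""
    for f in frames:
        latest.append(f)

def analyze(frame):
    """Stand-in for slow AI inference that produces an overlay."""
    return f"overlay for frame {frame}"

# Simulate five frames arriving while inference keeps up with only one.
camera_feed([1, 2, 3, 4, 5])
result = analyze(latest.popleft())
print(result)  # overlay for frame 5
```

Processing only the freshest frame trades a lower effective overlay rate for guidance that never lags behind what the surgeon is actually seeing.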
Accomplishments that we're proud of
We successfully built a full end-to-end pipeline from raw MRI data to interactive 3D visualization and real-time surgical augmentation. We also developed a multimodal AI agent that can meaningfully interpret and explain complex medical data. Most importantly, we created a system that makes advanced medical insight more accessible and patient-centered.
What we learned
We learned how powerful multimodal AI can be when combining vision, language, and spatial data. We also gained a deeper understanding of the challenges in medical imaging, real-time systems, and human-centered design in healthcare. Building for patients requires clarity, trust, and explainability, not just technical accuracy.
What's next for SurgeAI
Next, we want to improve model accuracy and expand support for more imaging modalities and surgical procedures. We plan to refine the Vision Pro experience for immersive exploration and enhance the AI copilot with stronger real-time reasoning and personalization. Long term, we aim to validate the system clinically and move toward real-world deployment in hospitals.