Inspiration
The inspiration for SurgeVue came from the high risks associated with brain surgeries, where precision is critical and the margin for error is minimal. With a mortality rate that increases 15-fold for non-elective neurosurgeries, we were motivated to create a solution that leverages augmented reality (AR) and machine learning to enhance the precision of brain tumor removal surgeries. By merging real-time data with surgical tools, we aimed to provide surgeons with an advanced assistant to reduce risks and save lives.
What it does
SurgeVue is an AR-powered surgical assistant that provides neurosurgeons with real-time visual overlays during brain surgery. Using machine learning, the system outlines tumors on the patient's brain and detects the presence of foreign objects. Integrated with an Arduino gyroscope for hand tracking, SurgeVue gives surgeons real-time feedback on tool movements, sweat-sensor readings, and hand-movement trends—all displayed in a secure mobile app that protects patient data with facial recognition and RFID technology. The system empowers surgeons to make more informed, precise decisions during the most delicate procedures.
How we built it
We built SurgeVue using a combination of cutting-edge technologies:
- OpenCV for real-time tumor detection and hand-movement tracking in the augmented view.
- PyTorch for classifying tumors.
- Swift and SceneKit to create an immersive AR environment that overlays tumor outlines onto the surgeon's view.
- Arduino gyroscope for tracking the surgeon's hand movements and tool positioning.
- PropelAuth to ensure secure access to sensitive patient data via facial recognition and RFID.
- Flask backend to process machine learning models and serve image classification results via API.
- A mobile app that visualizes gyroscope, sweat-sensor, and hand-movement trends.
Challenges we ran into
One of the biggest challenges was making the AR overlay, tumor detection, and hand tracking run in real time with minimal latency. We had to optimize our models for seamless performance in the fast-paced environment of an operating room. Integrating the hardware components, like the Arduino gyroscope, and managing precise hand tracking also posed challenges, as did designing an interface that stays informative without overwhelming the surgeon mid-procedure.
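One pattern that helps keep inference from falling behind the camera is to hold only the latest frame and drop stale ones, so the model always processes fresh input. This is a minimal stdlib sketch of that idea, not our production code:

```python
import threading

class LatestFrame:
    """Single-slot frame buffer: the capture thread overwrites the
    slot, the inference thread always reads the freshest frame, and
    any frames the model was too slow to consume are dropped."""

    def __init__(self):
        self._cond = threading.Condition()
        self._frame = None

    def push(self, frame):
        # Called by the camera/capture thread; replaces any unread frame.
        with self._cond:
            self._frame = frame
            self._cond.notify()

    def pop(self, timeout=1.0):
        # Called by the inference thread; returns the newest frame,
        # or None if nothing arrives within `timeout` seconds.
        with self._cond:
            if self._frame is None:
                self._cond.wait(timeout)
            frame, self._frame = self._frame, None
            return frame
```

Because the buffer holds one frame, end-to-end lag is bounded by a single inference pass instead of growing with a queue.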
Accomplishments that we're proud of
- Successfully implementing real-time AR overlays that provide surgeons with critical information at a glance.
- Developing a machine learning model that accurately classifies tumors and detects foreign objects.
- Integrating hardware sensors (gyroscope, sweat sensors) to provide surgeons with hand movement insights, enhancing precision during surgeries.
- Ensuring patient data security through advanced authentication measures like facial recognition and RFID.
What we learned
We learned how to combine AR and machine learning into a cohesive solution that can operate in real time under conditions as intense as surgery. We also gained experience integrating hardware components, optimizing machine learning models for low latency, and handling large datasets like medical imaging. Furthermore, we realized SurgeVue could spare OR nurses and surgeons the intense radiation exposure from Medtronic imaging devices, which otherwise forces them to interrupt an operation. Finally, building an intuitive, non-intrusive interface for surgeons highlighted the importance of user-centered design in healthcare applications.
What's next for SurgeVue
Next, we plan to:
- Refine the Machine Learning Model: Enhance tumor classification accuracy and expand it to detect other conditions and anomalies.
- Clinical Trials: Test SurgeVue in real-world surgical settings and gather feedback from neurosurgeons.
- Tool Tracking: Further refine the hand-tracking and integrate more advanced surgical tools into the AR environment.
- Global Expansion: Implement support for other AR platforms like Hololens and explore expanding the use of the system in other complex surgeries beyond neurosurgery.
- 3D Implementation: Create 3D models of the brain for the surgeon to interact with in real time.

