Every year, over 80 million CT scans are performed in the U.S. alone; adding MRI and other imaging modalities pushes the number even higher. This data is fundamentally three-dimensional, yet physicians consume it as stacks of 2D, black-and-white images. To understand complex internal structures, doctors must flip through dozens or even hundreds of slices to build a mental 3D reconstruction, which leads to a suboptimal understanding of a patient's anatomy and, ultimately, to poorer outcomes.

HyperViz is a next-generation virtual reality medical visualization app that empowers physicians to better prepare for surgical procedures. We process stacks of 2D images, such as CT scans, into 3D models, then visualize them in a VR environment integrated with a photogrammetric scan of the patient. HyperViz segments the anatomical data into different tissue types, such as skin, soft tissue, and bone.
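The core idea behind the segmentation step can be illustrated with a minimal sketch: CT voxel intensities are measured in Hounsfield units (HU), so thresholding the volume separates air, soft tissue, and bone. The thresholds and the `segment_ct` helper below are illustrative assumptions, not HyperViz's actual pipeline, and the "scan" is a synthetic NumPy volume standing in for real DICOM data.

```python
import numpy as np

# Hypothetical HU thresholds (illustrative, not HyperViz's actual values):
# air < -500 HU, soft tissue -500..300 HU, bone > 300 HU
def segment_ct(volume):
    """Label each voxel of a CT volume: 0 = air, 1 = soft tissue, 2 = bone."""
    labels = np.zeros(volume.shape, dtype=np.uint8)
    labels[volume >= -500] = 1   # soft tissue and denser
    labels[volume >= 300] = 2    # bone
    return labels

# Synthetic 3-slice "scan": a bone-like core inside a soft-tissue block, in air
vol = np.full((3, 64, 64), -1000.0)   # air background
vol[:, 16:48, 16:48] = 40.0           # soft-tissue block
vol[:, 28:36, 28:36] = 700.0          # dense (bone-like) core
seg = segment_ct(vol)
print(np.unique(seg))  # -> [0 1 2]
```

In a full pipeline, the labeled voxel masks would then be converted into renderable surface meshes (for example with a marching-cubes algorithm) before being imported into the VR scene.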

We built the app around a hand-tracking interface to maximize ease of use and enable physicians to rely on HyperViz without cumbersome controllers.
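At the heart of most hand-tracking interfaces is a simple geometric test: a gesture fires when tracked fingertip positions satisfy some distance condition. The sketch below shows this idea for a pinch gesture; the `is_pinching` function and the 25 mm threshold are hypothetical illustrations, not the Leap Motion SDK's actual API (which supplies the fingertip coordinates in a real app).

```python
import numpy as np

PINCH_THRESHOLD_MM = 25.0  # hypothetical activation distance

def is_pinching(thumb_tip, index_tip, threshold=PINCH_THRESHOLD_MM):
    """True when the thumb and index fingertips are within `threshold` mm,
    i.e. close enough to count as a pinch gesture."""
    thumb = np.asarray(thumb_tip, dtype=float)
    index = np.asarray(index_tip, dtype=float)
    return float(np.linalg.norm(thumb - index)) < threshold

# Fingertips 10 mm apart on each axis: distance ~17.3 mm, inside the threshold
print(is_pinching([0, 0, 0], [10, 10, 10]))  # -> True
```

Building interactions from tests like this, rather than controller buttons, is what lets physicians grab and rotate the model bare-handed.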

Team Members

Ashish Bakshi, Eric Tao, Gabriel Santa-Maria, Beste Aydin (Team 14)


From day 2 onwards, we worked on the 3rd floor of the MIT Media Lab (Building E14), at table 3A-10.


Platform: Windows 10 / HP Reverb Pro / Windows Mixed Reality + Leap Motion

Development Tools: Unity 2018.4.6f1, Microsoft Visual Studio 2019, GitHub

SDKs: Mixed Reality Toolkit v2.0, Leap Motion SDK, Windows SDK

Assets: Ashish Bakshi; Lazaroe

Paid Assets: None
