Inspiration

I found Dr. Sachs's lab while looking at a poster bearing the school's motto: "Defy the Conventional". The poster showed Dr. Sachs performing deep brain stimulation (DBS) surgery with the caption "Virtual Reality in the Operating Room". I'd been wanting to do a VR project, and what better way than in the medical field, where I'm really interested in applying my tech skills in the future? So I reached out to the lab and was offered a position developing a VR project while holding an Undergraduate Research Opportunity Project (UROP) Scholarship. The scholarship required a poster presentation after six months, so I designed a pilot trial of the VR environment with healthy subjects, ran them through the tasks, and analyzed the output data in MATLAB and Python to present at the UROP Symposium in March 2017.

What it does

When the UROP trials are selected, one of two VR environments appears.

First task (simple):

The “BlueSky” task shows three equidistant, white target spheres aligned horizontally in front of the subject.

  • Preparation cue: after the subject places their VR controller, rendered as a grey sphere, over the central target, one of the left or right targets undergoes a discrete colour change from white to green, becoming activated.
  • Imperative cue: three seconds later, an auditory stimulus is presented and the subject moves their controller to cover the activated target within a five-second period to succeed in the trial.

Second task (enriched):

The “Workshop” task renders the same targets and controller inside a room that imitates a “Santa’s Workshop” and plays thematic music.

  • Preparation cue: identical to the BlueSky task.
  • Imperative cue: identical to the BlueSky task, except that the auditory stimulus is paired with a discrete transformation of the grey sphere into one of three meshes, shaped like a teddy bear, a candy cane, or a star, each of which retains the same collider shape as the grey sphere.

I created these two tasks to see whether adding distracting visual or auditory stimuli made any difference to performance on the centre-out task.
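The trial structure described above (a three-second preparation delay, then a five-second window to cover the activated target) can be sketched as follows. This is an illustration only, not the lab's actual code; the function and constant names are mine, and timestamps are assumed to be in seconds.

```python
# Sketch of one trial's outcome logic: preparation cue, then an
# imperative (auditory) cue, then a 5-second window in which the
# subject must cover the activated target. Names are illustrative.

PREP_DELAY = 3.0    # seconds between preparation cue and imperative cue
MOVE_WINDOW = 5.0   # seconds allowed to reach the activated target

def trial_outcome(imperative_time, cover_time):
    """Classify a trial: success if the activated target is covered
    within MOVE_WINDOW seconds of the imperative cue."""
    if cover_time is None:          # subject never covered the target
        return "fail"
    reaction = cover_time - imperative_time
    return "success" if 0.0 <= reaction <= MOVE_WINDOW else "fail"
```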

How I built it

The prototype code had been written in C++ using Unreal Engine, so I decided to port the prototype's paradigms to the Unity game engine, primarily because it had far more community support, and also because I already had a strong background in Java and wanted to apply those skills to a real project rather than algorithm problems for once. The scripts were written in C# in Visual Studio. I built the project with SteamVR and ran it on the HTC Vive, complete with the head-mounted display (HMD) and a handheld controller.

I also had to collect several streams of data for my research analysis: the time spent in each game state, the Vive controller's movement patterns, and the signals from the EEG cap the patients wore while performing the tasks. I used this library from a fellow researcher and programmer to stream data out of the game, and mapped time points for controller position and EEG signals onto the times when the game state changed. Using MATLAB scripts I wrote, I calculated average reaction times and plotted average movement clouds per trial, and put these data into my final research poster.
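The mapping step above amounts to aligning two timestamped streams: for each game-state change (here, each imperative cue) you find the first controller event that follows it. The actual analysis was done in MATLAB; this is a hedged Python sketch with names of my own choosing.

```python
from bisect import bisect_left

def reaction_times(cue_times, move_onsets):
    """Pair each imperative-cue timestamp with the first controller
    movement onset after it, returning the reaction times.
    Both inputs must be sorted lists of timestamps (seconds)."""
    rts = []
    for cue in cue_times:
        i = bisect_left(move_onsets, cue)   # first onset at/after the cue
        if i < len(move_onsets):
            rts.append(move_onsets[i] - cue)
    return rts

def average_reaction_time(cue_times, move_onsets):
    """Average reaction time across trials, or None if no pairs found."""
    rts = reaction_times(cue_times, move_onsets)
    return sum(rts) / len(rts) if rts else None
```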

Challenges I ran into

Figuring out how computer graphics work in 3D space meant learning a lot of physics. Building or finding assets in the Unity Community was not too difficult, but orienting the objects in space taught me a lot about vectors in three-dimensional space as well as yaw, pitch, and roll. Since the patients lie on an operating table at about a 45-degree angle, the targets always had to stay within a certain maximum distance of the HMD, and they had to be reset if the patient got tired of holding their arm in the air while performing the tasks. After brushing up on the physics, I ended up putting all my targets on a single parent plane that I set up to always face the HMD.
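Keeping the parent plane facing the HMD reduces to a small piece of vector math: find the yaw angle about the vertical axis that points the plane at the headset. A minimal Python sketch, assuming Unity's y-up convention (yaw of 0° faces +z) and representing positions as (x, y, z) tuples; the function name is my own:

```python
import math

def yaw_toward(plane_pos, hmd_pos):
    """Yaw in degrees (about the vertical y-axis) that turns the
    target plane's face toward the HMD, ignoring height difference.
    Uses Unity's convention: yaw 0 faces +z, positive yaw turns toward +x."""
    dx = hmd_pos[0] - plane_pos[0]
    dz = hmd_pos[2] - plane_pos[2]
    return math.degrees(math.atan2(dx, dz))
```

In the Unity scripts themselves this would be applied to the parent plane's transform, so all child targets follow automatically.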

For the enriched task, I quickly realized that since I had conveyor belts that needed to carry falling objects away, angling them to always flow away parallel to the HMD would mean that the weight of the objects would pull them toward the patient. Plus, it wouldn't make sense for these objects to convey upwards, since the room still had to obey simple gravitational principles to appear realistic. I had to take the conveyor-belt class I'd gotten from the asset store and tweak it so that, upon reset (the space bar), the pitch and roll stayed the same and only the yaw changed to match that of the HMD. I abstracted this function so that I could pass in any of the three principal orientation axes and change only the desired orientations.
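The abstracted reset can be expressed as a function that copies only the selected Euler components from the HMD's orientation and leaves the rest untouched. The real code is a C# Unity component; this is an illustrative Python sketch with my own names:

```python
def reset_orientation(current_euler, target_euler, axes=("yaw",)):
    """Return a new (pitch, yaw, roll) tuple in which only the axes
    named in `axes` are copied from target_euler; the other components
    keep their current values. Angles are in degrees."""
    order = ("pitch", "yaw", "roll")
    return tuple(
        target_euler[i] if name in axes else current_euler[i]
        for i, name in enumerate(order)
    )
```

With the default `axes=("yaw",)`, a reset realigns the conveyor belt's yaw with the HMD while gravity-relevant pitch and roll stay fixed, which is exactly the behaviour described above.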

Accomplishments that I'm proud of

I finally made a responsive VR environment! Having a little 2D game design experience gave me a good foundation for the tasks. It was very satisfying to add or change an element, run the project, and put on the HMD to see the effect in VR. I was also very happy to apply my technical skills to the healthcare sector, which I feel will benefit a lot from technology over the next few years. VR has the academic advantage that nearly every perceived auditory and visual cue can be controlled or recorded, which gives a very high degree of accuracy when analyzing patient data, both scientific and clinical.

What I learned

This project opened my eyes to working with computer graphics, which was difficult because I had no 3D modelling background, but exciting because I got to apply physics to a real-life problem, something I hadn't done practically since high school. I also built a state machine by breaking down all the stages of a single trial, from the preparation cue to the imperative cue to the feedback cue to the reset cue.
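The trial stages named above form a simple cyclic state machine. A minimal sketch (illustrative names only; the real version is a C# Unity script):

```python
# Per-trial state machine: preparation -> imperative -> feedback -> reset,
# then back to preparation for the next trial.

TRANSITIONS = {
    "preparation": "imperative",
    "imperative": "feedback",
    "feedback": "reset",
    "reset": "preparation",
}

class TrialStateMachine:
    def __init__(self):
        self.state = "preparation"

    def advance(self):
        """Move to the next stage of the trial and return it."""
        self.state = TRANSITIONS[self.state]
        return self.state
```

Logging a timestamp on every `advance()` call is what produced the game-state stream used in the analysis.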

In terms of the study, the resulting data showed no significant difference between the simple and enriched tasks across the trial subjects; however, most of them reported higher engagement in the enriched task. This suggests that by implementing a version of the enriched task in the operating room, the lab could relieve some of the stress of Parkinson's patients undergoing DBS surgery, since they would enjoy the tasks more.

What's next for NeuroVR Target Practice

I'm currently on hiatus from the lab, so my project supervisor has continued work on NeuroVR Target Practice. I will probably return in the near future to continue the project so that it can be fully adopted by Parkinson's and other neurological studies. The repo is currently private under https://github.com/SachsLab, but we hope to make this project public so that other labs can use these VR environments for similar studies.
