One challenge frequently attributed to the prominence of technology in our lives is a deepening health crisis. Gadgets are blamed for all sorts of problems, from loneliness and depression to incidents of public violence. We wanted to provide a concrete use case for one of the most powerful tech experiences, VR, that would help establish the technology as a force for good.
Our app provides two mental health services inspired by established psychological treatments. In one, we built a scene to help people overcome arachnophobia through an encounter modeled on exposure therapy. In the other, we take advantage of VR's privacy to create a space for meditation. We built for the Oculus Quest headset in light of its popularity and accessibility.
Our two-person team divided the workflow as follows: Austin built the models in Blender while Ines assembled them in Unity and coded the app's interactive features. We relied only minimally on the Asset Store, using it just for a couch and chair model, which still saved us significant time.
We aspired to create an application that felt accessible to all users, and we drew on a variety of inspirations to accomplish this. A warm, cozy cabin seemed like a safe home screen that users would enjoy spending time in. Our meditation scene was inspired by the serenity of Morikami Park in Fort Lauderdale, which we sought to replicate in virtual reality. Our design for the phobia treatment was strongly influenced by earlier conversations about VR in psychological services with Ryley Mancine, a psychiatric-science researcher at Michigan State University. His robust knowledge of psychiatry was invaluable in shaping how we thought about this project.
The main challenge we faced was what we found to be the Oculus Quest's limited feature set. We had initially imagined incorporating a multi-party aspect into the app, but our inability to use Google cloud services with the Quest made that significantly harder, and in the end we did not manage to include it, which was a real disappointment. We also found that the Quest shipped with buggy template scenes, which made it difficult to learn how to take full advantage of its functionality. This, coupled with the lack of documentation, meant a disproportionately large time investment to learn relatively simple features, which hurt our workflow.
Having said that, we built a VR app on an emerging platform in only two days. We overcame the compatibility issues, the buggy templates, and the lack of documentation to create something that could really help somebody. To our knowledge, we are among the first teams developing for the med-tech space in virtual reality. And we did it with a team of two.
During these two days, we learned a lot about designing for VR. We learned how the Quest interacts with Unity and Blender, how to build VR scripts from the ground up, and how to optimize model design for the Quest's constraints. We also learned to better manage our workflow to maximize output in this extremely time-sensitive environment. Lastly, we learned more about pitch construction and marketing technology. HackHarvard has been an extremely positive experience in this respect, and we feel invigorated by our time here.
As we move forward with this technology, we plan to go back to the drawing board and find workarounds for Quest development that will let us incorporate more robust features into VR apps. We will certainly further optimize our models (once we can spend weeks instead of hours on them), and we will find better ways to translate established therapeutic practices into virtual use cases.