Inspiration

After experiencing the stress of finals week, we decided to create an interactive, open-world peaceful space that users could explore in VR while learning breathing and relaxation techniques through a guided meditative experience.

What it does

walkspace lets the user enter a calming forest environment, walk around, and interact with signs that each trigger a guided meditation voiceover. There are five signs, which lead the user through introductions to deep breathing, box breathing, the 4-7-8 stress breathing technique, deep muscle tension relaxation, and full-body relaxation. Our goal was to make the experience as user-friendly as possible.
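
As a rough sketch of the idea (not our exact script; the class name, "Player" tag, and setup are assumptions), a sign in Unity could play its voiceover when the user walks up to it:

```csharp
using UnityEngine;

// Hypothetical sketch: attach to a sign that has a trigger collider and an
// AudioSource holding the recorded voiceover for one technique (e.g. box breathing).
// The player rig is assumed to carry a collider/rigidbody and the "Player" tag.
[RequireComponent(typeof(AudioSource))]
public class MeditationSign : MonoBehaviour
{
    private AudioSource voiceover;

    private void Awake()
    {
        voiceover = GetComponent<AudioSource>();
    }

    // Fires when the player enters the sign's trigger volume.
    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player") && !voiceover.isPlaying)
        {
            voiceover.Play();
        }
    }
}
```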

How we built it

We used Unity to develop our environment with assets from the Unity Asset Store, wrote C# scripts for the controls in the environment, and used Audacity with a 150 mA Blue Yeti microphone and an Auphonix pop filter to record a series of voiceovers that lead the user through each meditation.
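
Our actual controls run through the HTC Vive, but as a simplified, hypothetical stand-in for the kind of C# movement script involved (field names and the speed value are assumptions, and this version reads Unity's standard input axes rather than the Vive controllers):

```csharp
using UnityEngine;

// Simplified, non-VR sketch of walking controls: moves the player rig
// forward/sideways based on the default "Vertical"/"Horizontal" input axes.
public class WalkController : MonoBehaviour
{
    public float walkSpeed = 2f; // metres per second (assumed value)

    private void Update()
    {
        float forward = Input.GetAxis("Vertical");
        float strafe = Input.GetAxis("Horizontal");

        // Move relative to where the player is currently facing.
        Vector3 move = transform.forward * forward + transform.right * strafe;
        transform.position += move * walkSpeed * Time.deltaTime;
    }
}
```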

Challenges we ran into

Most of us hadn't used Unity at all before this, and none of us had worked with the HTC Vive or Blue Yeti before, so working with new software and hardware was our main challenge. We also learned quickly that git merge conflicts are common when working on more than one computer at once, so managing those was another challenge.

Accomplishments that we're proud of

We're very proud of what we were able to learn and accomplish in a short period of time, and hope that our project will be able to provide stress relief to users in the future. It was also Alex and Caroline's first hackathon experience!

What we learned

We learned a lot about working with Unity and VR in general, and specifically working with the HTC Vive. Writing the scripts for and recording our audio voiceovers was also a very new experience for us.

What's next for walkspace

We'd like to expand walkspace in a variety of ways. We could create new relaxing environments and new audio scripts to give users more information about methods of stress relief. We could also leverage data from outside sensors (e.g. O2 sensors in phones or heart rate from a Fitbit) to gain insight into the user's physical state, and tailor the voiceovers available to the user accordingly.
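
This is still an idea rather than a design, but as a rough sketch of how sensor data might steer the audio (the threshold, class and field names, and data source are all hypothetical):

```csharp
using UnityEngine;

// Hypothetical sketch: pick a voiceover based on an external heart-rate reading.
// How the reading arrives (phone sensor, Fitbit, etc.) is left open here.
public class AdaptiveVoiceoverSelector : MonoBehaviour
{
    public AudioClip calmingIntroduction;   // e.g. deep breathing
    public AudioClip stressRelief;          // e.g. the 4-7-8 technique
    public int elevatedHeartRateBpm = 90;   // assumed threshold

    public AudioClip ChooseClip(int heartRateBpm)
    {
        // If the user seems stressed, surface the stress-focused exercise first.
        return heartRateBpm >= elevatedHeartRateBpm ? stressRelief : calmingIntroduction;
    }
}
```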

Built With

Unity, C#, HTC Vive, Audacity