Based on our research, we found that 5.3 million Americans suffer from a form of social anxiety, 3.2 million Americans fear crowded or public places, and 74% of people suffer from speech-related anxiety (National Institute of Mental Health, 2016). At the same time, 70% of Americans agree that giving presentations is an important part of their work. We came across a study showing the effects of psychological preparation on the physical symptoms of anxiety, and chose to build our project around that method of anxiety reduction (Mott, 1999).
What it does
As a result, we chose to design a program that lets users go to the location where they will be presenting, or one similar to it, and practice their presentation with a series of visual aids. Many popular aids, such as notes written on index cards or displayed on a portable device, often become a crutch: the presenter stares at the notes to avoid looking at the audience. In contrast, we chose to develop for the HoloLens, which helps the user grow accustomed to looking out into the audience while presenting.
How we built it
We created a HoloLens app using Unity and Visual Basic. The 3D models were created with Autodesk software.
Challenges we ran into
We ran into a number of challenges over the course of the event. One of our team members was unable to participate because of a delay in the approval of his travel visa. We also had to contend with slow download speeds and compatibility problems caused by unsupported software versions.
Accomplishments that we're proud of
We managed to create the 3D models and add basic functionality to the application in less than 24 hours. All of our team members will leave this event with newfound knowledge of Unity and Visual Basic, as well as a complete HoloLens application.
What we learned
We learned how to create a fully-functioning HoloLens application in Unity and Visual Basic.
What's next for HoloPitch
We would like to add a speech-analysis feature: once a presentation is completed, its audio would be run through Azure's Cognitive Services to convert the speech into text, allowing us to analyze the content of the presentation.