Inspiration
The earliest inspiration for the project was a news article one of us read about firefighters using virtual reality to practice for their jobs. From there, we thought of other ways virtual reality could be used to rehearse real-world situations. After attending the OpenBCI presentation in the Nvidia auditorium, we developed the idea of using virtual reality to practice for stressful and uncomfortable situations. OpenBCI's technology was key to inspiring and enabling the idea because it tracks additional biometric metrics that give a more accurate picture of user stress, a signal that drives our dynamic difficulty.
What it does
Our project uses virtual reality, artificial intelligence, and biometric sensing to create dynamically adjusted simulations of difficult real-world situations. Virtual reality immerses the user in a realistic virtual world where they can experience these situations safely. Artificial intelligence then analyzes the user's speech and tracks factors such as whether they are interrupting the AI character, and those factors feed into dynamically adjusting the difficulty of the situation. Artificial intelligence is also essential to the NPCs, which rely on it to respond to the player and take in various metrics to become more or less friendly, or "difficult," toward the player. Among the metrics used to adjust the NPCs is biometric information from OpenBCI's Galea headset.
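As a rough illustration of the dynamic-difficulty idea, here is a minimal C# sketch. The class name, field names, and weights are hypothetical placeholders rather than our production code: a normalized stress score derived from the biometrics and a count of user interruptions are blended into a single difficulty value that the NPC logic can read.

```csharp
using UnityEngine;

// Hypothetical sketch of the dynamic-difficulty idea. The weights below are
// illustrative placeholders, not tuned values from our project.
public class DynamicDifficulty : MonoBehaviour
{
    [Range(0f, 1f)] public float difficulty = 0.5f;

    // stressScore: 0 (calm) to 1 (stressed), derived from biometric signals.
    // interruptions: how often the user has talked over the NPC this session.
    public void UpdateDifficulty(float stressScore, int interruptions)
    {
        // One possible policy: back off when the user is stressed, push a
        // little harder when they keep interrupting.
        float target = Mathf.Clamp01(0.5f - 0.4f * stressScore + 0.1f * interruptions);

        // Smooth toward the target so the NPC does not swing abruptly
        // between friendly and hostile.
        difficulty = Mathf.Lerp(difficulty, target, 0.1f);
    }
}
```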
How we built it
To build Synaptically, we used an OpenBCI Galea headset with a Varjo Aero attached. This let us take in biometric information and use it in a virtual reality world we created in the Unity engine. In Unity we built scenes that matched real-world environments and created NPC models for users to interact with. We then used several AI models to make the simulation work effectively: ElevenLabs for the NPCs' speech, and Groq with Ollama to take in the user's input and respond to it.
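To give a feel for the NPC dialogue loop, here is a hedged sketch of how a Unity script could ask a locally running Ollama server for an NPC reply. The endpoint and request shape follow Ollama's public REST API, but the class names, prompt, and model name are illustrative assumptions, not our exact implementation.

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Hypothetical sketch: requests an NPC reply from a local Ollama server.
public class NpcDialogue : MonoBehaviour
{
    [System.Serializable]
    class OllamaRequest { public string model; public string prompt; public bool stream; }

    [System.Serializable]
    class OllamaResponse { public string response; }

    public IEnumerator GetNpcReply(string userUtterance, System.Action<string> onReply)
    {
        var body = new OllamaRequest
        {
            model = "llama3",  // assumed model name
            prompt = "You are a job interviewer in a practice simulation. Reply to: " + userUtterance,
            stream = false
        };
        string json = JsonUtility.ToJson(body);

        // Ollama listens on localhost:11434 by default.
        using (var req = new UnityWebRequest("http://localhost:11434/api/generate", "POST"))
        {
            req.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(json));
            req.downloadHandler = new DownloadHandlerBuffer();
            req.SetRequestHeader("Content-Type", "application/json");
            yield return req.SendWebRequest();

            if (req.result == UnityWebRequest.Result.Success)
            {
                var reply = JsonUtility.FromJson<OllamaResponse>(req.downloadHandler.text);
                onReply(reply.response);  // hand the text off, e.g. to ElevenLabs TTS
            }
        }
    }
}
```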
Challenges we ran into
One of the biggest challenges we faced was dealing with Unity itself. The project kept crashing while we were testing, so we had to spend a lot of time debugging and figuring out exactly what was causing the issues. On top of that, setting up the Varjo headset in Unity came with its own set of difficulties. There was barely any documentation or clear guidance on how to integrate the headset properly, so a lot of the setup process involved trial and error. Even though it took time, working through these problems taught us a lot about troubleshooting and handling unfamiliar hardware in a complex development environment.
Accomplishments that we're proud of
We are extremely proud that we were able to work with a new and highly sophisticated piece of hardware, set it up in Unity, and use it to collect real-time data about the user. Being able to use that data to make real-time decisions in the user experience felt like a big milestone for us. We’re also proud that we managed to get the Varjo headset fully set up despite the limited documentation and the lack of clear articles about developing on the platform.
What we learned
We learned a lot about working with new hardware and integrating it into a functioning software system. Setting up the Varjo headset together with the OpenBCI Galea was a real challenge. It took a lot of troubleshooting and patience, but it taught us how to work with complex device pipelines. We also had to learn how to work with equipment that isn't well documented, especially the Varjo headset. A lot of the setup process involved experimenting, testing different approaches, and piecing together information from scattered sources, which helped us get better at solving problems independently. Once everything was connected, we brought it all into Unity, which allowed us to combine the hardware and software into an actual working application. We ran into extra difficulties when we added graphs of the session data at the end of each run. Creating that UI inside the Varjo environment was confusing because the documentation for building and displaying UI elements in XR was limited. Getting the graphs to appear correctly became its own learning experience, but it ultimately helped us understand the full pipeline from hardware setup all the way to user-facing features.
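One simple way to draw such a graph in world space is with a Unity LineRenderer on a panel in front of the user. The sketch below shows the general shape of that approach; the class name, field names, and dimensions are illustrative assumptions, not our exact code.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: plots a post-session stress curve on a world-space
// panel using a LineRenderer, one point per recorded sample.
[RequireComponent(typeof(LineRenderer))]
public class StressGraph : MonoBehaviour
{
    [SerializeField] float width = 0.5f;   // panel width in meters
    [SerializeField] float height = 0.3f;  // panel height in meters

    public void Plot(List<float> stressSamples)
    {
        if (stressSamples.Count < 2) return;  // nothing meaningful to draw

        var line = GetComponent<LineRenderer>();
        line.useWorldSpace = false;  // positions are local to the panel object
        line.positionCount = stressSamples.Count;

        // Normalize so the largest sample reaches the top of the panel.
        float max = Mathf.Max(0.0001f, Mathf.Max(stressSamples.ToArray()));
        for (int i = 0; i < stressSamples.Count; i++)
        {
            float x = width * i / (stressSamples.Count - 1);
            float y = height * stressSamples[i] / max;
            line.SetPosition(i, new Vector3(x, y, 0f));
        }
    }
}
```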
What's next for Synaptically
In the future, we would like to add an interview coach to help people better prepare for behavioral interviews. We would also like to add a coach mode that allows coaches to get crucial data from their clients to better assist them in their communication endeavors. Finally, as a fun bonus, a boyfriend update, similar to our girlfriend feature, would be a nice touch and would reflect our commitment to diversity.