Inspiration

After receiving the prompt, we immediately agreed that VR would let us create a calming, visually stunning experience with a real impact on the user's wellbeing. As we thought about how to differentiate ourselves from existing VR apps, we realized that most news outlets churn out polarizing information that promotes angst, uncertainty, and feelings of inadequacy. We wanted to turn this around with a Good-News VR experience: one where users can be soothed and relaxed while learning about the positive side of the world. Ultimately, our vision was to flip the script on how news is consumed by providing an experience that is visually stunning yet calming.

What it does

The app alternates between two main components. The first is what we call the loading screen/precursor, inspired by well-established relaxation practices such as box breathing and positive affirmations. During this phase, an AI voiceover (thanks to ElevenLabs) guides the user over soft, ambient music, giving them time to settle in and relax. The app then switches to the second component, which displays a good-news headline with 2-3 extra lines of context underneath. The voiceover for this part is generated on the fly, since the AI-sourced content is dynamic. We wanted to ensure that the user never feels overwhelmed while still learning about positive developments around the world. Throughout both components, the app is designed to sustain a relaxing, tranquil environment.
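To make that concrete, the headline payload the headset receives looks roughly like this (the field names and story are illustrative, not our exact schema):

```json
{
  "headline": "Ocean cleanup crew removes record haul of plastic",
  "summary": "Volunteers pulled tons of debris from the Pacific this month. The team credits new sorting technology. Marine life in the area is already rebounding.",
  "audio_url": "/tts?text=..."
}
```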

How we built it

For rendering to the VR headset, we chose the Unity game engine. The backend is a Python FastAPI service that connects to a Supabase cloud database, the Gemini API, and ElevenLabs for TTS (text-to-speech). On the backend, we first created an API endpoint that fetches news snippets from the database, feeds them through a detailed, carefully engineered prompt to the Gemini Flash model, and returns the result as JSON; the Unity client issues this GET request and transforms the response for display (a sketch of this endpoint follows below). For ElevenLabs/TTS we created a second endpoint: a dynamic text-to-speech pipeline tuned for a calming delivery. To work within the TTS API's constraints, we built an audio caching layer, which improved both responsiveness and usability (also sketched below). With the endpoints in place, we got our hands dirty in the Unity engine and built the virtual environment. We chose a fast, lightweight render pipeline to keep the experience accessible on lower-end headsets, and relied primarily on custom shaders for the visuals. The result is a unique VR application with the potential to leave a lasting impression on the user.
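Here is a minimal sketch of the news endpoint. The table name, prompt wording, and model id are stand-ins, not our exact code:

```python
import json
import os

import google.generativeai as genai
from fastapi import FastAPI
from supabase import create_client

app = FastAPI()
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])
genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

@app.get("/news")
def get_news():
    # Pull a handful of raw good-news snippets from the cloud database.
    rows = supabase.table("news_snippets").select("*").limit(5).execute().data

    # Context engineering: wrap the snippets in a detailed prompt so the
    # model rewrites them in a calm tone and answers in strict JSON.
    prompt = (
        "Rewrite each snippet below as a short, soothing headline plus "
        "2-3 supporting lines. Respond with a JSON array of objects "
        'shaped like {"headline": "...", "summary": "..."}.\n\n'
        + json.dumps(rows)
    )
    response = model.generate_content(
        prompt,
        generation_config={"response_mime_type": "application/json"},
    )
    # FastAPI serializes the parsed list back to JSON for the Unity client.
    return json.loads(response.text)
```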
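The TTS endpoint with its cache looked roughly like the sketch below: we key cached audio on a hash of the exact text, so repeated lines (like the breathing script) never cost a second API call. The voice and model ids here are placeholders:

```python
import hashlib
import os

import requests
from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()
CACHE_DIR = "tts_cache"
os.makedirs(CACHE_DIR, exist_ok=True)
VOICE_ID = os.environ.get("ELEVENLABS_VOICE_ID", "some-calm-voice")

@app.get("/tts")
def tts(text: str):
    # Cache key: hash of the exact text, so identical lines are only
    # synthesized once.
    key = hashlib.sha256(text.encode()).hexdigest()
    path = os.path.join(CACHE_DIR, f"{key}.mp3")

    if not os.path.exists(path):
        # Cache miss: synthesize the line with ElevenLabs and store the mp3.
        resp = requests.post(
            f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
            headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
            json={"text": text, "model_id": "eleven_multilingual_v2"},
        )
        resp.raise_for_status()
        with open(path, "wb") as f:
            f.write(resp.content)

    return FileResponse(path, media_type="audio/mpeg")
```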

Challenges we ran into

The development process certainly wasn't seamless! There was quite a bit of debugging and research into how to use these different frameworks and tools together. The first major problem we ran into was querying the cloud database and ensuring that the Gemini API returned correctly formatted JSON. After debugging and verifying that all the API keys and configurations were set up correctly, the JSON retrieval pipeline became much more robust. On the client side, we experienced some difficulties integrating Unity with GitHub, as well as efficiently previewing builds on the headset.
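One kind of guard that makes such a pipeline robust is validating the model's output against a small schema before handing it to the client, instead of trusting the raw string. A sketch, using pydantic for illustration (not necessarily what we shipped):

```python
import json

from pydantic import BaseModel, ValidationError

class NewsItem(BaseModel):
    headline: str
    summary: str

def parse_news(raw: str) -> list[NewsItem]:
    # Raises if Gemini's response is not the JSON shape we asked for,
    # so bad payloads fail loudly on the server instead of in the headset.
    try:
        return [NewsItem(**item) for item in json.loads(raw)]
    except (json.JSONDecodeError, ValidationError) as err:
        raise ValueError(f"Malformed model output: {err}")
```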

Accomplishments that we're proud of

Throughout this hackathon, we faced a lot of adversity in design decisions, debugging, and making sure the configuration across all the frameworks and tools was set up correctly. As I am typing this, it is currently 7:39 am and we haven't slept yet. We have put in plenty of effort to ensure that we created a deliverable that others could be just as excited about as we are. We also take pride in the innovative, creative, outside-the-box shenanigans we pulled to make this product work. For example, we initially worried about high latency from all the API requests, but then realized our experience was meant to take things slow, so the latency could be justified: that insight was the genesis of the initial meditation phase.

What we learned

I think we all got a glimpse of what sacrifice is. We have lost sleep, prioritized discomfort over comfort, and stared at our screens for extended periods. Team member Ben consumed 450mg of caffeine, despite never having had an energy drink or coffee before. In terms of health, is this good for us? Probably not. But the reward of taking an ambitious vision and forging it into something tangible is an experience like no other, and it made the sacrifice worth it. We have truly learned that taking that leap and being ambitious requires sacrifice, whether you like it or not. On the technical side, we learned how to debug more efficiently, pick up new tools and frameworks, and collaborate with each other on Git/GitHub. Overall, we have walked away from this hackathon not only with improved technical skills, but also with improved interpersonal skills that will be priceless moving forward.

What's next for Good-News-VR

We believe that this application, in a more complete form, could provide life-changing benefits to many users. In time, we intend to continue developing it and potentially release it as a marketable app on the Meta Store. Regardless of what happens, we want to keep pushing our agenda of spreading positivity and continuous learning.

Built With

elevenlabs, fastapi, gemini, python, supabase, unity