Inspiration

The inspiration came from the theme itself: Explore. We realized that while there are many spaces on the internet where we can hang out with our friends, those spaces are static; they only change when the company behind them ships an update, and they are the same every time. With the power of AI, we wanted users to dynamically create new worlds that they could explore with their friends. We also wanted to help people struggling with depression engage and connect with others, not only through entertainment but by building meaningful relationships.

What it does

Allows the user to create their own virtual reality world by describing it in an input box, and then ChatGPT generates it! The user is free to explore the world alone or with friends.

How we built it

We used A-Frame to build the WebVR experience, Jinja templating to dynamically update the scene with generated content, and FastAPI for the backend, deployed via ngrok. WebRTC handles real-time collaboration and persistence, and three.js powers the 3D frontend. We used Glitch and CodePen to prototype a visually appealing 3-D frontend.
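The Jinja-to-A-Frame pattern described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the template string, entity fields, and the `render_scene` helper are hypothetical names, and in the real app the rendered HTML would be returned from a FastAPI route and the entity list would come from the LLM.

```python
# Minimal sketch: render a list of scene entities into A-Frame markup with Jinja.
from jinja2 import Template

# A tiny A-Frame scene template; each entity becomes an <a-entity> tag.
SCENE_TEMPLATE = Template("""
<a-scene>
  {% for e in entities %}
  <a-entity geometry="primitive: {{ e.shape }}"
            material="color: {{ e.color }}"
            position="{{ e.x }} {{ e.y }} {{ e.z }}"></a-entity>
  {% endfor %}
</a-scene>
""")

def render_scene(entities):
    """Render a list of entity dicts into an A-Frame HTML scene."""
    return SCENE_TEMPLATE.render(entities=entities)

# Hard-coded example; in production this list is produced by the LLM.
demo = [{"shape": "box", "color": "red", "x": 0, "y": 1, "z": -3}]
html = render_scene(demo)
```

Re-rendering this template whenever the entity list changes is what lets the scene update dynamically without redeploying anything.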

Challenges we ran into

Connecting the frontend with the backend, getting rate limited, learning new technologies, and assets going up and down.

We faced a lot of challenges. First and foremost, many of our team members are first-time hackers, and this was a difficult project to take on. The first obstacle was setting up WebRTC and building real-time collaboration between users. Thankfully we found a good library that handles many of the moving parts, but mixing and matching async methods still took a lot of time, as did figuring out how to make the VR space persistent. The second challenge was getting structured, high-quality output from ChatGPT. After a lot of searching and reading papers, I implemented Pydantic-based structured output, referencing this article: https://medium.com/@jxnlco/bridging-language-model-with-python-with-instructor-pydantic-and-openais-function-calling-f32fb1cdb401 . We also spent a lot of time working out how to give ChatGPT context about where things should be placed; reading the HOLODECK paper helped us get better at prompting. Working with LLMs was interesting. The third challenge was rendering a bunch of 3D models to create spaces. Our space is still slow, but it's feasible.
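The Pydantic approach from the article above can be sketched like this: define the schema you want the model to fill, then validate the JSON it returns against that schema. The model names and fields below are illustrative, not the project's exact schema, and the JSON is hard-coded where the real app would receive a ChatGPT function-calling response.

```python
# Sketch of structured LLM output validation with Pydantic (v2 API).
from typing import List
from pydantic import BaseModel

class SceneObject(BaseModel):
    name: str
    shape: str              # A-Frame primitive, e.g. "box" or "sphere"
    position: List[float]   # [x, y, z] in scene coordinates

class Scene(BaseModel):
    description: str
    objects: List[SceneObject]

# In production this JSON comes from ChatGPT; here we validate a sample.
raw = (
    '{"description": "a small park",'
    ' "objects": [{"name": "bench", "shape": "box", "position": [0, 0.5, -2]}]}'
)
scene = Scene.model_validate_json(raw)
```

Validating against a schema like this is what turns free-form LLM text into objects the scene renderer can place reliably, and a `ValidationError` gives a clear signal to retry the prompt.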

Accomplishments that we're proud of

Getting an output that works and is collaborative

What we learned

Everything

What's next for VR Connections

Persistent video streaming

Built With

