Inspiration
My motivation for starting the Pan-Optics Project stems from the theme of our current Hackathon: "Make It Fun". We often take our vision for granted: the average human enjoys a 220° horizontal and 135° vertical Field of View. Pan-Optics explores what happens when we break those biological constraints. My inspiration also branches from a drone piloting experiment I participated in, where participants were given a VR headset linked to a drone and used it to pilot both the drone and a small ground vehicle to perform tasks.
It's a remarkable gift of our biology to home in on fine details with the Binocular Vision at the center of our sight while also capturing a sense of our surroundings with our Peripheral Vision. I've set out to remind everyone of just how valuable that is, and to explore what else may be possible in the realm of our perception.
What it does
Pan-Optics manipulates the user's peripheral vision to simulate different positions at which eyes can be located: starting with varied spacing between human eyes and ending with some wacky configurations like horse (Equine) or frog (Anuran) peripherals.
Essentially:
- Standard Human: the baseline for comparison.
- Equine (Horse): lateral eye placement with a massive horizontal FOV but a central blind spot.
- Anuran (Frog): upward-tilted, high-periphery sensing.
The system was meant to do this by rendering one camera per eye onto a quad plane set at the face of the VR headset as a mask. Each pair of quads renders its cameras' respective views, making it possible to simulate any Field of View corresponding to the customized eye placement of any animal.
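The approach above can be sketched in GDScript. This is a minimal illustration, not the project's actual code; the node names (`EyeViewport`, `MaskQuad`, the `XROrigin3D/XRCamera3D` path) are assumptions about how such a scene might be laid out in Godot 4.x:

```gdscript
# Hypothetical sketch: render a per-eye SubViewport's texture onto a quad
# parented to the XR camera, so the quad masks that eye's view.
extends Node3D

# The SubViewport holds a Camera3D placed at the simulated eye position.
@onready var eye_viewport: SubViewport = $EyeViewport
# The quad sits directly in front of the headset camera.
@onready var mask_quad: MeshInstance3D = $XROrigin3D/XRCamera3D/MaskQuad

func _ready() -> void:
	var mat := StandardMaterial3D.new()
	# Feed the simulated eye's render target into the quad's albedo.
	mat.albedo_texture = eye_viewport.get_texture()
	# Unshaded, so the projected view is not affected by scene lighting.
	mat.shading_mode = BaseMaterial3D.SHADING_MODE_UNSHADED
	mask_quad.mesh = QuadMesh.new()
	mask_quad.material_override = mat
	# Place the quad a few centimeters in front of the lens.
	mask_quad.position = Vector3(0.0, 0.0, -0.05)
```

One such viewport/quad pair per eye, with the SubViewport camera repositioned per species preset, would give the interchangeable eye placements described above.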
How I built it
- Engine: Godot 4.x (OpenXR)
- Graphics: GLSL/Godot shaders
- Hardware: VR headset + Godot XR Tools
- Concepts: binocular vs. peripheral vision modeling

Implemented a custom VR rig based on OpenXR standards.
Challenges I ran into
During the planning phase, I attempted to use Unity to develop my software, but slowly learned that Unity's consumer-safety features (made to prevent users from getting headaches or motion sickness via misaligned lenses) were deliberately barring me from performing my experiment.
To work around this, I focused my efforts on learning and using Godot for VR. This was my first time actually delving into the engine, and merely learning it was an enormous undertaking: VR requires familiarity with as many of Godot's 3D and 2D systems as possible, and its node-based architecture is a far cry from Unity's object system. I'm proud I could make a solid VR scene in the time I had.
Accomplishments that I'm proud of
I'm proud of the sheer amount I've come to learn in 36 hours. I went from picking up a new game engine all the way to building a VR application with it, a huge undertaking to manage in such a short time.
However, I'm most proud that I came here to work on it on my own. It was a challenge I needed, and I feel a sense of satisfaction for how far I've come in this solo project.
What I learned
I learned to use a new game engine, one where I can quickly and freely whip up and mess around with new ideas or concepts I wish to experiment with. This empowers me greatly to engage in the process of creating awesome things!
I even learned a bit more about the biology of how our eyes work, and explored how we navigate the world beyond our eyes alone, accounting for the other sensory factors that create our sense of spatial awareness.
What's next for Pan-Optics
Getting It To Work - despite all my troubleshooting, I could not get my display to properly capture the experience within the bounds of the Hackathon. The viewports exist, but are somehow unrenderable to the headset. My next immediate steps are finding a way to get the viewports to render for the main camera and masking that over the visual sector of the headset. There seems to be a lot of Vulkan nuance involved in getting an object that close to the headset to behave, so I will spend my time exploring how to manage this.
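Two settings worth checking here (assumptions about the likely cause, not a confirmed fix) are the SubViewport's update mode, since by default it may not re-render every frame, and depth testing on the mask material, since geometry sitting near the camera's near plane can be clipped. A sketch, reusing the hypothetical `EyeViewport` and `MaskQuad` node names:

```gdscript
# Hypothetical troubleshooting sketch for viewports that never reach the headset.
func _ready() -> void:
	var eye_viewport: SubViewport = $EyeViewport
	# Force the viewport to render every frame, not just once.
	eye_viewport.render_target_update_mode = SubViewport.UPDATE_ALWAYS

	var mat: BaseMaterial3D = $XROrigin3D/XRCamera3D/MaskQuad.material_override
	# Draw the quad on top regardless of depth, so near-plane
	# clipping or occluding geometry cannot hide it.
	mat.no_depth_test = true
	mat.render_priority = 100
```

If the quad still fails to appear in the headset, the remaining suspects would be per-view rendering in the OpenXR compositor rather than the scene setup itself.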
Research! - I can find very little like this idea online. If entire game engines are restrictive toward alternative optics, I'm sure there is plenty of research to be done on how we can manipulate our sight to gain an advantage in various situations.
Drone Piloting - I see late-stage applications for this idea in drone piloting, or really in piloting any unmanned vehicle or robotic tool imaginable. Pilots would have to get used to it over time, but I can see people adopting alternative vision configurations to optimize spatial awareness when operating complex machinery.
Built With
- godot
- openxr
- vr