Link To Slides:

link

Elevator Pitch:

Catch Me If You Can is a colocated multiplayer app that reimagines hide and seek by plunging your world into complete darkness, changing how you navigate your physical space.

Inspiration

We wanted to build a game that anyone could understand, pick up, and play from the get-go. At the same time, we wanted to deviate from traditional hide-and-seek gameplay by using the Meta Presence Platform to make the experience feel more immersive, playful, and emergent.

What it does

Catch Me If You Can brings elements of hide and seek and freeze tag into mixed reality. Players start in the lobby, where they can host or join a session. The host uses Space Setup to define the game space and mark out obstacles in the environment. The game requires two or more players to start. When the game begins, a random seeker is selected; after a short countdown, the play space is enveloped in an eerie fog and the game timer starts. Equipped with a torch powered by hand tracking, the seeker can only glimpse the shadows of hiders until the torchlight illuminates the nearby environment, adding a thrilling challenge. Hiders have the unique ability to revive teammates, deepening the strategy and risk. Suspense builds as hiders work together to avoid the seeker while the clock ticks down, and the seeker tries to eliminate all the hiders before time runs out.

How we built it

Catch Me If You Can is built with Unity for Meta Quest 3, using many features of the Meta Presence Platform to create an immersive mixed reality experience. One of the main SDKs is the Shared Spatial Anchors API, which we use heavily for colocation so that all four players are synced to the same room and the same orientation. Another heavily featured API is the Depth API. It is a newer feature usually used to simulate occlusion of virtual content in passthrough, but we instead use its generated internal depth texture to drive a murky fog effect rendered on fog cards, limiting every player's field of view. The Audio SDK gives players spatial cues, since the fog limits vision. We use the Scene API to set up a space in the lobby that is shared with the other players for visuals and gameplay. Other Presence Platform SDKs we used include the MR Utility Kit (MRUK), the XR Interaction SDK, Hand Tracking, Passthrough, and the Meta XR Platform SDK.
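
As a rough illustration of the fog-card idea, the sketch below shows how a script might feed the Depth API's depth texture into a custom fog material each frame. The `_EnvironmentDepthTexture` global name, the material property names, and the `FogCardController` class are assumptions for illustration, not the exact code from our project.

```csharp
using UnityEngine;

// Illustrative sketch: a quad ("fog card") in front of the camera uses a
// custom shader that compares environment depth against fog distance.
// This script just keeps the fog material's inputs up to date.
public class FogCardController : MonoBehaviour
{
    [SerializeField] Material fogMaterial;   // material using the custom fog shader
    [SerializeField] float fogStart = 0.5f;  // metres before the fog begins
    [SerializeField] float fogDensity = 0.8f;

    void LateUpdate()
    {
        // Assumption: the Depth API publishes its depth texture as a global
        // shader texture; the exact name varies by SDK version.
        Texture depthTex = Shader.GetGlobalTexture("_EnvironmentDepthTexture");
        if (depthTex == null) return;

        fogMaterial.SetTexture("_EnvironmentDepth", depthTex);
        fogMaterial.SetFloat("_FogStart", fogStart);
        fogMaterial.SetFloat("_FogDensity", fogDensity);
    }
}
```

The fog shader can then compute a simple depth-based fog factor from the sampled texture, which is what limits how far each player can see.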

Challenges we ran into

During the hackathon, we ran into a significant number of challenges and setbacks that proved more difficult than we expected. For instance, even though we knew integrating colocation would be hard, we hit an unexpected bug that prevented colocation from working alongside the MR Utility Kit, and at one point we considered changing our concept entirely and dropping colocation.

On top of colocation, this was also our team's first attempt at networking. After working through a lot of sample code and documentation on day 1, we realized early on that networking carries a lot of overhead to implement, debug, and test. Because our experience requires three of us to test it properly, this bottleneck soaked up a lot of our time.

Another significant challenge was figuring out how to incorporate the Depth API into our custom fog shader with passthrough. Even though we got a basic prototype working early on, we struggled to get the right look for the fog and to keep it within our frame budget; the original version ran at only 15 to 30 FPS.

Finally, even though our game worked well in builds, we discovered a noticeable passthrough flickering bug while recording, which required a couple of graphics workarounds and a few hours of debugging the recording pipeline.

Accomplishments that we're proud of

We are really proud of how the team persevered through the many technical issues we hit during the hackathon.

After two days of work on colocation and some help from Meta engineers, we got colocation working with four players synced to the same game space. We also fixed many multiplayer bugs around syncing game events, lobby management, and more.

We definitely want to highlight our use of the Depth API here. On the visual side, instead of using it as an occluder, as the Depth API examples do, we used it to drive in-game depth-based fog rendered on a fog card. It took a while to work out how the Depth API package uses its HLSL shaders and matrices to generate the depth texture, and then to repurpose that texture for our fog shader. On the performance side, we optimized the depth fog shader with techniques like lowering the render scale and enabling different Vulkan properties, bringing the frame rate up to a steady 72 FPS. As for the recording bug, the torches only flickered in gameplay footage recorded on the headset; using a combination of blending and render queue changes, we stopped the flicker while still blending the torch into the scene.
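
For the performance and flicker fixes, here is a minimal sketch of the kind of tweaks involved, assuming a URP project. The asset reference, the 0.8 render scale, and the queue offset are illustrative values, not the exact settings we shipped.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Minimal sketch of two of the knobs mentioned above (URP assumed).
public class MixedRealityPerfTweaks : MonoBehaviour
{
    [SerializeField] UniversalRenderPipelineAsset urpAsset;
    [SerializeField] Material torchMaterial;

    void Start()
    {
        // Lower the eye-buffer resolution so the depth-fog pass fits the frame budget.
        if (urpAsset != null)
            urpAsset.renderScale = 0.8f;

        // Changing the render queue controls when the torch is composited
        // relative to the fog card, which helps avoid sorting artifacts
        // like the flicker we saw in recordings.
        if (torchMaterial != null)
            torchMaterial.renderQueue = (int)RenderQueue.Transparent + 10;
    }
}
```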

What we learned

We learned that it is really hard to implement multiplayer and colocation while keeping the performance and experience stable for everyone. However, Meta's SDKs unlocked a lot of possibilities and let us create a game that wouldn't previously have been possible in this form. The building blocks, the project setup tool, and the many samples Meta has created for its SDKs helped us jump directly into implementation and gameplay.

What's next for Catch Me If You Can

Next for the team is developing more gameplay systems: a tutorial, more dynamic shaders and environments, and more emergent gameplay with additional interactable objects throughout the game. We want to keep improving these so players have a different experience every time they load up the game with their friends!

Built With

  • presenceplatform
  • quest3
  • unity