Inspiration

Event planning (e.g., conventions, house parties, wedding ceremonies) in big venues can be complex and cumbersome. With the help of Mixed Reality, we can shrink big venues down to table-top or hand-size room models, making collaborative annotation and holistic design far easier.

We designed this experience with UX and accessibility as priorities, which meant that most of our challenges centered on crafting an intuitive flow. To ensure that the project could evolve beyond hackathon scope and hold real-world value, we discussed and analyzed the core requirements and their design. By overcoming the technical hurdles, we aimed to create a solution that is market-ready and offers practical, long-term usability.

What it does

Given this problem and opportunity, we propose EventXR, an MR app that lets users annotate an entire space and navigate it without physically moving through it. It also provides visualization of different furniture and centerpieces.

EventXR provides three main features:

  • Meta space setup: lets users scan their space and generate room models,
  • Room model scaling: scales the generated room model to the preferred size (down to hand-size for easier design and manipulation, up to real-life size for immersive evaluation),
  • Location-based objects & annotation: places location-based 3D objects and annotations such as text, drawings, and walking paths.

How we built it

We built the app in Unity using Meta's Presence Platform with Passthrough, together with the Logitech MX Ink stylus. The system architecture spans this technical stack up to the application features described below.

Below we detail how to use EventXR from the user's perspective. On launching EventXR, the system asks for the user's permission to scan the space.
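
The permission flow is driven by the Meta XR Scene API. A minimal sketch of how this can be wired in Unity, assuming the scene carries an OVRSceneManager component (the manager's methods and events are from Meta's SDK; the bootstrap wrapper itself is illustrative):

```csharp
using UnityEngine;

// Illustrative bootstrap: try to load an existing scene model, and fall back
// to requesting a new scene capture, which is what surfaces the permission /
// Space Setup flow to the user.
public class SceneBootstrap : MonoBehaviour
{
    [SerializeField] private OVRSceneManager sceneManager; // Meta XR SDK component

    private void Start()
    {
        sceneManager.NoSceneModelToLoad += () => sceneManager.RequestSceneCapture();
        sceneManager.SceneModelLoadedSuccessfully += OnSceneReady;
        sceneManager.LoadSceneModel();
    }

    private void OnSceneReady()
    {
        Debug.Log("Room mesh available - ready to build the miniature room model.");
    }
}
```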

EventXR also explains the use of the left-hand Meta Quest controller and the right-hand Logitech MX Ink stylus.

After space scanning and the instructions, a small generated room model appears in mid-air, giving a holistic view of the available space. The user can push the left-hand controller joystick to scale the room model up and down.
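
A minimal sketch of the joystick-driven scaling, assuming the whole scene mesh lives under one root transform (class and field names here are illustrative, not our exact code):

```csharp
using UnityEngine;

public class RoomScaler : MonoBehaviour
{
    [SerializeField] private Transform roomRoot;      // parent of all scene mesh pieces
    [SerializeField] private float scaleSpeed = 1.5f;
    [SerializeField] private float minScale = 0.05f;  // hand-size
    [SerializeField] private float maxScale = 1f;     // real-life size

    private void Update()
    {
        // The left thumbstick's Y axis drives uniform, exponential scaling,
        // which feels even across the whole hand-size-to-life-size range.
        float input = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick,
                                   OVRInput.Controller.LTouch).y;
        float next = Mathf.Clamp(
            roomRoot.localScale.x * Mathf.Exp(input * scaleSpeed * Time.deltaTime),
            minScale, maxScale);
        roomRoot.localScale = Vector3.one * next;
    }
}
```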

The user can then scale the room back to real-life size, enable drawing mode from a stylus control panel, and pick a drawing color to leave location-based notes. In our demo, the user draws two flowers on the sofa table and then scales the room model down to hand-size for a holistic view of the space design.
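
Stroke capture can be sketched roughly as follows, assuming the stylus SDK exposes a tracked tip Transform and a pressed state (both wired in externally; every name in this snippet is illustrative). Storing points in the room's local space is what later lets strokes scale with the model:

```csharp
using UnityEngine;

// Attach to the room model's root. Strokes are parented under the root and
// store local-space points, so they scale and move with the model later.
public class StrokeDrawer : MonoBehaviour
{
    [SerializeField] private Transform stylusTip;      // tracked MX Ink tip pose
    [SerializeField] private Material strokeMaterial;  // currently selected color
    [SerializeField] private float minPointSpacing = 0.003f; // meters, at full scale

    private LineRenderer current;

    // Called every frame by our input layer with the stylus tip's pressed state.
    public void UpdateStroke(bool pressed)
    {
        if (!pressed) { current = null; return; }

        if (current == null)
        {
            var stroke = new GameObject("Stroke");
            stroke.transform.SetParent(transform, worldPositionStays: false);
            current = stroke.AddComponent<LineRenderer>();
            current.material = strokeMaterial;
            current.widthMultiplier = 0.002f;
            current.useWorldSpace = false; // points live in room-local space
            current.positionCount = 0;
        }

        Vector3 local = transform.InverseTransformPoint(stylusTip.position);
        if (current.positionCount == 0 ||
            Vector3.Distance(current.GetPosition(current.positionCount - 1), local)
                > minPointSpacing)
        {
            current.positionCount++;
            current.SetPosition(current.positionCount - 1, local);
        }
    }
}
```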


The event planning can also go the other way around: the user can leave location-based notes on the hand-size room model and then scale it back up to real-life size. This way, the user first annotates easily from a bird's-eye view on the small model, and then evaluates the annotations while walking around the real-life-size room.

Challenges we ran into

Multi-device input introduced complexity, especially splitting interactions between the controller (scene mesh scaling and contextual object manipulation) and the pen (precision writing and drawing). Working with the brand-new Logitech MX Ink was also a major challenge, from its SDK to working around some known bugs. The pen was also unstable at times and required creative solutions.
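
A compressed sketch of that split, with the stylus state hidden behind a delegate because the MX Ink SDK was still experimental (all names here are hypothetical):

```csharp
using System;
using UnityEngine;

// Hypothetical wiring: the left Touch controller owns scaling/manipulation,
// the right-hand MX Ink owns drawing.
public class DeviceSplit : MonoBehaviour
{
    public Func<bool> StylusTipPressed = () => false; // injected by the pen layer
    public event Action<float> OnScaleInput;          // consumed by the room scaler
    public event Action<bool> OnDrawInput;            // consumed by the stroke drawer

    private void Update()
    {
        float stick = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick,
                                   OVRInput.Controller.LTouch).y;
        OnScaleInput?.Invoke(stick);
        OnDrawInput?.Invoke(StylusTipPressed());
    }
}
```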

When scaling up or down, the scene object needs to return to its original orientation and position after manipulation. For the best UX and accessibility, we also had to make sure that scaling the scene object down always brought it in front of the user's eyes and kept it following them. Achieving this required careful use of Unity's transform hierarchy to track and reset position, rotation, and scale values.
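
A minimal sketch of that bookkeeping, assuming a single room root and the center-eye camera transform (names are illustrative): cache the original pose once, restore it at life size, and keep the miniature tethered in front of the head otherwise.

```csharp
using UnityEngine;

public class RoomPoseCache : MonoBehaviour
{
    [SerializeField] private Transform roomRoot;
    [SerializeField] private Transform head; // center-eye camera transform

    private Vector3 homePosition;
    private Quaternion homeRotation;

    private void Awake()
    {
        // Cache the pose once, while the virtual walls align with the real ones.
        homePosition = roomRoot.position;
        homeRotation = roomRoot.rotation;
    }

    // Real-life size: snap back so the virtual room overlaps the real room.
    public void RestoreLifeSize()
    {
        roomRoot.SetPositionAndRotation(homePosition, homeRotation);
        roomRoot.localScale = Vector3.one;
    }

    // While miniaturized, keep the model floating in front of the user's eyes.
    private void LateUpdate()
    {
        if (roomRoot.localScale.x < 0.99f)
        {
            Vector3 target = head.position + head.forward * 0.45f;
            roomRoot.position = Vector3.Lerp(roomRoot.position, target,
                                             5f * Time.deltaTime);
        }
    }
}
```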

We also encountered a challenge with the mesh material shaders, particularly with how passthrough and blend modes interacted. Our goal was to make the scene mesh gradually fade out as it expanded. However, while adjusting the material's shader, we found that the alpha channel was being ignored. Instead, we discovered that setting the RGB color to black was the key to making the mesh walls invisible, requiring us to redesign how we controlled transparency.
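
One way to express the workaround we landed on (the wiring here is illustrative, and _Color is a common shader property name, not necessarily the one our shader used): since alpha was ignored under our passthrough blend setup, "transparency" is driven by lerping the material color toward black.

```csharp
using UnityEngine;

public class WallFader : MonoBehaviour
{
    [SerializeField] private Renderer wallRenderer;
    [SerializeField] private Color visibleColor = Color.white;

    private static readonly int ColorId = Shader.PropertyToID("_Color");
    private MaterialPropertyBlock block;

    private void Awake() => block = new MaterialPropertyBlock();

    // fade = 0 -> fully visible, fade = 1 -> invisible (pure black reads as
    // invisible under our passthrough blend mode, where alpha is ignored).
    public void SetFade(float fade)
    {
        wallRenderer.GetPropertyBlock(block);
        block.SetColor(ColorId, Color.Lerp(visibleColor, Color.black, fade));
        wallRenderer.SetPropertyBlock(block);
    }
}
```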

Scaling the annotations and the furniture correctly with the space was also a challenge, in particular making sure that annotations made on the small model scaled up cleanly when the space was expanded.
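
The natural fix, sketched below under assumed names, is to parent every annotation and furniture item under the room root, so a single localScale change rescales everything coherently in place:

```csharp
using UnityEngine;

public static class RoomHierarchy
{
    // Keep the item's world pose while re-expressing it relative to the room,
    // so a single change to roomRoot.localScale rescales the item in place.
    public static void Attach(Transform item, Transform roomRoot)
    {
        item.SetParent(roomRoot, worldPositionStays: true);
    }
}
```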

Accomplishments that we're proud of

  • Even though we all just met, we were able to get a good sense of each other and distribute workload by playing to each team member’s strengths.

  • We overcame all kinds of technical hurdles, such as integrating complex input systems, handling shader issues, and refining interaction mechanics, all while prioritizing user experience and accessibility.

  • We managed to achieve an impressive amount in just a weekend. We had a lot of fun making the video for our product and going over the scenarios.

  • To test and gather material, we moved the furniture at the Epicenter entrance more times than we can count.

  • Built a product that is not only robust and flexible but also scalable beyond the hackathon, with potential for real-world use and many cool features left to add.

  • Watching the product in action and discovering how fun, engaging, and genuinely useful it is for users. We had exceptional teamwork, maintaining clear communication and collaboration throughout the process.

What we learned

  • The importance of clear communication and task distribution when working under pressure. How crucial it is to prioritize user experience and accessibility early in the design process to guide development decisions.

  • Gained deeper knowledge of integrating multiple input systems, including the complexities of handling brand-new devices (e.g., the Logitech MX Ink) that are still experimental.

  • How to efficiently troubleshoot and reach out to the mentors for help.

  • Adding haptics as an effective feedback mechanism (a short sketch follows this list).

  • Learned how even in short development cycles, focusing on building a robust, flexible foundation ensures the project can scale beyond the initial scope.

  • Rapid prototyping

  • The value of teamwork and collaboration, recognizing how each member’s unique skills contribute to the success of the project.

  • The importance of sleeping and eating.
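
On the haptics point above: a minimal sketch of the kind of feedback we mean, using the Meta SDK's controller vibration call (strength and duration values are illustrative), e.g. a short buzz when the room model snaps back to life size.

```csharp
using System.Collections;
using UnityEngine;

public class HapticPulse : MonoBehaviour
{
    // Short buzz on the left controller, e.g. when the room snaps to life size.
    public void Pulse(float seconds = 0.1f) => StartCoroutine(Run(seconds));

    private IEnumerator Run(float seconds)
    {
        OVRInput.SetControllerVibration(0.5f, 0.8f, OVRInput.Controller.LTouch);
        yield return new WaitForSeconds(seconds);
        OVRInput.SetControllerVibration(0f, 0f, OVRInput.Controller.LTouch);
    }
}
```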

What's next for EventXR

  • In-depth market research will help us explore how EventXR can solve real-world problems in industries like architectural design, event planning, and interior design. These fields could benefit from the dynamic spatial-layout adjustments that EventXR offers.

  • A future direction could involve adding multiplayer functionality, allowing multiple users to collaborate and interact within the same space, ideal for team-based design projects or remote planning.

  • We see potential in adding AI-driven procedural mesh generation, which could let users automate certain design tasks, speeding up room setup and object placement for event or spatial planning via voice commands, specific drawing patterns, or writing shorthands.

  • Refinement of the user interface and interactions could make the experience even more intuitive and smooth, improving accessibility for users across different skill levels.

  • Integrating photogrammetric scanning to capture the textures of the real-world environment, creating an accurate digital twin of the space.

  • Extrapolating from building blueprints for rapid prototyping.

  • There's potential for expansion into the gaming industry. One exciting idea suggested by a mentor was to transform this concept into something like The Sims. Imagine virtual characters living in your space—where you could interact with them on a small scale or expand them to your own size, watching them navigate and engage in activities within the same environment you inhabit.

Built With

  • logitech
  • meta
  • meta-presence
  • procedural-ui-image
  • quest-3
  • sloyd
  • unity