Inspiration

Domino toppling is an art of patience and fleeting beauty. We all remember that childhood feeling: holding our breath, placing the last tile on the living room rug, knowing that hours of creation would vanish in seconds of kinetic joy. We wanted to recapture that pure, tactile magic—where the digital meets the physical—allowing us to build the impossible structures of our dreams without boundaries or cleanup. It’s a love letter to the "Magic Moment" of creation and destruction.

What it does

Players pinch their fingers to "draw" lines of dominoes directly onto their real-world floor, and can add gimmicks like slopes and stairs. The "Magic Moment" happens when they physically reach out and topple the first domino with a finger, triggering a satisfying, physics-based chain reaction that interacts with the real environment.

How we built it

Built in Unity for Meta Quest.

  • Interaction SDK: We leveraged Hand Gestures to create a seamless control scheme. Instead of complex menus, users rely on natural gestures for UI switching and Domino Spawning (Pinch), making the interface feel invisible.
  • Passthrough Camera API: To achieve true immersion, we used this API to analyze the real-time brightness of the user's room. We fed this data into our virtual lighting engine, ensuring that digital shadows match the intensity and direction of the physical environment.
  • Scene Understanding: Implemented to detect floor height, ensuring dominoes sit perfectly on real surfaces.
  • Passthrough API: The entire scene is rendered over the live passthrough feed, grounding the virtual dominoes in the user's real room.
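
The brightness-matching step above can be sketched roughly as follows. This is a minimal illustration, not our exact implementation: it assumes a passthrough frame is already available as a readable Texture2D (how you obtain one depends on the Passthrough Camera API version you target), and the component name, `sampleStride`, and the intensity range are all illustrative.

```csharp
using UnityEngine;

// Sketch: estimate room brightness from a passthrough camera frame and
// drive the virtual light so domino shadows roughly match the real room.
public class PassthroughLightMatcher : MonoBehaviour
{
    public Light virtualSun;          // directional light casting the domino shadows
    public int sampleStride = 16;     // sample every Nth pixel to keep this cheap
    public float minIntensity = 0.2f; // illustrative bounds for a dim vs bright room
    public float maxIntensity = 1.5f;

    // Call with the latest camera frame (acquisition is SDK-specific).
    public void UpdateFromFrame(Texture2D frame)
    {
        Color32[] pixels = frame.GetPixels32();
        float sum = 0f;
        int count = 0;
        for (int i = 0; i < pixels.Length; i += sampleStride)
        {
            Color32 p = pixels[i];
            // Rec. 601 luma approximation of perceived brightness
            sum += (0.299f * p.r + 0.587f * p.g + 0.114f * p.b) / 255f;
            count++;
        }
        float avgLuma = count > 0 ? sum / count : 0.5f;
        virtualSun.intensity = Mathf.Lerp(minIntensity, maxIntensity, avgLuma);
    }
}
```

Sampling a stride of pixels rather than the full frame keeps the per-frame cost negligible on Quest hardware.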

Challenges we ran into

  • Gesture False Positives: Distinguishing between a "Grab," "Pinch," and casual hand movements proved difficult, initially causing accidental inputs. We solved this by architecting distinct Operation Modes. By isolating gesture detection logic to specific contexts (e.g., Drawing Mode vs. Adjusting Mode), we successfully eliminated conflict and stabilized control.
  • Depth & Occlusion: We experimented with the Depth API for occlusion but hit edge-blurring artifacts, leading us to prioritize performance and clarity over perfect occlusion for this version.
  • Precision: Preventing dominoes from "sinking" into the floor required fine-tuning the collision detection against the Scene model.
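
The mode-gating fix described above can be sketched like this. It is a simplified illustration under assumptions: the mode names and the `OnPinch`/`OnGrab` entry points are hypothetical stand-ins for the Interaction SDK gesture events they would be wired to in the project.

```csharp
using UnityEngine;

// Sketch: gesture events are only acted on in the mode that owns them,
// so a stray pinch while adjusting a slope can't spawn dominoes.
public enum OperationMode { Idle, Drawing, Adjusting }

public class GestureRouter : MonoBehaviour
{
    public OperationMode mode = OperationMode.Idle;

    // Wired to the SDK's pinch event (name is illustrative).
    public void OnPinch(Vector3 fingertipPos)
    {
        // Pinch only spawns dominoes while explicitly in Drawing mode.
        if (mode != OperationMode.Drawing) return;
        SpawnDominoAt(fingertipPos);
    }

    // Wired to the SDK's grab event (name is illustrative).
    public void OnGrab(Transform grabbed)
    {
        // Grab only manipulates gimmicks (slopes, stairs) in Adjusting mode.
        if (mode != OperationMode.Adjusting) return;
        // ... attach grabbed object to the hand ...
    }

    void SpawnDominoAt(Vector3 pos)
    {
        // Snap the tile to the detected floor height before enabling physics,
        // so it rests on the real surface instead of sinking into it.
    }
}
```

Routing every gesture through one component with an explicit mode check is what eliminated the accidental-input conflicts for us.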

Accomplishments that we're proud of

We achieved a tactile "feel." The transition from "Pinched" (drawing) to "Released" (physics active) feels magical. Seeing virtual shadows cast on the real floor significantly boosted the sense of presence.

What we learned

We learned that Direct Touch is far superior to Ray-casting for this type of toy-like interaction. Users want to feel like they are touching the objects, not controlling them remotely.

What's next

Our true vision is connection. We initially aimed to include real-time multiplayer in this sprint but prioritized perfecting the core physics interaction first. Our immediate roadmap involves implementing Shared Spatial Anchors. This will allow users to share their living room space with friends—either locally or remotely—and build massive, collaborative tracks together. We want to transform solitary creation into a shared kinetic performance, turning every floor into a persistent canvas of art that remembers where you left your last tile.
