Inspiration

I’ve watched people burn hours trying to turn a design idea into a 3D model, only to end up with something that still doesn’t match what they had in mind. And it’s not just professional designers. From kids to grandparents, everyone has the same question: “What if it looked like this instead?” MakeItReal is about making that question tangible, instantly. No outsourcing, no prep, no waiting days for a model. In education, it means you can generate supporting 3D visuals on the spot to strengthen explanations. For creators and hobbyists, it means ideas become objects you can inspect and iterate immediately. For kids, it means imagined toys can appear in front of them and become something they can actually play with.

What it does

Using MX Ink, the user sketches a product or object concept in Mixed Reality with crisp, bright 3D strokes anchored to the real world, then adds sticky-note-style constraints (dimensions, material, “cut here,” “make this bigger,” etc.). With a single button press, the app captures a snapshot of the concept, interprets the design intent, and generates a 3D result in seconds. The generated object appears where it was designed, and the user can grab, rotate, scale, and inspect it freely, like handling a real prototype, only instant.

How we built it

MakeItReal is planned as an MR rapid-prototyping workflow on Meta Quest that treats MX Ink as a design-intent capture tool:

  • World-anchored 3D sketch strokes + 3D “sticky note” cards for constraints
  • One-tap capture to send the snapshot + intent metadata into the generation pipeline
  • Prompting and guidance designed to keep the generation focused on the target object, rather than reconstructing the entire scene/background
  • Fast image-to-3D generation, then placement back into the same anchored transform (stable scale + position)
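The capture step above can be sketched in Python. This is a minimal, illustrative model of the one-tap payload, not our actual implementation; all names (`Constraint`, `CaptureSnapshot`, `build_generation_request`) are hypothetical:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Constraint:
    """A 3D sticky-note annotation, e.g. a dimension or instruction."""
    text: str                              # "height: 20 cm", "make this bigger"
    position: Tuple[float, float, float]   # world-space anchor of the note


@dataclass
class CaptureSnapshot:
    """Everything bundled on the one-tap capture button press."""
    image_png: bytes                 # camera snapshot framing the sketch
    constraints: List[Constraint]    # sticky-note constraints as structured text
    anchor_transform: Dict[str, list]  # position/rotation/scale to restore later


def build_generation_request(snapshot: CaptureSnapshot) -> dict:
    """Fold snapshot + intent metadata into one image-to-3D request,
    steering generation toward the sketched object rather than the room."""
    guidance = (
        "Generate only the single sketched object. "
        "Ignore the surrounding room, furniture, and background."
    )
    return {
        "image": snapshot.image_png,
        "prompt": guidance + " Constraints: "
                  + "; ".join(c.text for c in snapshot.constraints),
        "output_format": "glb",   # lightweight asset for Quest playback
    }
```

Keeping the anchor transform in the snapshot is what lets the generated mesh be placed back exactly where the sketch was drawn.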

Challenges we ran into

  • The biggest challenge is ensuring the pipeline generates the intended object instead of “helpfully” reinventing the environment. This requires carefully designed prompting/guidance that consistently steers generation toward the product.
  • In XR, the hard part isn’t only generation time; it’s making the output appear with stable placement and believable scale in the user’s space.
  • The 3D output must remain lightweight enough to stay smooth on Quest (asset size, complexity, textures).

Accomplishments that we're proud of

  • Designing a stylus-first workflow where MX Ink isn’t a menu pointer but a natural sketch-and-constraint language: draw, annotate, capture.
  • Turning “what if it looked like this?” into a repeatable, demo-friendly MR flow that anyone can understand in seconds.
  • Building a concept that clearly applies to multiple audiences: product design, education, creative exploration, and kids’ imagination.

What we learned

  • In XR, productivity comes from direct manipulation, not complicated UI.
  • Consistent outputs depend less on “better prompts” and more on a clear intent format (sketch + constraints + structured guidance).
  • When people can materialize an idea immediately, they iterate more, and they iterate with confidence.
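To make the "clear intent format" lesson concrete, here is one possible shape such a format could take, shown as JSON built in Python. The field names and values are illustrative assumptions, not our production schema:

```python
import json

# Separating the sketch reference, hard constraints, and guidance
# (rather than packing everything into one prose prompt) is what
# makes outputs repeatable across generations.
intent = {
    "sketch": "snapshot_001.png",                         # captured MR snapshot
    "constraints": [
        {"type": "dimension", "axis": "height", "value_cm": 20},
        {"type": "material", "value": "matte plastic"},
    ],
    "guidance": "single object, ignore background",       # structured steering
}

payload = json.dumps(intent, indent=2)
```

A "Concept" vs "Product" mode (see below) could then be a single switch on how strictly the constraints list is enforced.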

What's next for MakeItReal

  • Modes for control: a “Concept” mode (more interpretive) vs a “Product” mode (more faithful to constraints and geometry intent).
  • Collections: save, organize, and showcase generated creations as a personal gallery/portfolio.
  • Export: download/share outputs for use in 3D modeling tools (e.g., GLB/OBJ) and downstream workflows like prototyping and 3D printing.
  • Sharing: generate a shareable link/viewer so others can preview the result without needing the headset.

The video was generated with AI.
