Noodle

Tagline

Transform your everyday surroundings into an infinite spatial interface for unified creative flow and collaboration. Iterate on 2D sketches, speak real-time audio prompts, and generate 3D models, all without ever touching a keyboard.


Inspiration

Every time a designer switches apps, it can take 23 minutes to fully regain focus. Modern creativity is broken.

To take an idea from a paper sketch to a 3D concept, a creator juggles an average of 10 different applications: scanning, uploading, prompting, downloading, and managing files. This constant context switching creates a "Toggle Tax" that kills creative flow.

We asked ourselves:

  • What if the tool didn’t force you to leave your environment?
  • What if you could pull a drawing off your physical desk, connect it to an AI brain in mid-air, and see it become a 3D reality instantly?

We built Noodle to eliminate the friction between Idea and Reality. It is a spatial, node-based workflow that lets creators dream with their eyes open.


What It Does

Noodle is a Mixed Reality creative workbench built for Snap Spectacles, turning your physical surroundings into an infinite canvas for Generative AI.

Core Capabilities

  • Reality Capture: Using the Spectacles’ cameras, users can grab a physical sketch from their real-world desk, instantly creating an Input Node in AR.

  • Spatial Logic: Users drag and drop nodes to build logic chains in mid-air, connecting a Voice Node ("Make it cyberpunk") to a Sketch Node with intuitive hand gestures.

  • Generative Flow: The system fuses visual input and voice prompts to generate high-fidelity 2D concepts in real time.

  • 2D to 3D: With a single wire connection, a 2D concept is transformed into a fully spatial 3D model that sits on your physical desk, ready for inspection.

  • Multi-Modal Ideation: Supports text, image, and 3D generation nodes, all interacting within a live, spatial graph.
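Under the hood, a live spatial graph like this reduces to nodes plus wires, evaluated in dependency order so upstream inputs (sketches, voice prompts) resolve before the generators that consume them. The following is an illustrative TypeScript sketch with names of our own choosing (GraphNode, Wire, topoOrder), not the actual Noodle source:

```typescript
// Hypothetical data model for a spatial node graph (illustrative only).
type NodeKind = "sketch" | "voice" | "image" | "model3d" | "modifier";

interface GraphNode {
  id: string;
  kind: NodeKind;
  payload?: string; // e.g. a captured sketch or a voice transcript
}

interface Wire {
  from: string; // source node id
  to: string;   // destination node id
}

// Return node ids so that every wire's source precedes its destination
// (Kahn's algorithm): inputs first, generators after their dependencies.
function topoOrder(nodes: GraphNode[], wires: Wire[]): string[] {
  const indegree = new Map<string, number>();
  for (const n of nodes) indegree.set(n.id, 0);
  for (const w of wires) indegree.set(w.to, (indegree.get(w.to) ?? 0) + 1);

  const queue = nodes.filter(n => indegree.get(n.id) === 0).map(n => n.id);
  const order: string[] = [];
  while (queue.length > 0) {
    const id = queue.shift()!;
    order.push(id);
    for (const w of wires) {
      if (w.from !== id) continue;
      const d = (indegree.get(w.to) ?? 0) - 1;
      indegree.set(w.to, d);
      if (d === 0) queue.push(w.to);
    }
  }
  return order;
}
```

In this sketch, a Sketch Node and a Voice Node both feeding an Image Node would always be evaluated before it, which is what lets a wire connection trigger generation with fresh inputs.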


How We Built It

We built Noodle using Lens Studio 5 and the Spectacles Interaction Kit (SIK) to create a truly native, hand-tracked experience.

The Interface

  • Leveraged SIK for pinch, grab, and scroll physics
  • Built a custom Wire System using TypeScript to dynamically render connections between node collider ports
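The geometry behind a wire like this can be sketched in a few lines. This is a hedged illustration with invented helpers (Vec3, sampleWire), not our actual TypeScript source: sample a quadratic Bezier between two port positions, with the control point dropped slightly so the wire sags like a physical cable.

```typescript
// Minimal 3D vector type (stand-in for an engine vector class).
interface Vec3 { x: number; y: number; z: number; }

function lerp(a: Vec3, b: Vec3, t: number): Vec3 {
  return {
    x: a.x + (b.x - a.x) * t,
    y: a.y + (b.y - a.y) * t,
    z: a.z + (b.z - a.z) * t,
  };
}

// Sample points along a wire between two node ports. The Bezier control
// point sits at the midpoint, lowered by `sag` to fake gravity.
function sampleWire(from: Vec3, to: Vec3, segments: number, sag = 0.05): Vec3[] {
  const mid: Vec3 = {
    x: (from.x + to.x) / 2,
    y: (from.y + to.y) / 2 - sag,
    z: (from.z + to.z) / 2,
  };
  const pts: Vec3[] = [];
  for (let i = 0; i <= segments; i++) {
    const t = i / segments;
    // Quadratic Bezier: lerp toward the control point from both ends,
    // then lerp between those two intermediate points.
    pts.push(lerp(lerp(from, mid, t), lerp(mid, to, t), t));
  }
  return pts;
}
```

Each frame, points like these can be fed to a line renderer between the two collider ports, so the wire tracks the nodes as the user drags them.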

The Experience

  • Used Lens Studio’s UI system to create a radial Hand Menu
  • Let users spawn new nodes (Image, Text, 3D, Modifier) anywhere in their space
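The layout behind a radial hand menu reduces to simple circle math. As a sketch with an invented helper name (radialOffsets), not the Lens Studio UI code: distribute the menu options evenly around the palm, starting at 12 o'clock.

```typescript
// Evenly space `count` menu items on a circle of the given radius,
// with item 0 at the top (12 o'clock), proceeding counterclockwise.
function radialOffsets(count: number, radius: number): { x: number; y: number }[] {
  const offsets: { x: number; y: number }[] = [];
  for (let i = 0; i < count; i++) {
    const angle = Math.PI / 2 + (2 * Math.PI * i) / count;
    offsets.push({ x: radius * Math.cos(angle), y: radius * Math.sin(angle) });
  }
  return offsets;
}
```

In practice these offsets would be applied in the palm's local space, so the menu follows the tracked hand as it moves.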

Challenges We Ran Into

  • Latency vs. Flow: Generative AI calls can take 10–30 seconds. To preserve flow state, we designed visual feedback loops that made waiting feel like part of the creative process, not lag.

  • Input Constraints: Typing on AR glasses is painful. We prioritized a Voice-First and Gesture-First interaction model so users never need a keyboard.


Accomplishments We’re Proud Of

  • A fully functional end-to-end pipeline: Capture → Voice Prompt → Connect Nodes → Generate Image → Generate 3D Model

  • The feel: plugging a digital wire into a node is deeply tactile and satisfying

  • A custom-built, robust Hand Menu for spawning and managing nodes


What’s Next for Noodle

  • Multi-User Collaboration: Implement Snap’s Connected Lenses so multiple designers can work on the same node graph simultaneously.

  • Export Pipeline: Enable users to AirDrop final 3D .obj files directly to their laptops for refinement in Blender or Maya.

  • More Nodes: Introduce Physics Nodes (gravity, wind) to simulate environments, not just static objects.

  • Cross-Platform: Evolve Noodle beyond Spectacles into a cross-platform creative system, letting workflows move seamlessly between AR, desktop, and tablet so ideas can start anywhere and continue everywhere.


Built With

  • Snap Spectacles (Gen 5)
  • Lens Studio
  • TypeScript
  • Generative AI APIs
  • Figma (UI Design)

Team

  • Stacey Cho
  • Neha Sajja
  • Ash Shah
  • Kavin Kumar Balamurugan
