Inspiration

Most AI tools feel like playing roulette with your ideas. You write a prompt, cross your fingers, and hope the output matches your vision. We wanted something better—something tactile, visual, and grounded. So we built a way to sketch in 3D, direct AI with structure, and bring your scenes to life in motion.

What it does

AI3D Primitives turns Adobe Express into a spatial AI sketchpad. Start with 3D primitives—cubes, spheres, cones—and shape your scene as if you're blocking out a stage. Describe your concept in natural language, and AI helps fill in the details. When you’re ready, scan a QR code to step inside your composition in Augmented Reality and tweak it in real space.

Then comes AI3D Render Mode: select what you want to animate, frame it up, and generate a video—frame-by-frame, with precision and intention. Less "generate and hope." More "direct and create."
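Conceptually, a render request in this mode bundles the selected objects with per-frame intent. Here is a minimal sketch in Python; the field names (`object_ids`, `keyframes`, `prompt`) are our illustration, not the actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    frame: int   # frame index in the output video
    prompt: str  # natural-language direction for this moment

@dataclass
class RenderRequest:
    object_ids: list[str]  # which scene primitives to animate
    fps: int = 24
    keyframes: list[Keyframe] = field(default_factory=list)

    def duration_seconds(self) -> float:
        """Video length implied by the last keyframe."""
        last = max((k.frame for k in self.keyframes), default=0)
        return last / self.fps

# Direct, don't hope: each keyframe states what should happen and when.
req = RenderRequest(
    object_ids=["cube_1"],
    keyframes=[Keyframe(0, "cube at rest"), Keyframe(48, "cube rolls left")],
)
assert req.duration_seconds() == 2.0  # 48 frames at 24 fps
```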

How we built it

Adobe Express Add-on SDK to embed our UI natively

Unity WebGL + iOS/Android AR Foundation for intuitive 3D layout and integration with reality

AI3D Render, which adds keyframe control over AI video (shown at the CVPR 2025 Demo session, plus a brief 30-second mention during the Orals!)

AI3D Co-Create with AI3D Render will be at SIGGRAPH!

Scene metadata packed into a QR code for quick AR access

A custom backend pipeline (built on AI3D RenderFlow) that interprets scene structure for AI video generation

A communication layer between the web frontend and Unity, built with Jint and C#
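To illustrate the QR handoff, scene metadata can be serialized, compressed, and encoded into a compact string that fits a QR payload. This is a rough sketch using only Python's standard library; the scene schema (`primitives` with `pos`/`scale` fields) is our assumption, not the actual format:

```python
import base64
import json
import zlib

def pack_scene(scene: dict) -> str:
    """Serialize, compress, and base64-encode scene metadata for a QR payload."""
    raw = json.dumps(scene, separators=(",", ":")).encode("utf-8")
    return base64.urlsafe_b64encode(zlib.compress(raw, 9)).decode("ascii")

def unpack_scene(payload: str) -> dict:
    """Reverse of pack_scene: decode, decompress, and parse the scene."""
    return json.loads(zlib.decompress(base64.urlsafe_b64decode(payload)))

# Hypothetical scene: two primitives with position/scale transforms.
scene = {
    "primitives": [
        {"type": "cube", "pos": [0, 0.5, 0], "scale": [1, 1, 1]},
        {"type": "sphere", "pos": [1.2, 0.5, -0.3], "scale": [0.5, 0.5, 0.5]},
    ]
}
payload = pack_scene(scene)
assert unpack_scene(payload) == scene  # round-trips losslessly
```

Keeping the payload small matters: denser QR codes scan more slowly, so compact JSON plus zlib compression helps the handoff feel instant.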

Challenges

WebGL inside Adobe Express wasn’t happy at first—took some clever sandboxing

Getting the AI to respect our 3D layout instead of overriding it required training and plenty of failed attempts

QR handoff between web and mobile AR had to feel instant, not clunky

Composing for AI is different from composing for humans—it needed a new language of control

What we're proud of

A real pipeline from static 3D sketch to dynamic AI motion, all from inside a design tool

Seeing people walk around their AI-generated scenes in AR and actually recompose them spatially

That moment when a still image, structured just right, becomes a moving, living thing

Pulling this off with a lean team and no shortcuts

What we learned

Creators want control, not chaos

Structure matters—even simple geometry makes AI outcomes wildly more intentional

AR isn’t just for showing off; it’s for rethinking how we create

People want to direct AI, not just suggest to it

What’s next

Real-time collaborative scene editing

Physics-based primitive interactions

Keyframe-style motion planning

Export to After Effects with motion baked in

Native mobile app for scene authoring

Plugging into more generative backends (not just ours)
