Inspiration

We started as artists, not toolbuilders. The goal was simple: create a fictitious AI rapper called Boy Math—a character with a voice, a look, and a worldview that could live across tracks, visuals, and social platforms.

Boy Math quickly became real in the only way that matters online:

~1,000 TikTok followers

Higgsfield prizes

A slot in an AI art festival, including a bold MLK video moment

That traction wasn’t luck—it was repetition: generate, refine, ship.

The problem we ran into (fast)

The character development process created an explosion of assets:

lyrics, drafts, punch-ins, alt verses

generated images, moodboards, character stills

video generations, variations, upscales, final edits

screenshots, PDFs, prompts, prompts of prompts

“this one is version 7 but actually better than 9” chaos

What it does

The bottleneck isn’t generating content anymore. The bottleneck is organizing it into a production-ready structure you can trust.

We wanted a system that treats creative output like a real pipeline:

traceable

versionable

exportable

standards-valid

and still friendly to messy creator workflows

We built a studio app that takes messy multimodal input and turns it into a single, structured CreativeWork—fast.

Inputs we support

Paste text

Upload PDFs (including full screenplays)

Upload images and videos

Paste URLs

Paste/import OMC JSON as literal data (kept verbatim, never silently “interpreted”)

What the app generates

A project outline hierarchy

A graph view of the CreativeWork (nodes + edges)

A screenplay view (verbatim when the user provides a script)

A Concept layer that bridges assets to story entities

Hackathon-grade export of valid OMC 2.8 JSON

whole project export

single-node JSON copy/download from the inspector
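The graph view and the outline hierarchy above can share one underlying structure. A minimal sketch, with hypothetical node and edge shapes (the real OMC 2.8 schema is much richer; names like `kind` and `relation` are illustrative only):

```typescript
// Hypothetical in-memory shapes for the nodes + edges graph view.
type NodeKind = "CreativeWork" | "NarrativeScene" | "Character" | "Concept" | "Asset";

interface GraphNode {
  id: string;     // stable identifier, reused in the exported JSON
  kind: NodeKind;
  label: string;
}

interface GraphEdge {
  from: string;   // GraphNode.id
  to: string;
  relation: string; // e.g. "contains" -- illustrative, not an OMC field
}

// The outline can be derived from the same graph by walking edges
// down from the CreativeWork root, indenting one level per hop.
function outline(nodes: GraphNode[], edges: GraphEdge[], rootId: string, depth = 0): string[] {
  const node = nodes.find(n => n.id === rootId);
  if (!node) return [];
  const lines = ["  ".repeat(depth) + node.label];
  for (const edge of edges.filter(e => e.from === rootId)) {
    lines.push(...outline(nodes, edges, edge.to, depth + 1));
  }
  return lines;
}
```

Deriving the outline from the graph (rather than storing both) is one way to keep the two views from drifting apart.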

The core design principle: traceability

Every upload becomes an Asset node immediately, with provenance (path/hash/metadata). Nothing gets “lost.” Unassigned assets are allowed—but never invisible.
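A sketch of that ingestion rule, assuming a content hash for provenance. The field names here (`sourcePath`, `assigned`) are illustrative, not the actual OMC Asset schema:

```typescript
import { createHash } from "node:crypto";

// Every upload becomes an Asset node immediately, with provenance.
interface AssetNode {
  id: string;
  sourcePath: string; // where the file came from
  sha256: string;     // content hash, so the bytes are always traceable
  assigned: boolean;  // unassigned is allowed -- but never invisible
}

function ingestUpload(path: string, bytes: Uint8Array, nextId: () => string): AssetNode {
  return {
    id: nextId(),
    sourcePath: path,
    sha256: createHash("sha256").update(bytes).digest("hex"),
    assigned: false, // starts unassigned; the UI still lists it
  };
}
```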

How we built it

Challenges we ran into

1) Speed vs correctness

Users want instant feedback, but OMC validity requires discipline. We had to design an ingestion flow that:

feels immediate (optimistic UI)

streams incremental updates

and still ends with strict schema-valid output
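The shape of that flow, sketched minimally: apply edits optimistically so the UI feels immediate, but gate export behind strict validation. The `validateForExport` checks here are toy placeholders for real OMC 2.8 schema validation:

```typescript
// "Optimistic now, strict later": the document updates instantly,
// and only the export gate enforces schema discipline.
type Doc = { title?: string; entities: unknown[] };

function optimisticApply(doc: Doc, patch: Partial<Doc>): Doc {
  // Shown to the user immediately, before any validation runs.
  return { ...doc, ...patch };
}

function validateForExport(doc: Doc): string[] {
  // Stand-in for full schema validation; export proceeds only
  // when this returns no errors.
  const errors: string[] = [];
  if (!doc.title) errors.push("missing title");
  if (doc.entities.length === 0) errors.push("no entities");
  return errors;
}
```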

2) UI and JSON must agree

We kept hitting a classic failure mode:

the UI looked right

the export failed validation, or relationships didn’t match what the UI implied

So we enforced a canonical rule:

concepts and assets must be linked schema-natively

no non-schema relationship shortcuts

export filters must match UI filters exactly
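One way to enforce that last rule is to make it impossible to violate: a single shared predicate that both the UI list and the exporter import. Item shape is illustrative:

```typescript
interface Item { id: string; kind: string; deleted: boolean }

// One predicate, used by both paths -- the filters cannot drift apart.
const isExportable = (item: Item): boolean => !item.deleted;

function uiList(items: Item[]): Item[] {
  return items.filter(isExportable);
}

function exportJson(items: Item[]): string {
  return JSON.stringify(items.filter(isExportable));
}
```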

3) “Verbatim screenplay” handling

If the user uploads a screenplay, the app must respect it. That meant building a path where we annotate and structure around the text—without rewriting it silently.
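One sketch of that path: keep the uploaded text immutable and layer structure on top as character-offset annotations, so nothing is ever rewritten. The `tag` vocabulary is illustrative:

```typescript
// Verbatim screenplay handling: structure is annotation, not rewriting.
interface Annotation {
  start: number; // character offsets into the original text
  end: number;
  tag: string;   // e.g. "scene-heading" -- illustrative only
}

interface Screenplay {
  readonly text: string; // verbatim, never modified
  annotations: Annotation[];
}

function annotate(s: Screenplay, start: number, end: number, tag: string): Screenplay {
  // Returns a new object; the text is carried over untouched.
  return { text: s.text, annotations: [...s.annotations, { start, end, tag }] };
}
```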

4) Multimodal ambiguity

A video might be:

source footage to structure scenes from

or just mood/reference

An image might be:

character canon

wardrobe detail

prop closeup

or pure vibe

The solution wasn’t guessing. It was asking fast, minimal clarifying questions to capture creative intent.
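That flow can be sketched as a small lookup from media type to role options, producing one short question per ambiguous upload. The role lists mirror the examples above; the function name is hypothetical:

```typescript
// Ask fast, minimal clarifying questions instead of guessing intent.
type MediaType = "image" | "video";

const roleOptions: Record<MediaType, string[]> = {
  video: ["source footage", "mood/reference"],
  image: ["character canon", "wardrobe detail", "prop closeup", "pure vibe"],
};

function clarifyingQuestion(type: MediaType, filename: string): string {
  return `How should ${filename} be used? Options: ${roleOptions[type].join(", ")}`;
}
```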

Accomplishments that we're proud of

Turned “creative chaos” into a product insight. Boy Math generated more media—images, videos, prompts, drafts—than a normal workflow can reliably track. That pain became the reason OMC Studio exists.

Built a standards-first story graph. We’re not just organizing files—we’re structuring a single CreativeWork into a graph + screenplay + hierarchy and exporting valid OMC 2.8.

Shipped multimodal ingestion. Paste text, upload PDFs, images, videos, URLs, or OMC JSON—then watch structure appear.

Enforced schema-native relationships. We moved away from “shortcut fields” and aligned Concepts/Assets with OMC: Concepts via Context.ForEntity, assets via Asset.Context.

Made export tangible for creators. Whole-project export and single-node JSON copy/download from the inspector—so users can actually reuse and integrate structured outputs.

Focused on hackathon-grade reliability. We prioritized judge-proof behaviors: validity gates, removing illegal keys, preserving customData, and making UI/JSON agree.
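The “removing illegal keys, preserving customData” gate can be sketched as an allow-list sanitizer run before export. The allow-list below is a toy stand-in, not the real OMC 2.8 key set:

```typescript
// Export gate: drop keys the schema doesn't allow, but always
// carry customData through untouched.
const allowedKeys = new Set(["identifier", "name", "Context", "customData"]);

function sanitize(entity: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(entity)) {
    if (allowedKeys.has(key)) out[key] = value; // customData survives
  }
  return out;
}
```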

What we learned

Boy Math is the proof: generative tools can create real cultural artifacts quickly. But the next era isn’t just “make more.” It’s:

make more and ship cleanly

make more and stay organized

make more and preserve provenance

make more and build worlds that don’t collapse under their own outputs

That’s what this tool is for: turning the explosion of generative content into structured production reality.

What's next for OMC Studio

Make OMC Studio feel instant, work offline-first, and keep every asset traceable on disk—so creators can point the app at a folder (or drag-drop a pile) and get a structured CreativeWork without waiting on the network.
