Inspiration: The Hallucination Barrier
Sync addresses the fundamental entropy of generative AI. In software engineering, DevOps practices ensure that configuration changes result in predictable outputs. In AI, however, we still rely on "prompt engineering"—a fragile process where recreating an exact lighting setup across fifty distinct assets is nearly impossible.
We asked: What if we could build GitHub Actions, but for brand visuals? We wanted to replace "guessing magic words" with Configuration Engineering, where visuals are defined by deterministic code.
What it does
Sync is the first Visual CI/CD platform. It allows teams to define their Brand Guidelines as a JSON Specification.
The core concept treats generation as a function: $$f(Subject, Parameters) \rightarrow Asset$$
Where $Subject$ is the creative intent (e.g., "A sneaker") and $Parameters$ are the immutable brand laws (Lighting, Angle, Color Palette).
- Stateful Generation: Users define a `sync.json` file.
- Deterministic Diffs: Changing a single variable (like lighting) regenerates the entire asset library while preserving the subject and camera angle.
- Commercial Safety: By building on Bria FIBO, we ensure every pixel is indemnity-ready for enterprise use.
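The generation contract above can be sketched in TypeScript. This is a minimal sketch: the `sync.json` field names and the `buildStructuredPrompt` helper are illustrative assumptions, not Bria's actual schema.

```typescript
// Illustrative shape of a sync.json config; field names are our
// assumptions, not Bria FIBO's documented schema.
interface BrandParameters {
  lighting: string;       // e.g. "studio_lighting"
  cameraAngle: string;    // e.g. "low_angle"
  colorPalette: string[]; // immutable brand colors
}

interface SyncConfig {
  version: number;
  parameters: BrandParameters;
  subjects: string[]; // creative intents, e.g. "A sneaker"
}

// f(Subject, Parameters) -> one structured request per asset.
// Deterministic: the same inputs always produce the same body.
function buildStructuredPrompt(subject: string, params: BrandParameters) {
  return {
    subject,
    structured_prompt: {
      lighting: { conditions: params.lighting },
      photographic_characteristics: { camera_angle: params.cameraAngle },
    },
  };
}

const config: SyncConfig = {
  version: 1,
  parameters: {
    lighting: "studio_lighting",
    cameraAngle: "low_angle",
    colorPalette: ["#0A0A0A", "#FF3B30"],
  },
  subjects: ["A sneaker", "A wristwatch"],
};

// Editing one field in `parameters` regenerates every subject with the
// new style while the subjects themselves stay untouched.
const requests = config.subjects.map((s) =>
  buildStructuredPrompt(s, config.parameters)
);
```

Because the function is pure, a config diff maps one-to-one onto an asset-library diff.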
How we built it
We built Sync as a developer-first web application using Next.js 14 (App Router) and Tailwind CSS. The interface mimics an IDE (Integrated Development Environment), reinforcing the concept of "coding" visuals rather than requesting them.
The engine relies on the Bria FIBO model. Unlike standard models that parse loose text, FIBO accepts a structured_prompt, which we use to inject strict parameters:
"structured_prompt": {
"lighting": { "conditions": "studio_lighting" },
"photographic_characteristics": { "camera_angle": "low_angle" }
}
We implemented a server-side proxy to handle authentication and manage the synchronous API workflow, ensuring a responsive user experience.
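A minimal sketch of that proxy's request construction, assuming a placeholder endpoint URL and header name (these are not Bria's documented API). Keeping this on the server means the API key never reaches the browser.

```typescript
// Server-side proxy sketch. The endpoint URL and `api_token` header
// name are assumptions for illustration only.
const BRIA_ENDPOINT = "https://example.invalid/v1/generate"; // placeholder

interface ProxyRequest {
  url: string;
  method: string;
  headers: Record<string, string>;
  body: string;
}

// Build the outbound request; the caller (e.g. a Next.js route handler)
// would pass this to fetch() and stream the result back to the client.
function buildProxyRequest(payload: unknown, apiKey: string): ProxyRequest {
  return {
    url: BRIA_ENDPOINT,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      api_token: apiKey, // header name is an assumption
    },
    body: JSON.stringify(payload),
  };
}
```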
Challenges we faced
The primary challenge was the paradigm shift from "chatting" with AI to controlling it. We had to map abstract creative ideas (like "make it look cool") to strict FIBO parameters. Technically, moving from standard asynchronous generation to a real-time synchronous workflow required robust queue management in the frontend to handle batch operations without stalling the browser.
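One way to sketch that frontend batch queue (our own helper, not a library API): cap the number of in-flight generations so a full-grid regenerate never floods the synchronous endpoint or stalls the browser.

```typescript
// Run async tasks with at most `limit` in flight at once.
// Results are returned in the original task order.
async function runWithLimit<T>(
  tasks: (() => Promise<T>)[],
  limit: number
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  // Each worker repeatedly claims the next unclaimed task index.
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, tasks.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}
```

With a limit of, say, 3, a 50-asset grid regenerates as a steady stream of requests instead of 50 simultaneous calls.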
Accomplishments that we're proud of
We are most proud of the "Refactor" workflow. Successfully changing the global JSON configuration from "Studio" to "Neon" and watching the entire grid of images update—keeping the exact same camera angles while swapping the lighting environment—validated our thesis that version control can exist for art.
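The refactor step can be sketched as a pure patch over the shared config; field names here are illustrative, not our production schema.

```typescript
// A one-field patch to the shared visual config, plus the resulting
// "diff" of changed keys that drives regeneration.
interface VisualConfig {
  lighting: string;
  cameraAngle: string;
}

function refactor(
  config: VisualConfig,
  patch: Partial<VisualConfig>
): { next: VisualConfig; changed: (keyof VisualConfig)[] } {
  const next = { ...config, ...patch };
  const changed = (Object.keys(config) as (keyof VisualConfig)[]).filter(
    (k) => next[k] !== config[k]
  );
  return { next, changed };
}

const studio: VisualConfig = { lighting: "studio", cameraAngle: "low_angle" };
// Swap the lighting environment; the camera angle survives untouched.
const { next, changed } = refactor(studio, { lighting: "neon" });
```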
What we learned
We learned that determinism is the key to Enterprise AI. Large organizations prioritize consistency at scale over random creativity. By using Bria FIBO, we proved that separating "Subject" from "Style" is possible and necessary for professional workflows.
What's next for Sync
We are actively expanding Sync to support Video & Motion CI/CD:
- Motion Configs: Defining camera pans, zooms, and transition speeds in JSON.
- Git Integration: A GitHub Action to trigger a "Visual Build" upon code commits.
- Figma Plugin: Enabling designers to pull the latest generated assets directly into their workspace.
Sync is the first step toward true Infrastructure-as-Code for the creative world.
Built With
- bria
- nextjs
- tailwind