FIBO Scene Director
Inspiration
We were tired of Prompt Engineering.
In professional creative workflows—whether film, photography, or game design—directors don't guess magic words. They give specific instructions:
- "Use an 85mm lens"
- "Place the key light at 45 degrees"
- "Make the shadows harsh"
Current text-to-image models force us to play a guessing game, hoping the AI understands cinematic language the same way we do. When we saw Bria FIBO's ability to accept structured JSON input, we realized we could build something different: a tool that replaces probabilistic prompting with deterministic direction.
We wanted to build a cockpit for creativity where you don't ask for an image—you direct it.
What it does
FIBO Scene Director is a visual interface for the Bria FIBO model that gives users granular control over every aspect of a scene without writing a single prompt.
- JSON-Native Control: Users manipulate sliders and presets for Camera (focal length, depth of field), Lighting (direction, softness), and Composition. The tool translates these into the strict JSON schema Bria requires.
- Storyboard Mode: A killer feature for consistency. Users can lock a subject and lighting setup, then generate Wide, Medium, and Close-up variations automatically. Maintaining that consistency is nearly impossible with standard prompting.
- Director's Co-pilot: An LLM-powered assistant that understands context. You can say "Make it moodier" or "Change the lens to macro," and it updates only the relevant JSON fields while keeping the rest of your scene intact.
- Compare View: A history tool that lets users side-by-side compare how a single parameter tweak (like changing f/1.8 to f/11) affects the final render.
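To make the JSON-native workflow concrete, here is a sketch of how slider state becomes a structured scene, and how Storyboard Mode derives its three shots from one locked scene. Field names and values are illustrative, not Bria FIBO's actual schema:

```javascript
// Hypothetical scene object -- field names are illustrative,
// not Bria FIBO's actual request schema.
const scene = {
  camera: { focal_length_mm: 85, aperture: "f/1.8", shot_type: "medium" },
  lighting: { direction: "45-degree key", softness: "soft", mood: "golden hour" },
  composition: { rule_of_thirds: true, subject: "portrait of a violinist" },
  seed: 42,
};

// Storyboard Mode: lock subject, lighting, and seed; vary only the shot.
const SHOTS = [
  { shot_type: "wide", focal_length_mm: 24 },
  { shot_type: "medium", focal_length_mm: 50 },
  { shot_type: "close-up", focal_length_mm: 85 },
];

const storyboard = SHOTS.map((shot) => ({
  ...scene,
  camera: { ...scene.camera, ...shot },
}));

console.log(storyboard.map((s) => s.camera.shot_type)); // ["wide", "medium", "close-up"]
```

Because the subject, lighting, and seed are identical across all three payloads, only the camera block varies between renders.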
How we built it
We built a full-stack application designed for production workflows:
- Frontend: React + Vite for a responsive, professional UI. Tailwind CSS for distinct panels (Camera, Lighting, Composition).
- Backend: Node.js & Express to handle orchestration.
- Validation: Zod enforces a strict schema matching Bria's API documentation, ensuring every request is valid.
- AI Integration:
- Bria FIBO API: The core engine for image generation.
- LLM (OpenAI/Compatible): A "few-shot" system prompt teaches the LLM to speak FIBO JSON, acting as a translator between natural language and structured data.
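In the app the schema guard is a Zod schema; the sketch below shows the same idea dependency-free, rejecting a request before it reaches the API. The field names and ranges are illustrative assumptions, not Bria's documented schema:

```javascript
// Minimal stand-in for the Zod validation layer: refuse to send any
// request whose required fields are missing or out of range.
// Field names and ranges are illustrative, not Bria's actual schema.
function validateScene(scene) {
  const errors = [];
  if (typeof scene.seed !== "number") errors.push("seed must be a number");
  const focal = scene.camera && scene.camera.focal_length_mm;
  if (typeof focal !== "number" || focal < 8 || focal > 800) {
    errors.push("camera.focal_length_mm must be between 8 and 800");
  }
  return { ok: errors.length === 0, errors };
}

console.log(validateScene({ seed: 42, camera: { focal_length_mm: 85 } }).ok); // true
console.log(validateScene({ camera: { focal_length_mm: 8000 } }).errors.length); // 2
```

Failing fast here means a bad slider state or a malformed LLM edit never costs a wasted generation call.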
Challenges we ran into
- Descriptive vs. Technical Gap: We initially tried raw 3D parameters (XYZ coordinates), but the Bria v2 API responded better to structured descriptive tags (e.g., photographic_characteristics). We refactored the schema and UI to align with how the model "thinks."
- Context-Aware Updates: Getting the LLM to patch JSON objects without rewriting them was tricky. We implemented logic for selective updates.
- Determinism: Ensuring "Seed 42" produced the same image every time required careful state management in React.
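The selective-update logic boils down to a deep merge: the LLM returns only the fields it wants to change, and that patch is folded into the existing scene. A sketch, where the patch format is our assumption:

```javascript
// Fold a partial patch (e.g. from the LLM) into the current scene
// without disturbing unrelated fields. Nested objects merge
// recursively; primitives and arrays are replaced outright.
function applyPatch(scene, patch) {
  const out = { ...scene };
  for (const [key, value] of Object.entries(patch)) {
    if (value && typeof value === "object" && !Array.isArray(value)) {
      out[key] = applyPatch(scene[key] || {}, value);
    } else {
      out[key] = value;
    }
  }
  return out;
}

// "Change the lens to macro" should touch only the camera block.
const scene = {
  camera: { focal_length_mm: 85, lens_type: "portrait" },
  lighting: { mood: "golden hour" },
};
const updated = applyPatch(scene, { camera: { lens_type: "macro" } });
console.log(updated.camera.lens_type); // "macro"
console.log(updated.lighting.mood);   // "golden hour" (untouched)
```

Keeping the merge pure (it returns a new object rather than mutating) also plays nicely with React state updates and the seed-determinism requirement.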
Accomplishments that we're proud of
- Storyboard Mode: Generating three consistent camera angles of the same scene was a breakthrough moment.
- Visual Presets: Moving away from text inputs—clicking Golden Hour or 85mm Portrait instantly updates JSON.
- UI Design: It looks like a professional dashboard for a Director of Photography, not a chatbot.
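Under the hood, a visual preset is just a named partial scene: clicking Golden Hour merges its fields over the current JSON, the same way the co-pilot applies its edits. The preset values below are illustrative, not the app's exact numbers:

```javascript
// Each preset is a partial scene; clicking a chip merges the relevant
// panel's fields over the current state. Values are illustrative.
const PRESETS = {
  "Golden Hour": { lighting: { direction: "low side", softness: "soft", color_temp_k: 3200 } },
  "85mm Portrait": { camera: { focal_length_mm: 85, aperture: "f/1.8" } },
};

function applyPreset(scene, name) {
  const preset = PRESETS[name];
  const out = { ...scene };
  for (const [panel, fields] of Object.entries(preset)) {
    out[panel] = { ...scene[panel], ...fields };
  }
  return out;
}

const base = { camera: { focal_length_mm: 35 }, lighting: {} };
console.log(applyPreset(base, "85mm Portrait").camera.focal_length_mm); // 85
```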
What we learned
- Structure > Randomness: Constraining AI with structured input (JSON) skyrockets quality and consistency.
- The Role of the Human: AI should accelerate creativity, not replace it. Giving users control over technical parameters empowers artistry.
What's next for FIBO Scene Director
- Workflow Integration: Exporting scenes directly to ComfyUI nodes for batch processing.
- Advanced Control: Integrating ControlNet (depth/canny) so users can sketch compositions alongside JSON parameters.
- Video Generation: Using Storyboard Mode logic to generate consistent keyframes for AI video storytelling.
Built With
- briafiboapi
- express.js
- javascript
- node.js
- react
- tailwindcss
- vite