Inspiration

We were inspired by the sheer volume of legally non-compliant, off-brand images generated daily by consumer AI tools. For enterprises, trust and fidelity are non-negotiable: generic generative AI cannot guarantee brand consistency or legal indemnity.

Our idea was to leverage FIBO's JSON-native control, its most powerful feature, to create a verifiable chain of custody. We sought to solve the most expensive problem in creative production: Brand Compliance Quality Assurance. Our goal was to build a system where a failure to comply is automatically corrected or logged, never shipped.

What it does

ProvenanceAgent is a deterministic compliance engine that transforms vague creative briefs into guaranteed, brand-safe visual assets.

Agentic Correction Loop: It autonomously audits the initial FIBO JSON prompt against a strict, defined Brand Guide JSON (e.g., specific hex codes, lighting types, camera angles).

Autonomous Fix: If the draft is non-compliant, the agent (an LLM) is iteratively re-prompted with the audit log to correct the JSON parameters until full compliance is achieved.

Verifiable Output: The image is only generated from the final, compliant JSON. This JSON recipe is immediately hashed, and the hash is logged to an Immutable Ledger (proof-of-concept), creating a Verifiable JSON-Chain Provenance.
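The hashing step needs no external dependencies. Below is a minimal sketch, assuming a hypothetical `log_provenance` helper, a flat JSON recipe, and a simple in-memory list standing in for the proof-of-concept ledger:

```python
import hashlib
import json
import uuid

def log_provenance(recipe: dict, ledger: list) -> str:
    """Hash a compliant JSON recipe and append the record to the ledger."""
    # Canonicalize the JSON so the same recipe always produces the same hash.
    canonical = json.dumps(recipe, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    ledger.append({"id": str(uuid.uuid4()), "sha256": digest})
    return digest

ledger = []
recipe = {"lighting": "Studio Softbox", "primary_hex": "#0A1F44"}
print(log_provenance(recipe, ledger))  # 64-character SHA-256 hex digest
```

Canonical serialization (sorted keys, fixed separators) matters here: without it, two semantically identical recipes could hash differently and break the chain of custody.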

The project effectively serves as an automated QA layer for enterprise visual AI, eliminating the possibility of shipping off-brand or non-compliant content.

How we built it

We focused on building a robust Python core around the Agentic Loop concept.

The Audit Core: We built the check_compliance function, which is a fast, deterministic checker that compares the draft JSON against our Brand Guide JSON (the source of truth).
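A minimal sketch of such a checker, assuming a flat Brand Guide schema where each key maps to either a single required value or a list of allowed values (the real schema may be richer):

```python
def check_compliance(draft: dict, brand_guide: dict) -> list:
    """Compare a draft JSON against the Brand Guide; empty list means compliant."""
    violations = []
    for key, allowed in brand_guide.items():
        value = draft.get(key)
        if isinstance(allowed, list):
            if value not in allowed:
                violations.append({"param": key, "got": value, "allowed": allowed})
        elif value != allowed:
            violations.append({"param": key, "got": value, "allowed": [allowed]})
    return violations

brand_guide = {"lighting": ["Studio Softbox"], "primary_hex": "#0A1F44"}
draft = {"lighting": "Harsh Top Light", "primary_hex": "#0A1F44"}
print(check_compliance(draft, brand_guide))
```

Because the checker is a pure function over two dictionaries, it is fast and fully deterministic, which is exactly what the audit log depends on.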

The Agentic Logic: We simulated the LLM's correction behavior with the MOCK_LLM_CORRECTOR to prove that the agent can successfully resolve specific violations (e.g., switching lighting from "Harsh Top Light" to "Studio Softbox").
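A mock corrector can be as simple as substituting the first permitted value for each reported violation. The violation schema below (`param`/`allowed` keys) is an assumption for illustration, not the project's actual interface:

```python
def mock_llm_corrector(draft: dict, violations: list) -> dict:
    """Stand-in for the LLM: deterministically resolve each reported violation."""
    fixed = dict(draft)
    for v in violations:
        # Pick the first permitted value for the offending parameter.
        fixed[v["param"]] = v["allowed"][0]
    return fixed

violations = [{"param": "lighting", "got": "Harsh Top Light",
               "allowed": ["Studio Softbox"]}]
print(mock_llm_corrector({"lighting": "Harsh Top Light"}, violations))
```

Keeping the mock's input and output contract identical to the planned LLM call means the real model can be swapped in later without touching the loop.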

Hard Stop Logic: We implemented Degradation Detection and a Max Attempt Stop inside the correction while loop to guarantee resilience and prevent cost overruns from runaway retry loops.
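The loop structure can be sketched as follows; the checker and corrector here are toy stand-ins, and the attempt limit is an illustrative default:

```python
def run_agentic_loop(draft, brand_guide, check, correct, max_attempts=5):
    """Audit and correct until compliant, with degradation detection and a hard stop."""
    violations = check(draft, brand_guide)
    attempts = 0
    while violations:
        if attempts >= max_attempts:
            raise RuntimeError("Hard stop: max correction attempts reached")
        candidate = correct(draft, violations)
        new_violations = check(candidate, brand_guide)
        # Degradation detection: fail fast if a "fix" introduced more violations.
        if len(new_violations) > len(violations):
            raise RuntimeError("Hard stop: correction degraded compliance")
        draft, violations = candidate, new_violations
        attempts += 1
    return draft

# Toy checker/corrector for demonstration (the real ones are richer):
brand_guide = {"lighting": "Studio Softbox"}
check = lambda d, g: [k for k in g if d.get(k) != g[k]]
correct = lambda d, viols: {**d, **{k: brand_guide[k] for k in viols}}
print(run_agentic_loop({"lighting": "Harsh Top Light"}, brand_guide, check, correct))
```

Raising instead of returning a partial result is the point: the agent deterministically fails rather than shipping a non-compliant recipe.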

The Pivot: Due to the instability of the local FIBO repository and various API endpoints, we quickly pivoted to an API-centric architecture and used a robust mock service to prove the complex Agentic logic works, reserving the API integration for post-hackathon scaling.

Challenges we ran into

The primary challenges were environmental instability and the risk of scope creep.

FIBO Environment Instability: The sheer complexity of setting up the local FIBO repository led to repeated ModuleNotFoundError and CUDA Assertion failures. We ruthlessly killed the local approach and pivoted to a mock/API-centric architecture to save the timeline.

LLM Semantic Drift: In early testing, the LLM-correction mechanism sometimes introduced new violations while fixing old ones. We solved this by implementing the Degradation Detection and Hard Stop Constraint, making the agent deterministically fail rather than generate bad output.

API Volatility: The public API endpoints for the FIBO model proved unreliable or incorrectly documented (Route Not Found errors), reinforcing our choice to build the verifiable system independently of a single provider.

Accomplishments that we're proud of

We are proud of transforming a chaotic creative process into a predictable, verifiable pipeline.

Functional Agentic Loop: We successfully built and demonstrated a fully functional audit-and-self-correction loop that resolves multiple brand violations and guarantees compliance.

Verifiable Provenance (The Win): We implemented the logic to hash the final, compliant JSON and log it with a unique ID, providing the unforgeable proof required by legal and brand compliance teams.

Enterprise Resilience: The Hard Stop Constraint is a feature designed for enterprise use, proving that our agent is cost-aware and failure-tolerant.

What we learned

We learned that building trusted enterprise AI requires a shift in focus: trust is built on deterministic auditing, not on the generative power of the LLM. The LLM is excellent at translating intent into JSON, but a simple, rigorous Python function is required to enforce the rules. The code that checks compliance is more valuable than the code that generates the image.

What's next for ProvenanceAgent: Agentic Brand-Safe Image Factory

The plan is to move from proof of concept to a scalable, production-ready tool.

Ledger Integration: Replace the mock hash log with a persistent, immutable ledger (e.g., AWS QLDB or a controlled Hyperledger instance) to provide real, cryptographically secure proof.

LLM Integration: Integrate the correction loop with a reliable LLM API (Gemini or GPT) to enable actual JSON self-correction, graduating from the mock.

UX Focus: Build the user interface for Brand Managers, allowing them to easily upload and manage the Brand Guide JSON and view the Compliance Certificate PDF.
