How we built it
We built Cerberus on a FastAPI backend to keep concerns cleanly separated. Vertex AI handles log diagnostics, while our custom Python modules act as strict security layers: PII scrubbing, zero-trust filtering, and RSA-PSS cryptographic signing form the guardrails. Auth0 provides RBAC, token vaulting, and step-up MFA triggers.
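To make the first guardrail concrete, here is a minimal sketch of what a PII-scrubbing layer can look like before a log line ever reaches the LLM. The pattern set and the `scrub` function name are illustrative assumptions, not our exact implementation, which covers more PII categories.

```python
import re

# Illustrative PII patterns (emails and IPv4 addresses only);
# a production scrubber would cover many more categories.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def scrub(log_line: str) -> str:
    """Replace PII matches with typed placeholders so the raw
    values never leave the backend."""
    for label, pattern in PII_PATTERNS.items():
        log_line = pattern.sub(f"[{label}]", log_line)
    return log_line
```

Because the placeholders keep the PII *type* (`[EMAIL]`, `[IPV4]`), the model still has enough context to diagnose the log without ever seeing the real values.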
Challenges we ran into
Taming AI hallucinations was our biggest hurdle. We designed deterministic filters to intercept destructive commands (like rm -rf) before execution. Balancing strict schema enforcement with a seamless Auth0 MFA workflow, without ever leaking GitHub tokens into local storage, proved incredibly complex.
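A deterministic deny-list filter of this kind can be sketched in a few lines. The specific patterns below are an illustrative subset we chose for this example, not the full curated list:

```python
import re

# Deny patterns for obviously destructive shell commands.
# Illustrative subset; a real filter maintains a larger curated list.
DENY_PATTERNS = [
    re.compile(r"\brm\s+-[a-zA-Z]*(rf|fr)\b"),   # rm -rf / rm -fr variants
    re.compile(r"\bmkfs\b"),                      # filesystem wipes
    re.compile(r"\bdd\s+.*\bof=/dev/"),           # raw writes to devices
]

def is_safe(command: str) -> bool:
    """Deterministically reject destructive commands. Runs after the
    LLM proposes a command and before anything touches the host."""
    return not any(p.search(command) for p in DENY_PATTERNS)
```

The key property is that this check is regex-based and deterministic: no matter what the model hallucinates, the same input always produces the same verdict.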
Accomplishments that we're proud of
We shipped a true Defense-in-Depth pipeline. We proved it’s possible to hand an AI root-level problems to solve without giving it unchecked access. Our anti-hallucination filter, combined with cryptographic payload signing, ensures every generated command is vetted and signed before it can run.
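The payload-signing step can be sketched with RSA-PSS from the `cryptography` package. This is a simplified model under the assumption that the private key lives only on the backend and the executor holds just the public key; key management details are omitted:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Illustrative in-memory key pair; in practice the private key stays
# server-side and only signed payloads reach the executor.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def sign_command(command: str) -> bytes:
    """Sign an approved command so the executor can prove it came
    from the security pipeline, unmodified."""
    return private_key.sign(command.encode(), PSS, hashes.SHA256())

def verify_command(command: str, signature: bytes) -> bool:
    """Return True only if the signature matches this exact command."""
    try:
        public_key.verify(signature, command.encode(), PSS, hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```

Because verification is bound to the exact command bytes, a payload that is tampered with after filtering fails verification and never executes.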
What we learned
LLMs are brilliant diagnosticians, but execution requires a tight leash. We discovered Auth0 is exceptionally powerful for driving zero-trust workflows, and that autonomous AI is only as safe as the wrapper surrounding it.
What's next for Cerberus
We plan to package our security layers into an open-source Python SDK (cerberus_shield) and build a verified execution agent that sits directly on target hosts.
Bonus Blog Post: AI Needs a Seatbelt
Late nights staring at server crash logs taught us a vital lesson: autonomous AI without a safety catch is a disaster waiting to happen. We initially set out to build a smart IT remediation bot, but quickly realized the real innovation was the "nervous system" we built to control it. By wrapping the LLM in deterministic filters, strict MFA, and cryptography, we accidentally built a Zero-Trust Web Application Firewall for AI. The future isn't just capable agents—it's secure agents.