Inspiration

We are living in the era of AI coding agents. Tools like Antigravity, Cursor, and GitHub Copilot are writing code faster than any human can review it. While this speed is incredible, it comes with a hidden cost: security. AI agents are confident but often unaware of specific compliance rules. They might hardcode an AWS key for convenience, ignore a GDPR consent requirement, or import a vulnerable library. We realized that as the world moves toward autonomous coding, we need a dedicated "Safety Layer": a way to let AI run fast while ensuring it doesn't break the rules. This inspired ComplianceGuard.

What it does

ComplianceGuard is an intelligent Pre-Commit Hook and Audit Dashboard that acts as a semantic firewall for your codebase. It intercepts AI-generated code before it enters the repository.

  1. It blocks obvious threats (like API keys) instantly.
  2. It analyzes complex logic (like PII leaks) using the Gemini model.
  3. It educates the developer (or agent) by explaining why the code was rejected and calculating the potential financial risk (e.g., "$20M GDPR Fine").
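The first step above, the instant block, can be done with a deterministic scan before any model is involved. Here is a minimal sketch in Rust of what such a check could look like; the function names are illustrative, not ComplianceGuard's actual API, and the pattern shown covers only AWS-style access key IDs (the prefix `AKIA` followed by 16 uppercase alphanumeric characters):

```rust
// Hypothetical sketch of the instant "block obvious threats" step:
// a deterministic scan for AWS-style access key IDs, no regex crate needed.

fn looks_like_aws_key(token: &str) -> bool {
    // AWS access key IDs are 20 chars: "AKIA" + 16 uppercase alphanumerics.
    token.len() == 20
        && token.starts_with("AKIA")
        && token[4..]
            .chars()
            .all(|c| c.is_ascii_uppercase() || c.is_ascii_digit())
}

/// Returns true if any token in `source` looks like a hardcoded AWS key.
fn contains_hardcoded_key(source: &str) -> bool {
    source
        .split(|c: char| !c.is_ascii_alphanumeric())
        .any(looks_like_aws_key)
}

fn main() {
    // AWS's documented example key ID, safe to use in tests.
    let snippet = r#"let client = Client::new("AKIAIOSFODNN7EXAMPLE");"#;
    assert!(contains_hardcoded_key(snippet));
    assert!(!contains_hardcoded_key("let x = 42;"));
    println!("secret scan ok");
}
```

A check like this never needs a network call, which is what keeps step 1 instant; anything it cannot decide deterministically falls through to the semantic analysis in step 2.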

How we built it

We built a "Hybrid Engine" that balances speed and intelligence, developed with Google Antigravity.

  1. Rust Core (The Speed Layer): We used Rust for the immediate pre-commit interception. It runs high-speed regex and pattern matching to catch deterministic errors in milliseconds.

  2. Google Gemini Model (The Brain): For semantic understanding, we integrated Gemini. It reviews the context of the code: is this data export compliant? Is this unencrypted transfer risky? Gemini provides the reasoning that regex cannot.

  3. Next.js Dashboard: A modern, reactive UI where developers can audit blocked commits, view the "Financial Risk" assessment, and see the "Fix It" suggestions generated by Gemini.

Challenges we ran into

The biggest challenge was latency. A pre-commit hook cannot take forever. We had to architect the system so that simple checks happen locally in Rust (instantly), and only complex, high-stakes files are sent to Gemini for deep analysis. Orchestrating this handoff between a local system binary and a cloud AI model required careful design to keep the developer experience smooth.
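The handoff described above can be sketched as a three-way verdict: clear the file locally, block it locally, or escalate it to the cloud model. This is a hypothetical sketch; the enum, the heuristics for what counts as "high-stakes", and the function names are all illustrative, not ComplianceGuard's real interface:

```rust
// Illustrative routing logic: deterministic checks run first, in-process,
// and only ambiguous, high-stakes files pay the latency of a cloud call.

#[derive(Debug, PartialEq)]
enum Verdict {
    Clean,           // commit proceeds immediately
    Blocked(String), // deterministic violation, no API call needed
    Escalate,        // ambiguous: queue for deep analysis by the model
}

fn classify(path: &str, contents: &str) -> Verdict {
    // Fast local check: an obvious hardcoded-key marker blocks outright.
    if contents.contains("AKIA") {
        return Verdict::Blocked("possible hardcoded AWS key".into());
    }
    // Hypothetical "high-stakes" heuristic: files that appear to touch
    // user data are the only ones sent to the cloud model.
    let high_stakes = path.contains("export") || contents.contains("user_data");
    if high_stakes {
        Verdict::Escalate
    } else {
        Verdict::Clean
    }
}

fn main() {
    assert_eq!(classify("src/main.rs", "fn main() {}"), Verdict::Clean);
    assert_eq!(classify("src/export.rs", "send(user_data)"), Verdict::Escalate);
    println!("routing ok");
}
```

The key design point is that the expensive path is opt-in per file: most commits never leave the machine, so the hook stays fast in the common case.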

Accomplishments that we're proud of

  1. The Hybrid Engine: Successfully creating a bridge between low-level system code (Rust) and high-level cognitive AI (Gemini). Seeing them work in tandem (Rust for speed, Gemini for intellect) was a huge milestone.

  2. "Dollarizing" Risk: We're proud of the feature that translates a technical bug into a Financial Risk assessment (e.g., "$5M Risk"). It completely changes how non-technical stakeholders view security.

  3. Educational Impact: Instead of just saying "Error: Line 40", our tool behaves like a senior engineer, explaining the why and helping the user become a better developer.

What we learned

We learned that Context is King. A line of code that looks innocent in isolation (e.g., send_data(user)) can be a critical vulnerability depending on where it sends that data. Traditional static analysis tools miss this. By using Gemini's large context window, we could catch vulnerabilities that standard linters completely ignore.

What's next for ComplianceGuard

We plan to expand ComplianceGuard from a pre-commit hook into a full IDE Extension. Imagine the "AI Safety Mentor" living right inside VS Code, correcting the AI Agent as it types, rather than waiting for the commit. We also want to add support for "Auto-Fixing" via Pull Requests, where ComplianceGuard doesn't just block the code but proposes the secure alternative automatically.
