Inspiration
In modern software development, writing code is only half the battle—keeping it clean, understandable, and secure as projects scale is far more challenging.
As repositories grow, tracking code quality, enforcing standards, and catching vulnerabilities becomes complex and time-consuming.
This inspired us to build an AI agent that acts as a smart code guardian, helping developers maintain high-quality and secure code effortlessly.
What it does
Security and Compliance AI is an AI-driven agent designed to integrate deeply into the software development lifecycle by performing intelligent, context-aware code analysis. It systematically analyzes source code to detect complexity issues, security vulnerabilities, and non-compliance with coding standards using a hybrid approach that combines LLM-based reasoning with rule-based validation. The agent generates actionable, context-specific recommendations to improve code quality, maintainability, and security.
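The rule-based half of this hybrid approach can be illustrated with a minimal sketch: a stdlib-only cyclomatic-complexity proxy that counts branch points per function and flags functions exceeding a configurable threshold. All names and the threshold value here are illustrative, not the agent's actual rules.

```python
import ast

# Branch-point node types counted toward a simple cyclomatic-complexity proxy.
_BRANCHES = (ast.If, ast.For, ast.While, ast.Try, ast.With, ast.BoolOp)

def complexity(func: ast.FunctionDef) -> int:
    """1 + number of branch points inside the function body."""
    return 1 + sum(isinstance(node, _BRANCHES) for node in ast.walk(func))

def flag_complex_functions(source: str, threshold: int = 10) -> list[tuple[str, int]]:
    """Return (name, score) pairs for functions whose score exceeds the threshold."""
    tree = ast.parse(source)
    return [
        (node.name, complexity(node))
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and complexity(node) > threshold
    ]

sample = """
def tangled(x):
    if x:
        for i in range(x):
            while i:
                if i % 2:
                    i -= 1
"""
print(flag_complex_functions(sample, threshold=3))  # [('tangled', 5)]
```

A deterministic check like this catches objectively measurable issues; the LLM layer then handles the judgment calls the rules cannot express, such as whether the complexity is justified by the logic.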
Upon generating suggestions, it follows a human-in-the-loop workflow, requesting developer approval before making any modifications. Once approved, the agent can automatically apply code changes, update the relevant files, and merge them into the main repository while maintaining version control integrity. Additionally, it programmatically creates and manages issues for identified problems, enabling structured tracking, prioritization, and resolution within the development pipeline.
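Programmatic issue creation maps onto GitLab's REST API (`POST /projects/:id/issues`). Below is a minimal sketch assuming an instance URL, project ID, and token supplied by the caller; it only builds the request so the payload can be inspected without sending anything.

```python
import json
import urllib.request

GITLAB_URL = "https://gitlab.example.com"  # placeholder instance URL

def build_issue_request(project_id: int, token: str, title: str,
                        description: str, labels: list[str]) -> urllib.request.Request:
    """Build (but do not send) a POST request for GitLab's Issues API."""
    payload = {
        "title": title,
        "description": description,
        "labels": ",".join(labels),  # the API accepts a comma-separated string
    }
    return urllib.request.Request(
        url=f"{GITLAB_URL}/api/v4/projects/{project_id}/issues",
        data=json.dumps(payload).encode(),
        headers={"PRIVATE-TOKEN": token, "Content-Type": "application/json"},
        method="POST",
    )

req = build_issue_request(
    42, "glpat-placeholder",
    "Possible SQL injection in query builder",
    "Raw string interpolation found in the query path; parameterize the query.",
    labels=["security", "ai-agent"],
)
# Send with urllib.request.urlopen(req) once real credentials are configured.
print(req.full_url)
```

Keeping request construction separate from sending makes the agent's issue-creation step easy to test and to gate behind the approval workflow described above.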
Beyond code analysis, the agent provides repository-level intelligence, including locating relevant files within large codebases, retrieving the current status of issues, and monitoring merge requests and update workflows. This end-to-end capability positions Security and Compliance AI as a fully autonomous yet controllable assistant for maintaining secure, compliant, and high-quality code at scale.
How we built it
We built Security and Compliance AI as a custom agent within GitLab Duo, leveraging the Web IDE to integrate it seamlessly into the developer workflow. The agent is driven by a structured set of instructions and rule definitions that guide its analysis of code for security vulnerabilities, code complexity, and best-practice compliance.
To enhance reliability, we incorporated configurable parameters for detecting security risks and complexity thresholds, enabling more targeted and consistent evaluations. At the same time, the system is intentionally not strictly constrained by predefined rules, allowing the underlying AI model to apply contextual reasoning and adapt to different coding patterns and scenarios.
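As an illustration of what such configurable parameters might look like (the field names and values below are hypothetical, not GitLab Duo's actual configuration schema), the thresholds can live in a small structure that the rule layer consults, while the model reasons freely about anything the rules leave open:

```python
from dataclasses import dataclass

@dataclass
class AnalysisConfig:
    """Illustrative thresholds; names and defaults are hypothetical."""
    max_cyclomatic_complexity: int = 10
    max_function_length: int = 60           # lines
    min_severity_to_report: str = "medium"  # low | medium | high | critical

SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def should_report(config: AnalysisConfig, severity: str) -> bool:
    """Rule layer: surface only findings at or above the configured severity."""
    return (SEVERITY_ORDER.index(severity)
            >= SEVERITY_ORDER.index(config.min_severity_to_report))

cfg = AnalysisConfig()
print(should_report(cfg, "low"))   # False
print(should_report(cfg, "high"))  # True
```

A severity floor like this is one way to trade off noise against coverage: raising it suppresses low-confidence findings, lowering it surfaces everything for the model to reason about.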
This hybrid design—combining instruction-driven guidance with flexible AI reasoning—ensures that the agent remains both accurate and adaptive, capable of delivering meaningful insights while operating effectively across diverse codebases.
Challenges we ran into
As we were new to agentic AI, the first major challenge was understanding how to design and structure an effective AI agent capable of autonomous yet controlled decision-making. This required a significant learning curve in terms of agent behavior, instruction design, and integrating AI reasoning into a development workflow.
Another key challenge was defining and implementing meaningful thresholds and parameters, particularly for detecting security vulnerabilities and evaluating code complexity. Striking the right balance between strict rule enforcement and flexible AI-driven analysis was critical to avoid both false positives and missed issues.
Additionally, we had to carefully explore and determine which GitLab tools and capabilities (such as repositories, merge requests, issue tracking, and CI/CD workflows) were necessary to enable the agent to perform its tasks effectively. Integrating these components into a cohesive system while ensuring smooth interaction between the AI agent and the development pipeline was a complex but essential part of the process.
Accomplishments that we're proud of
We successfully designed and implemented an AI-powered agent within GitLab Duo that goes beyond traditional static analysis by combining context-aware AI reasoning with rule-based validation. This allowed us to create a system capable of not only identifying code issues but also understanding their impact and suggesting meaningful improvements.
One of our key achievements was building a human-in-the-loop workflow, where the agent intelligently proposes changes, seeks approval, and then autonomously updates and merges code—ensuring both control and automation. We also integrated issue creation and tracking, enabling a seamless connection between code analysis and project management.
Additionally, we were able to incorporate security vulnerability detection and code complexity evaluation with configurable parameters, making the agent adaptable to different project needs. Despite being new to agentic AI, we developed a scalable and extensible architecture that can integrate with core GitLab features like merge requests, repositories, and issue tracking.
Overall, we’re proud of building a solution that demonstrates how AI can act as a practical, reliable collaborator in real-world software development workflows, improving both code quality and security.
What we learned
Through this project, we learned that building effective agentic AI systems requires a careful balance between structured rule-based guidance and flexible AI reasoning. Providing clear and well-defined instructions significantly improves the reliability and quality of the agent’s output. We also realized that setting appropriate thresholds is crucial for accurately detecting security vulnerabilities and code complexity without generating excessive noise. Additionally, integrating the agent seamlessly with development tools like GitLab is essential to ensure it delivers practical value within real-world workflows.
What's next for Security and Compliance AI
Going forward, we plan to enhance the agent by integrating it into CI/CD pipelines for automated code reviews. We aim to expand its capabilities with advanced security rule sets (e.g., OWASP standards) and more precise code-quality metrics to improve accuracy and coverage.
Additionally, we will extend support to more programming languages and frameworks, making the agent more versatile. Ultimately, our goal is to evolve Security and Compliance AI into a fully autonomous, scalable assistant that continuously improves code quality, security, and developer productivity.
Built With
- claude
- gitlab
- gitlabduo