Inspiration
During group projects and hackathon work, we repeatedly faced merge requests failing for small but frustrating reasons—broken README files, unclear descriptions, or failed pipelines that were easy to fix but time-consuming to review. These issues distracted reviewers from real code quality. We wanted an AI system that could act like a careful teammate and handle these repetitive checks automatically.
What it does
AI Merge Risk Guardian reviews merge requests and evaluates how risky they are before merging. It analyzes file changes, pipeline status, and merge request metadata, then categorizes the risk level. For safe and predictable issues like documentation corruption, it automatically creates a fix branch and a new merge request, while clearly commenting on the original MR. This ensures faster reviews without compromising safety.
How we built it
We built the project using GitLab's AI Flow framework. The system is designed as a multi-agent workflow:
A risk analysis agent evaluates the merge request and assigns a risk level. An auto-fix agent is triggered only for safe cases and applies controlled fixes. All actions—reading files, creating branches, committing changes, and opening merge requests—are done using GitLab’s native tools to ensure reliability and traceability.
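As an illustration of the gating step described above, the risk classification can be sketched roughly as follows. The rules, thresholds, and function names here are simplified stand-ins for the agent's actual analysis, not the exact logic it runs:

```python
# Illustrative sketch of the risk-gating step: categorize a merge request
# and decide whether it qualifies for an automatic fix. These rules are
# simplified stand-ins for the agent's prompt-driven analysis.

SAFE_FIX_SUFFIXES = (".md", ".txt", ".rst")  # docs-only changes are auto-fixable

def classify_risk(changed_files, pipeline_passed, touches_ci_config):
    """Return a (risk_level, auto_fix_allowed) pair."""
    docs_only = all(f.endswith(SAFE_FIX_SUFFIXES) for f in changed_files)
    if touches_ci_config or not pipeline_passed:
        return "high", False   # never auto-touch pipelines or failing builds
    if docs_only:
        return "low", True     # e.g. a corrupted README: safe to auto-fix
    return "medium", False     # code changes go to a human reviewer

risk, auto_fix = classify_risk(
    ["README.md"], pipeline_passed=True, touches_ci_config=False
)
# risk == "low", auto_fix == True
```

Only the low-risk branch triggers the auto-fix agent; everything else is surfaced to reviewers with an explanation.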
Challenges we ran into
One of the biggest challenges was working with strict YAML schema validation in GitLab AI flows. Small syntax mistakes could break the entire pipeline. Designing the auto-fix logic safely was also challenging—we had to ensure the agent never modified application or business logic. Debugging pipeline failures and aligning agent behavior with real-world DevOps expectations took significant effort.
Accomplishments that we're proud of
- Successfully built a working AI-powered merge request reviewer
- Automated safe fixes without human intervention
- Implemented clear risk labeling and explanations
- Integrated AI decisions directly into GitLab's merge request workflow
- Ensured all fixes remain transparent and reviewable
What we learned
We learned how AI can be responsibly integrated into developer workflows when strict boundaries are enforced. The project taught us the importance of prompt design, validation checks, and safety rules. We also gained hands-on experience with CI/CD debugging and designing AI systems that support humans instead of replacing them.
What's next for AI Merge Risk Guardian
We plan to expand the system to suggest test improvements, provide deeper pipeline failure insights, and support customizable risk rules per project. In the future, AI Merge Risk Guardian could become a trusted assistant that continuously improves code quality and developer productivity across teams.
Built With
- ai
- ci/cd
- claude
- duo
- flows
- yaml