About the Project

Inspiration

Every developer knows the frustration of waiting hours (or days) for a code review. Merge requests pile up, reviewers are overloaded, and feedback quality varies wildly depending on who's available. As a CS student who has worked on team projects and contributed to open source, I've felt this bottleneck firsthand: it slows down the entire development cycle. I wanted to build something that doesn't replace human reviewers but acts as a first pass, catching the obvious issues instantly so that when a human reviewer sits down, they can focus on architecture and design decisions rather than spotting missing null checks or hardcoded secrets.

What It Does

The MR Review Accelerator is an AI-powered agent built on the GitLab Duo Agent Platform that automatically reviews merge requests. When triggered, it:

  • Reads the MR diff and analyzes code changes
  • Flags potential bugs, logic errors, and edge cases
  • Checks for security concerns (hardcoded secrets, injection risks, vulnerable patterns)
  • Evaluates code quality: naming conventions, readability, complexity
  • Suggests concrete improvements with code examples
  • Posts actionable, line-specific feedback directly on the MR (an example of the intended output is sketched below)
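To make that last point concrete, here is the kind of comment the agent aims to post. This is an illustrative mock-up rather than captured output; the file paths, line numbers, and findings are hypothetical.

```
src/auth/login.py:42 [security]
Hardcoded API key assigned to API_KEY. Load it from an environment
variable or a CI/CD variable instead, e.g.:

    api_key = os.environ["API_KEY"]

src/utils/parse.py:17 [bug]
items[0] raises IndexError when items is empty; guard with a length
check before indexing.
```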

How I Built It

The project uses GitLab's custom agent and flow system, with Anthropic's Claude powering the reasoning behind the reviews. The architecture is a multi-agent flow where each agent handles a specific review concern (security, code quality, performance), and their outputs are combined into a single cohesive review comment. The agent definitions are written in YAML and leverage GitLab's built-in tools for accessing MR diffs, file contents, and issue context.
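As a rough illustration of that architecture, a flow definition along these lines might look like the sketch below. The field names, agent ids, and tool names are assumptions made for illustration, not the exact GitLab Duo Agent Platform schema.

```yaml
# Illustrative sketch of a multi-agent review flow.
# Field names and tool ids are assumptions, not the exact
# GitLab Duo Agent Platform schema.
name: mr-review-accelerator
trigger: merge_request            # run when a review is requested on an MR

agents:
  - id: security-reviewer
    tools: [read_mr_diff, read_file]   # assumed built-in tool names
    prompt: >
      Review the diff for security issues only: hardcoded secrets,
      injection risks, and known-vulnerable patterns. Cite the exact
      file and line for every finding.

  - id: quality-reviewer
    tools: [read_mr_diff, read_file]
    prompt: >
      Review the diff for naming, readability, and complexity problems.
      Skip style nitpicks a linter would catch.

  - id: performance-reviewer
    tools: [read_mr_diff]
    prompt: >
      Flag changes with a likely performance impact, such as loops over
      large collections, N+1 queries, or blocking calls.

combine:
  id: review-composer
  prompt: >
    Merge the agents' findings into one cohesive review comment,
    de-duplicating overlapping issues and ordering them by severity.
  output: post_mr_comment          # assumed tool for posting the review
```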

Challenges

  • Keeping feedback actionable, not generic. The hardest part was prompt engineering the agents to give specific, line-referenced suggestions rather than vague advice like "consider improving readability" (see the prompt sketch after this list).
  • Handling large diffs. MRs with hundreds of changed lines required chunking strategies to stay within context limits.
  • Reducing noise. Early versions flagged too many nitpicks; tuning the agents to focus on what actually matters (bugs, security, real quality issues) took iteration.
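As an example of where that prompt tuning ended up, a reviewer agent's instructions looked roughly like the following. Only the rules themselves reflect the approach described above; the surrounding YAML structure and the hunk-by-hunk chunking wording are illustrative assumptions.

```yaml
# Illustrative reviewer prompt; the YAML structure is an assumed schema.
id: quality-reviewer
prompt: |
  For every finding you report:
  1. Cite the exact file path and line number from the diff.
  2. Describe the concrete problem, not a general principle.
  3. Include a corrected code snippet the author could apply directly.
  Do not report:
  - formatting or style issues a linter would catch
  - speculative concerns with no evidence in the diff
  If the diff is too large for one pass, review it hunk by hunk and
  carry forward only a short summary of earlier findings.
```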

What I Learned

This hackathon pushed me into areas I hadn't explored before: AI agent orchestration, DevSecOps workflows, and prompt engineering at a practical level. I learned how to design multi-agent flows where each agent has a clear responsibility, and how to make AI output useful in a real developer workflow rather than just impressive in a demo.

Built With

  • anthropic
  • flows
  • git
  • gitlab
  • gitlabapi
  • yaml