Project Story: Green Code Guardian
Inspiration
Technical debt has traditionally been viewed as an engineering nuisance and a silent tax on developer velocity and maintainability. But as global internet traffic surges, I realized it is much more severe than that: technical debt is a carbon footprint.
Every bloated React/Svelte component, duplicate SVG, dead export, and unoptimized array iteration sent over the wire wastes bandwidth, burns mobile battery life, and forces servers to consume additional energy. I was inspired to bridge the gap between "Shift-Left Security" and "Green Engineering." I wanted to prove that cleaning up legacy code isn't just about saving developers from headaches; it is about actively reducing the environmental impact of software.
What it does
Green Code Guardian is an autonomous GitLab worker that hunts down technical waste and security flaws, refactors them, and quantifies the exact environmental savings.
Instead of waiting for an engineer to review code, the Guardian lives inside the CI/CD pipeline. When tagged in a GitLab Issue, it:
- Deep-scans the target files for anti-patterns (e.g., callback hell, bloated inline assets, missing null-safety).
- Performs a heavy architectural refactor, e.g., extracting inline SVGs into reusable modules to adhere to DRY principles (sketched after this list) or masking hardcoded credentials.
- Autonomously force-commits the secure, optimized code to a new branch.
- Opens a Merge Request loaded with a "Sustainability Scorecard," calculating exactly how many grams of CO2 were saved by the optimization.
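To make the DRY refactor above concrete, here is a minimal before/after sketch in TypeScript; the component names and the icons module are illustrative assumptions, not the agent's actual output.

```typescript
// Hypothetical before/after of the duplicate-SVG extraction (names are illustrative).

// BEFORE: the same inline SVG string pasted into several components,
// shipped over the wire once per copy.
const headerLogo = `<svg viewBox="0 0 24 24"><!-- ~1.4KB of paths --></svg>`;
const footerLogo = `<svg viewBox="0 0 24 24"><!-- ~1.4KB of paths --></svg>`;

// AFTER: one reusable module, imported wherever the icon is needed,
// so the markup exists exactly once in the bundle.
// icons/logo.ts
export const LOGO_SVG = `<svg viewBox="0 0 24 24"><!-- ~1.4KB of paths --></svg>`;

// Header.ts / Footer.ts would then do:
// import { LOGO_SVG } from "./icons/logo";
```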
How I built it
I architected Green Code Guardian using GitLab's custom AI Flows (flow.yml) and the AI Catalog framework. I moved away from the concept of a "passive chatbot" and instead built a state machine using GitLab's OneOffComponent.
This created a forced-execution contract:
- The Analyzer: Reads the target files via the `read_file` tool to identify vulnerabilities.
- The Refactorer: Re-engineers the file to shift security left and optimize bundle sizes.
- The MR Creator: Uses the `create_commit`, `create_merge_request`, and `create_issue_note` tools to autonomously push the changes, generate the Sustainability Scorecard, and link the operation back to the origin issue (a REST-level sketch of this stage follows).
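Inside the flow these stages call GitLab's agent tools directly, but the final stage maps closely onto GitLab's standard REST API. Below is a hedged TypeScript sketch of the MR Creator stage, assuming a GITLAB_TOKEN environment variable; the project ID, branch names, file paths, and scorecard text are illustrative placeholders, not the actual flow implementation.

```typescript
// Hedged sketch: the MR Creator stage expressed against GitLab's REST API.
const GITLAB = "https://gitlab.com/api/v4";
const headers = {
  "PRIVATE-TOKEN": process.env.GITLAB_TOKEN ?? "",
  "Content-Type": "application/json",
};

async function openGuardianMR(projectId: number, issueIid: number) {
  // 1. Commit the refactored file to a fresh branch (create_commit).
  await fetch(`${GITLAB}/projects/${projectId}/repository/commits`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      branch: "guardian/refactor-svg",
      start_branch: "main",
      commit_message: "refactor: extract duplicate SVG, mask credentials",
      actions: [{ action: "update", file_path: "src/Header.ts", content: "/* optimized */" }],
    }),
  });

  // 2. Open the Merge Request carrying the Sustainability Scorecard (create_merge_request).
  const mr = await fetch(`${GITLAB}/projects/${projectId}/merge_requests`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      source_branch: "guardian/refactor-svg",
      target_branch: "main",
      title: "Green Code Guardian: sustainability refactor",
      description: "## Sustainability Scorecard\n- Bundle size: -1.4KB\n- Estimated CO2 saved per 1,000 views: see calculation",
    }),
  }).then((r) => r.json());

  // 3. Link the operation back to the originating issue (create_issue_note).
  await fetch(`${GITLAB}/projects/${projectId}/issues/${issueIid}/notes`, {
    method: "POST",
    headers,
    body: JSON.stringify({ body: `Guardian opened ${mr.web_url} with the Sustainability Scorecard.` }),
  });
}
```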
Challenges I ran into
The hardest technical hurdle was working within LLM context limits and API payload restrictions during the `create_commit` tool execution.
When dealing with monolithic files (like a 160-line frontend component), the agent would frequently attempt to rewrite the entire file at once. The oversized payload broke the JSON formatting of the tool call, so the agent pushed the workflow forward but generated an empty "0 changes" Merge Request.
I overcame this by heavily iterating on the prompt engineering inside the flow.yml. I implemented strict multi-phase action constraints, hardcoded branch parameters, and taught the model to use targeted, incremental refactoring rather than attempting to rewrite large files in one shot.
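In practice, the constraint boils down to keeping each commit payload small and splitting large rewrites into several targeted actions. Here is a rough TypeScript sketch of that batching guard, where the 16KB budget is an assumed threshold of mine rather than a documented GitLab limit:

```typescript
// Hedged sketch of the incremental-refactoring constraint: instead of one giant
// commit payload, file updates are batched so each create_commit call stays small.
interface FileAction {
  action: "update" | "create";
  file_path: string;
  content: string;
}

// Assumed per-call budget; the real limit is whatever breaks the tool-call JSON.
const MAX_PAYLOAD_BYTES = 16 * 1024;

function batchActions(actions: FileAction[]): FileAction[][] {
  const batches: FileAction[][] = [];
  let current: FileAction[] = [];
  let size = 0;

  for (const action of actions) {
    const bytes = Buffer.byteLength(JSON.stringify(action), "utf8");
    if (current.length > 0 && size + bytes > MAX_PAYLOAD_BYTES) {
      batches.push(current);
      current = [];
      size = 0;
    }
    current.push(action);
    size += bytes;
  }
  if (current.length > 0) batches.push(current);
  return batches; // each batch becomes one small, targeted commit call
}
```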
Accomplishments that I'm proud of
I am incredibly proud of proving that an AI agent can reliably manage a complete end-to-end SDLC loop. The agent doesn't just act as an automated linter; it behaves like a senior engineer.
Beyond the code, I successfully tied abstract engineering concepts (like DRY principles and bundle optimization) to tangible, real-world metrics (grams of CO2 saved). Seeing the agent successfully extract redundant logic, secure vulnerable APIs, and generate a mathematical sustainability scorecard in the Merge Request was a massive victory.
What I learned
I learned that constraint is power. Autonomous AI agents in the SDLC perform substantially better when bound to strict, narrow toolset contracts rather than open-ended conversations.
I also learned that "Green Metrics" actually change developer culture. Telling a developer "this file is messy" often goes ignored. Showing them the mathematical environmental impact changes behavior. I modeled the sustainability impact using the following equation:
Let $\Delta B$ be the total reduction in bundle size (in kilobytes), $V$ be the projected number of page views/executions, and $E$ be the energy consumed per kilobyte transferred. The carbon footprint reduction ($S_{CO_2}$) is that saved energy multiplied by the grid's carbon intensity ($C_{intensity}$):
$$ S_{CO_2} = \Big( \Delta B \times V \times E \Big) \times C_{intensity} $$
Quantifying that a simple 1.4KB reduction translates to tangible grams of CO2 saved for every 1,000 views proved my hypothesis: efficient code is sustainable code, and sustainable code is profitable code.
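As a worked example of the equation, here is a small TypeScript calculation for the 1.4KB / 1,000-view case; the energy-per-kilobyte and grid-intensity constants are assumed illustrative values, not measured ones.

```typescript
// Worked example of S_CO2 = (ΔB × V × E) × C_intensity.
// The constants below are illustrative assumptions for the sketch.
const deltaB = 1.4;            // ΔB: bundle reduction in KB
const views = 1_000;           // V: projected page views
const energyPerKB = 0.000002;  // E: assumed kWh consumed per KB transferred
const gridIntensity = 450;     // C_intensity: assumed grams of CO2 per kWh

const gramsSaved = deltaB * views * energyPerKB * gridIntensity;
console.log(`${gramsSaved.toFixed(2)} g CO2 saved per 1,000 views`); // ≈ 1.26 g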
What's next for Green Code Guardian
The next step is integrating the Guardian directly into the pre-commit webhook layer so it acts globally across the entire monorepo automatically, rather than requiring an explicit user trigger via GitLab issues. I also plan to train custom embedding models specifically on calculating energy costs for cloud infrastructure configurations (like Kubernetes manifests and Terraform scripts), allowing the Guardian to optimize infrastructure code for carbon savings just as efficiently as it refactors application logic.
Built With
- ai-catalog-framework
- gitlab-ai-flow-(flow.yml)
- gitlab-api
- gitlab-duo
- gitlab-issues
- gitlab-merge-requests
- llm-(large-language-models)
- node.js
- svelte
- typescript
- yaml

