What it does
GreenCode Optimizer is a multi-agent AI system built on the GitLab Duo Agent Platform that automatically analyses every merge request for energy-inefficient code patterns, generates working green refactorings with real evidence citations, and calculates the environmental and financial cost of not fixing them.
When a developer opens a merge request and mentions @ai-greencode-optimizer, three specialised agents run in sequence:
- **Energy Profiler** scans the MR diff for 16 energy anti-patterns across Python, JavaScript, TypeScript, and Terraform. After finding a pattern, it greps the entire repository to find every call site and calculates the amplified daily energy cost.
- **Optimization Engine** reads the original source code for each finding, generates a working before/after refactoring, and cites real-world evidence from the Green Software Foundation, Mozilla Web Sustainability Guidelines, Django documentation, and the AWS Well-Architected Framework.
- **Impact Reporter** calculates a weighted sustainability score (A–F) across four dimensions (Algorithm Efficiency, Memory Patterns, I/O Efficiency, Network Patterns), formats a comprehensive report, and posts it as an MR comment. If any finding is CRITICAL, it automatically creates a follow-up GitLab issue.
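As a sketch of how a weighted score like this can combine the four dimensions, here is a minimal Python version. The dimension names come from the report above; the weights and grade boundaries are illustrative assumptions, not the project's actual formula.

```python
# Hypothetical weighted sustainability score: the four dimension names
# match the report, but these WEIGHTS and grade cut-offs are assumed.
WEIGHTS = {
    "algorithm_efficiency": 0.35,
    "memory_patterns": 0.25,
    "io_efficiency": 0.25,
    "network_patterns": 0.15,
}

GRADES = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (50, "E")]

def sustainability_score(dimension_scores: dict[str, float]) -> tuple[float, str]:
    """Combine per-dimension scores (0-100) into a weighted total and letter grade."""
    total = sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)
    for threshold, grade in GRADES:
        if total >= threshold:
            return total, grade
    return total, "F"
```

A codebase strong on I/O but weak on algorithms still gets pulled down hard, because algorithm efficiency carries the largest weight in this sketch.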
The project also includes a standalone CLI scorer that works without GitLab: developers can run `python scripts/green_scorer.py --file mycode.py` to get instant feedback with no setup required.
Our Key Differentiator: Cross-File Amplification Analysis
Most code analysis tools examine files in isolation. GreenCode Optimizer does not.
After detecting an anti-pattern like an O(n²) search, the Energy Profiler greps the entire repository for all call sites. If that function is called inside a cron job running every 60 seconds, the real cost isn't O(n²); it's O(n² × 1,440 calls/day).
In our demo repository, a simple nested loop in order_processor.py is called by batch_handler.py 1,440 times per day. GreenCode reveals this costs 14,600 kWh/year, equivalent to driving from London to Mumbai. A single-file scanner would miss this entirely.
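The amplification arithmetic is simple to sketch. The per-call energy figure below is a hypothetical assumption chosen for illustration; only the 1,440 calls/day comes from the demo scenario.

```python
# Back-of-envelope amplification arithmetic.
# ASSUMPTION: kwh_per_call is an illustrative figure, not a measured value.
calls_per_day = 24 * 60          # cron job firing every 60 seconds -> 1,440 calls/day
kwh_per_call = 0.0278            # assumed energy cost of one O(n^2) invocation

daily_kwh = calls_per_day * kwh_per_call
yearly_kwh = daily_kwh * 365

print(f"{daily_kwh:.1f} kWh/day -> {yearly_kwh:,.0f} kWh/year")
```

Under that assumed per-call cost, the yearly total lands in the same ballpark as the demo's 14,600 kWh/year figure, which is the whole point: a per-call cost that looks negligible becomes enormous once the call frequency is accounted for.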
How we built it
Platform: GitLab Duo Agent Platform, with 3 custom agents + 1 custom flow, all published as Public in the AI Catalog.
AI Model: Anthropic Claude (built into GitLab Duo; all agents use it by default).
Architecture Pattern: Sequential Pipeline (also known as Pipes and Filters). Each agent processes the output of the previous agent in a deterministic sequence. Additional patterns used: ReAct (tool-calling reasoning loops within each agent), Generator-Critic (Agent 2 validates refactorings before outputting), and Human-in-the-Loop (report posted as MR comment for developer review).
Carbon Data: Multi-source fallback chain: UK National Grid Carbon Intensity API (real-time, free, no auth required) → emissions.dev → Electricity Maps → hardcoded EEA/EPA averages. The primary UK API returns real production data with no sandbox disclaimers.
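A minimal sketch of such a fallback chain, using only the Python standard library. The UK endpoint and its JSON shape are the public API; the hardcoded fallback constant here is an illustrative EEA/EPA-style average, not necessarily the project's exact value, and the other sources in the chain are elided.

```python
import json
import urllib.request

# ASSUMPTION: 231.0 is an illustrative European grid average, not the
# project's actual hardcoded constant.
FALLBACK_GCO2_PER_KWH = 231.0

def uk_grid_intensity() -> float:
    """Real-time gCO2/kWh from the UK National Grid Carbon Intensity API."""
    url = "https://api.carbonintensity.org.uk/intensity"
    with urllib.request.urlopen(url, timeout=5) as resp:
        data = json.load(resp)
    return float(data["data"][0]["intensity"]["actual"])

def carbon_intensity() -> float:
    """Try each live source in order; fall back to a hardcoded average."""
    for source in (uk_grid_intensity,):  # then emissions.dev, Electricity Maps, ...
        try:
            return source()
        except Exception:
            continue  # network error, auth failure, null data -> next source
    return FALLBACK_GCO2_PER_KWH
```

Because every source is wrapped in the same try/except, adding a new provider is just appending another zero-argument callable to the tuple.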
Methodology: Aligned with the Green Software Foundation Software Carbon Intensity (SCI) specification (ISO/IEC 21031:2024). Same methodology used by Google Cloud Carbon Footprint and Microsoft Azure Emissions Impact Dashboard.
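The SCI formula itself is public: SCI = ((E × I) + M) per R, where E is energy consumed, I is location-based carbon intensity, M is embodied emissions, and R is the functional unit (e.g. per API call). A direct Python rendering:

```python
def sci(energy_kwh: float, intensity_g_per_kwh: float,
        embodied_g: float, functional_units: float) -> float:
    """Software Carbon Intensity per the GSF spec: SCI = ((E * I) + M) / R.

    Operational emissions (energy * grid intensity) plus embodied
    emissions, normalised per functional unit (gCO2 per unit).
    """
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units
```

For example, 10 kWh at 100 gCO₂/kWh plus 500 g of embodied emissions, spread over 1,000 API calls, gives 1.5 gCO₂ per call.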
Anti-Pattern Detection: 16 anti-patterns across 4 categories (Algorithm Complexity, Network & I/O, Memory & Allocation, Infrastructure), plus 13 green pattern rewards across 4 categories (Caching, Efficient I/O, Memory Efficiency, Efficient Data Access). All detection is regex-based for speed and reliability; no external ML models or AST parsing required.
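A regex-based detector of this kind boils down to a table of named patterns scanned against the source. The two rules below are simplified illustrations, not the project's actual 16-rule set:

```python
import re

# ASSUMPTION: these two patterns and their names are illustrative
# examples of regex-based detection, not the project's real rules.
ANTI_PATTERNS = {
    "nested-loop-search": re.compile(
        r"for\s+\w+\s+in\s+.*:\s*\n\s+for\s+\w+\s+in\s+"),
    "string-concat-in-loop": re.compile(
        r"for\s+.*:\s*\n\s+\w+\s*\+=\s*['\"]"),
}

def scan(source: str) -> list[str]:
    """Return the names of all anti-patterns whose regex matches the source."""
    return [name for name, rx in ANTI_PATTERNS.items() if rx.search(source)]
```

The trade-off is deliberate: regexes can misfire on unusual formatting, but they run in milliseconds on any diff with zero dependencies, which matters inside an agent loop.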
Testing: 34+ automated tests covering anti-pattern detection, green pattern rewards, scoring formula accuracy, CLI argument handling, and API fallback behaviour.
Challenges we ran into
The biggest challenge was the GitLab Duo custom flow YAML schema. The flow registry v1 specification requires specific top-level fields (version, environment, prompts) that weren't immediately obvious from the documentation. We debugged schema validation errors by studying GitLab's foundational flows and the flow registry documentation, eventually getting the correct structure with version: "v1", environment: ambient, and inline prompt definitions.
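A rough sketch of that top-level structure, for orientation only: the version, environment, and prompts fields are the ones named above, while the prompt-entry fields shown are illustrative placeholders rather than the exact schema.

```yaml
# Illustrative shape only; consult the GitLab flow registry docs for the
# authoritative schema. Prompt-entry fields here are placeholders.
version: "v1"
environment: ambient
prompts:
  - name: energy-profiler
    content: |
      You are the Energy Profiler agent. Scan the MR diff for
      energy anti-patterns, then grep the repository for call sites...
```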
Cross-file analysis was also challenging: the Energy Profiler needs to grep the entire repository for call sites after finding an anti-pattern, but the grep tool returns raw text that needs careful parsing to extract file paths, line numbers, and calling context. We solved this with structured output formatting in the system prompt.
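The parsing problem is the familiar one of turning grep-style `path:line:match` text into structured records. A minimal sketch of the idea (the project does this via prompt-level formatting rather than this exact code):

```python
import re

# Parse grep-style "path:line:match" output into structured call sites.
GREP_LINE = re.compile(r"^(?P<path>[^:]+):(?P<line>\d+):(?P<context>.*)$")

def parse_call_sites(grep_output: str) -> list[dict]:
    """Extract {path, line, context} records from raw grep -n output."""
    sites = []
    for raw in grep_output.splitlines():
        m = GREP_LINE.match(raw)
        if m:
            sites.append({
                "path": m["path"],
                "line": int(m["line"]),
                "context": m["context"].strip(),
            })
    return sites
```

Lines that don't match the pattern (binary-file notices, blank lines) are silently skipped, which keeps the agent's downstream reasoning clean.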
What we learned
Building on the GitLab Duo Agent Platform taught us that the quality of system prompts IS the quality of the product. Unlike traditional software where you write code, here you write instructions, and the precision of those instructions determines whether the agent catches 3 patterns or 16. We also learned that the Sequential Pipeline pattern maps perfectly to GitLab's flow architecture, and that cross-file analysis (using grep + read_file) is a powerful differentiator that most single-agent tools miss.
We also gained deep familiarity with the Green Software Foundation's SCI specification and real-world carbon intensity data sources. The variation in grid carbon intensity between regions (France at 52 gCO₂/kWh vs India at 708 gCO₂/kWh) means the same code has vastly different environmental impact depending on where it runs.
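The regional difference is easy to quantify with the intensity figures quoted above, applied here to the demo's amplified nested-loop cost:

```python
# Same workload, different grids: arithmetic using the figures quoted
# above (France 52 gCO2/kWh, India 708 gCO2/kWh, demo cost 14,600 kWh/yr).
annual_energy_kwh = 14_600

france_kg = annual_energy_kwh * 52 / 1000    # gCO2 -> kg
india_kg = annual_energy_kwh * 708 / 1000

print(f"France: {france_kg:,.0f} kg CO2/yr, India: {india_kg:,.0f} kg CO2/yr "
      f"({india_kg / france_kg:.1f}x difference)")
```

The same unfixed anti-pattern emits over 13 times more CO₂ when the workload runs on a carbon-heavy grid, which is why the tool reports location-aware impact rather than a single global number.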
What's next for GreenCode Optimizer
v1.1: Expand to 44+ patterns including database-specific waste (unindexed queries, SELECT *), frontend energy patterns (unoptimised images, bundle bloat), and concurrency issues (thread creation in loops, missing timeouts).
v2.0: Add AST parsing (tree-sitter) for deeper analysis: dead code detection, recursive functions without memoisation, and type-aware optimisations. Integrate with GitLab's Knowledge Graph for codebase-wide dependency analysis.
v3.0: Runtime profiling integration: actual CPU measurements from CI test runs, carbon-aware CI scheduling (run pipelines when the grid is greenest), and team-level sustainability dashboards with trend tracking over time.
Built with
- GitLab Duo Agent Platform (Custom Agents, Custom Flows, AI Catalog, AGENTS.md)
- Anthropic Claude (via GitLab Duo)
- Python 3.11+
- UK National Grid Carbon Intensity API
- Green Software Foundation SCI Specification (ISO/IEC 21031:2024)
Try it
No setup needed: clone the repo and run `python scripts/green_scorer.py --file demo/src/services/order_processor.py`
The README.md file lists all the commands you need.
With GitLab: Mention @ai-greencode-optimizer in any MR comment to get a full sustainability report.