Inspiration
AI development doesn't fail loudly. It drifts silently.
We discovered this the hard way. During a Cloudflare Workers project, we introduced a CMS using what seemed like the safest approach—building the new system in a separate folder to avoid breaking existing functionality.
Nothing broke. But everything drifted.
Over weeks, the system degraded:
- Multiple implementations of the same feature appeared
- Workarounds emerged when constraints surfaced (CMS size limits)
- Data flows became inconsistent across components
- A simple client request became impossible to implement
Recovery took a week. Debugging continues.
The problem wasn't bad code—it was invisible misalignment. Tests passed. Code ran. But the system had silently diverged from its intended design.
That's when we realized: AI accelerates development, but it also accelerates drift. We needed a way to detect misalignment before it becomes a crisis.
What it does
DriftGuard is a governance layer for AI-driven development that validates alignment, not just correctness.
While traditional tools ask "does the code work?", DriftGuard asks "is this still the right implementation?"
It monitors three critical dimensions:
- Structural Drift — code written in unauthorized locations or violating architectural boundaries
- Intent Drift — implementation no longer matching the original feature requirements
- Data Drift — frontend outputs diverging from the backend/CMS source of truth
DriftGuard transforms feature briefs into enforceable contracts, then continuously validates changes against those contracts—surfacing misalignment before merge, before production, before costly rework.
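As a rough illustration of what an "enforceable contract" might look like, here is a minimal sketch in stdlib-only Python. The `Contract` and `Rule` names, fields, and the example rules are assumptions for illustration, not DriftGuard's actual data model:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a feature brief distilled into an enforceable
# contract. Names and fields are illustrative, not DriftGuard's real API.

@dataclass
class Rule:
    kind: str         # "structural" | "intent" | "data"
    description: str  # human-readable requirement taken from the brief
    pattern: str      # e.g. a path glob or keyword the validator checks

@dataclass
class Contract:
    feature: str
    rules: list[Rule] = field(default_factory=list)

# Example: a contract that would have caught the CMS-migration drift.
cms_contract = Contract(
    feature="CMS migration",
    rules=[
        Rule("structural", "All CMS code lives under src/cms/", "src/cms/*"),
        Rule("data", "Frontend reads content only from the CMS API", "cms_api"),
    ],
)
```

Validating each merge request against rules like these is what turns a one-time brief into a continuously checked agreement.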
The result: teams can move fast with AI assistance while maintaining system coherence.
How we built it
Development approach:
- Claude Code (via VS Code) for implementation and architecture
- ChatGPT for concept refinement, product framing, and narrative development
- Model Context Protocol (MCP) for intelligent context management
MCP integrations:
- NotebookLM MCP — Extracts requirements from feature briefs and grounds them into enforceable contracts
- Context7 MCP — Injects structured codebase context for accurate drift detection
- Sequential Thinking MCP — Applies stepwise validation logic to analyze changes systematically
Technical architecture:
- Contract Generator — Parses feature briefs (Markdown) into validation rules using pattern matching and constraint extraction
- Drift Validator — Analyzes commit diffs and MR changes against contracts to detect structural, intent, and data drift
- Report Generator — Produces actionable reports for terminal output and GitLab MR comments
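To make the Drift Validator's structural check concrete, here is a hedged stdlib-only sketch. It assumes the contract exposes allowed path globs per feature; the function and class names are illustrative, not the actual implementation:

```python
import fnmatch
from dataclasses import dataclass

# Illustrative sketch of structural drift detection: flag changed files
# that fall outside a contract's authorized locations. Not DriftGuard's
# real validator; names are assumptions.

@dataclass
class DriftFinding:
    path: str
    reason: str

def check_structural(changed_paths: list[str],
                     allowed_globs: list[str]) -> list[DriftFinding]:
    """Return a finding for every changed file outside the allowed globs."""
    findings = []
    for path in changed_paths:
        if not any(fnmatch.fnmatch(path, g) for g in allowed_globs):
            findings.append(
                DriftFinding(path, "file outside authorized locations"))
    return findings

# A diff touching an unauthorized duplicate is flagged; the in-bounds
# file passes silently.
findings = check_structural(
    ["src/cms/models.py", "src/legacy/cms_copy.py"],
    ["src/cms/*"],
)
```

Intent and data drift need richer signals (requirement keywords, source-of-truth comparisons), but they follow the same shape: diff in, contract rules applied, findings out.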
Platform integration:
- Designed as a GitLab Duo custom flow for seamless CI/CD integration
- Deployed on Google Cloud Run with Cloud Storage for contract persistence
- Tested against real GitHub and GitLab repositories to validate production readiness
MVP philosophy:
- Zero external Python dependencies (stdlib only)
- Focused on solving one real-world problem exceptionally well
- Built for extensibility without overengineering the initial solution
Challenges we ran into
1. Reframing the problem
The biggest challenge wasn't technical—it was conceptual.
Most tools validate correctness (does code run? do tests pass?), but we needed to validate alignment (does this still match our intent?). This required shifting from binary pass/fail thinking to continuous coherence monitoring.
2. Defining "drift" clearly
Drift is inherently fuzzy. We had to distill it into three concrete, measurable categories (structural, intent, data) that developers could understand and act on—without creating alert fatigue.
3. Balancing automation with trust
Too much autonomy and the system becomes unpredictable. Too little and it becomes noise. We calibrated DriftGuard to flag misalignment without blocking progress, giving teams visibility without becoming a bottleneck.
4. Avoiding overengineering
It's tempting to build complex multi-agent systems. Instead, we focused on demonstrating one clear failure case (our CMS migration) and solving it exceptionally well. This constraint forced clarity.
5. Making it actionable
Detecting drift is only valuable if developers can fix it. We designed reports to explain what drifted, why it matters, and where to look—turning abstract misalignment into concrete next steps.
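A report in that what/why/where shape could be rendered for a GitLab MR comment roughly like this sketch; the field names and layout are assumptions for illustration:

```python
# Hedged sketch of an actionable drift report formatted as Markdown for
# a GitLab MR comment. Field names ("type", "where", ...) are illustrative.

def format_report(findings: list[dict]) -> str:
    lines = ["## DriftGuard Report", ""]
    if not findings:
        lines.append("No drift detected.")
    for f in findings:
        lines.append(f"- **{f['type']} drift** in `{f['where']}`")
        lines.append(f"  - What drifted: {f['what']}")
        lines.append(f"  - Why it matters: {f['why']}")
    return "\n".join(lines)

report = format_report([{
    "type": "structural",
    "where": "src/legacy/cms_copy.py",
    "what": "duplicate CMS implementation outside src/cms/",
    "why": "parallel implementations caused the original migration failure",
}])
```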
Accomplishments that we're proud of
✅ Named the invisible problem — Defined "drift" as a distinct failure category in AI-assisted development, separate from bugs or test failures
✅ Solved a real problem — Built DriftGuard in response to an actual project failure, not a theoretical concern
✅ Created a clear taxonomy — Distilled drift into three understandable, measurable types (structural, intent, data)
✅ Shifted validation paradigm — Demonstrated that alignment validation is as critical as correctness validation in modern development
✅ Designed for GitLab's future — Built specifically for GitLab Duo Agent Platform, showcasing how custom flows can add governance to AI-driven workflows
✅ Achieved zero-dependency MVP — Delivered a working prototype using only Python stdlib, proving the concept before adding complexity
✅ Bridged technical and narrative — Created a solution that resonates with both developers (concrete drift detection) and decision-makers (risk mitigation)
What we learned
Modern development failures are gradual, not sudden.
Systems rarely break catastrophically. Instead, they slowly become misaligned—one small compromise at a time, one workaround at a time, one "temporary" solution at a time.
AI accelerates drift exponentially:
- More code generated faster → less human review per line
- More changes in parallel → harder to track system-wide coherence
- More complexity introduced → easier for intent to get lost in implementation
The result isn't failure—it's silent degradation.
The real need isn't better code generation. It's visibility into how systems evolve.
DriftGuard exists to provide that visibility. Not by preventing change, but by making misalignment visible before it becomes irreversible.
We learned that the next frontier in AI-assisted development isn't speed—it's coherence at scale.
And that governance tools shouldn't slow teams down. They should give teams confidence to move faster because they can see when they're drifting.
Notes for Submission
Key differentiators to emphasize:
- Solves a real, painful problem (not a hypothetical)
- Introduces a new problem category ("drift" vs "bugs")
- Complements rather than replaces existing tools
- Designed specifically for GitLab Duo Agent Platform
- Demonstrates MCP integration in production context
- Balances automation with developer control
Hackathon categories:
- Anthropic: Uses Claude via GitLab Duo Agent Platform + MCP tools
- Google Cloud: Built on Cloud Run + Cloud Storage
- GitLab: Custom flow for GitLab Duo Agent Platform
Call to action: DriftGuard turns AI's biggest weakness—speed without oversight—into its greatest strength: fast iteration with continuous alignment.
Built With
- agent
- anthropic
- api
- claude
- cloud
- container
- context
- context7
- docker
- duo
- flows
- gcp
- gitlab
- issue-fetching
- json
- library
- manager
- markdown
- mcp
- model
- notebooklm
- platform
- protocol
- pytest
- python
- registry
- run
- secret
- sequential
- standard
- storage
- thinking
- v4