Inspiration
Every software team has the same silent war happening daily.
The PM writes a ticket: "Add the love connection thing — make it magical." The dev opens it, stares, and thinks: "What database schema? REST or WebSocket? What does magical mean in an acceptance criterion?"
Three Slack threads, two meetings, and one misbuilt feature later — the sprint is over and nobody is happy.
We've lived this. The gap between product thinking and engineering execution isn't a people problem — it's a translation problem. And we built LoopAgent to be that translator.
What it does
LoopAgent is a GitLab AI agent that sits between your PM and your dev team — automatically acting on every new issue the moment it's opened.
Here's the full loop:
- PM opens a GitLab issue — in plain language, no technical spec required
- LoopAgent responds instantly — posts a comment: "I'm on it!"
- Posts an implementation plan — scope, approach, exact files it will touch — before writing a single line of code, so both PM and dev can course-correct early
- Generates code scaffolding — implementation file + test file, using Claude claude-opus-4-6 with adaptive thinking
- Creates a branch and commits — all files in one atomic commit
- Opens a Draft MR — with a detailed description, review notes, and auto-assigns the most relevant reviewer based on git history
- Labels the issue `in-progress` automatically
- Listens for feedback — if a reviewer comments on the MR, LoopAgent reads it, updates the code, and pushes a new commit. Automatically.
Slash commands let anyone control the agent from inside GitLab comments:
- `/loop regenerate` — start fresh with new scaffolding
- `/loop explain` — post an analysis without writing any code
- `/loop tests-only` — generate only test files
- `/loop skip` — opt this issue out
A `/dashboard` endpoint tracks issues processed, MRs created, success rate, and average generation time in real time.
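The aggregation behind such a dashboard can live in one small helper. This is a minimal sketch with illustrative names (`summarize` and the `stats` fields are our assumptions, not LoopAgent's actual internals):

```python
# Sketch of the metric aggregation a /dashboard endpoint could serve.
# Field names are illustrative, not LoopAgent's actual internals.
def summarize(stats):
    """stats: running counters kept by the webhook pipeline."""
    processed = stats["issues_processed"]
    times = stats["generation_times"]  # seconds per completed pipeline run
    return {
        "issues_processed": processed,
        "mrs_created": stats["mrs_created"],
        # Guard against division by zero before any issue has arrived.
        "success_rate": stats["mrs_created"] / processed if processed else 0.0,
        "avg_generation_seconds": sum(times) / len(times) if times else 0.0,
    }
```

A Flask route would then just `jsonify(summarize(stats))` on each request.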
How we built it
Core stack:
- Flask — lightweight webhook server, returns 200 instantly and runs the full pipeline in a background thread (no GitLab timeouts)
- Anthropic Claude claude-opus-4-6 — code generation with adaptive thinking enabled; streaming output so generation is visible in server logs
- python-gitlab — all GitLab operations: branch creation, atomic multi-file commits, draft MR creation, issue comments, labels, reviewer assignment
- GitLab Webhooks — triggers on `issue`, `note` (comments), and `merge_request` events
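The ACK-then-work pattern in the Flask bullet above can be sketched in a few lines; `run_pipeline` here is a hypothetical stand-in for the real generation pipeline:

```python
import threading
import time

from flask import Flask, jsonify, request

app = Flask(__name__)

def run_pipeline(payload):
    # Placeholder for the real work: index the repo, call Claude,
    # commit files, open the draft MR.
    time.sleep(0.1)

@app.route("/webhook", methods=["POST"])
def webhook():
    event = request.headers.get("X-Gitlab-Event", "")
    payload = request.get_json(silent=True) or {}
    # Hand the payload to a daemon thread and acknowledge immediately,
    # so GitLab never marks the webhook as timed out.
    threading.Thread(target=run_pipeline, args=(payload,), daemon=True).start()
    return jsonify({"status": "accepted", "event": event}), 200
```

Daemon threads die with the process, which is acceptable for a hackathon-scale server; a production setup would hand off to a task queue instead.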
Architecture decisions:
- Async pipeline in daemon threads — webhook responds in <100ms, pipeline runs in ~30s
- Keyword-based codebase indexer — fetches the full repo tree, scores 88+ files by relevance to the issue title/description, picks the top 10 for context
- Multi-file atomic commits — implementation + tests committed together in one SHA
- Bot self-ignore — detects its own comments using the project token pattern to prevent infinite loops
- Retry logic — handles GitLab 502 transient errors and concurrent branch write conflicts
- `loopagent.yml` support — projects can define language, test framework, folder structure, and style guide; the agent respects these conventions
Challenges we ran into
- GitLab webhook timeouts — GitLab marks webhooks as failed if they don't respond in time. Solved by returning 200 immediately and running the pipeline async.
- Bot infinite loops — when LoopAgent posts a comment, GitLab fires a `note` event back at the webhook. We had to detect our own project access token username pattern (`project_{id}_bot_`) to silently ignore self-generated events.
- Concurrent branch write conflicts — running multiple slash commands simultaneously caused git `400: reference does not point to expected object` errors. Fixed with retry logic that re-resolves file actions against the latest branch state.
- JSON parsing from LLMs — Claude occasionally wraps JSON in markdown fences. Built a two-pass parser: direct JSON parse → regex extraction fallback.
- macOS sandbox permissions — the venv `pyvenv.cfg` file was blocked by `com.apple.provenance` extended attributes in sandboxed environments.
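A retry wrapper for the transient-error cases above might look like this sketch; the error markers, attempt count, and backoff schedule are assumptions, not LoopAgent's exact values:

```python
import time

def with_retries(fn, attempts=3, delay=0.5,
                 retriable=("502", "does not point to expected object")):
    """Retry fn on GitLab's transient 502s and stale-branch conflicts.

    fn should re-resolve branch state on each call, so a retry works
    against the latest commit rather than the one that conflicted.
    """
    last = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:  # python-gitlab raises GitlabError subclasses
            last = exc
            # Anything that isn't a known transient failure surfaces at once.
            if not any(marker in str(exc) for marker in retriable):
                raise
            time.sleep(delay * (2 ** i))  # simple exponential backoff
    raise last
```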
Accomplishments that we're proud of
- Full loop in 30 seconds — from issue open to Draft MR with 2 committed files, reviewer assigned, and 3 comments posted on the issue
- MR refinement loop works — a reviewer comments "add error handling" → LoopAgent reads it, updates the code, commits, replies. Zero human intervention.
- Bot never responds to itself — solved a genuinely tricky distributed systems problem: an agent that receives events caused by its own actions
- Slash commands feel native — `/loop explain` and `/loop regenerate` work exactly like you'd expect a GitLab bot to behave
- The PM-to-dev translation story — we didn't just build a code generator. We built something that makes vague requirements legible to engineers and makes technical changes legible to product managers — at the same time.
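The self-ignore check reduces to a username test against GitLab's project-access-token naming; this sketch is based on the `project_{id}_bot_` pattern described in the challenges section, with an illustrative function name:

```python
import re

# GitLab project access tokens act under usernames like
# "project_123_bot_a1b2c3". Dropping note events authored by such
# users keeps the agent's own comments from re-triggering the pipeline.
BOT_USERNAME = re.compile(r"^project_\d+_bot_")

def is_own_event(payload):
    """True when a webhook note event was authored by the bot itself."""
    username = payload.get("user", {}).get("username", "")
    return bool(BOT_USERNAME.match(username))
```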
What we learned
- Async-first is non-negotiable for webhook servers — any synchronous pipeline will eventually timeout under real load
- LLM output needs two-pass parsing — never trust that the model will return clean JSON; always have a regex fallback
- GitLab's event system is powerful but tricky — "Work item events" replaced "Issues events" in newer GitLab versions; comment events fire for bot comments too
- The agent's personality matters — the "thinking out loud" plan comment before writing code dramatically increases trust. Users need to see the agent's intent before it acts.
- Prompt engineering > model size — a well-structured prompt with explicit JSON schema constraints outperforms a bigger model with a vague prompt every time
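The two-pass parsing rule above can be sketched in a few lines; the fence regex is an assumption about how Claude wraps its output, not LoopAgent's exact expression:

```python
import json
import re

# Pass 2 fallback: pull a JSON object out of a ```json ... ``` fence.
FENCE = re.compile(r"```(?:json)?\s*(\{.*\})\s*```", re.DOTALL)

def parse_llm_json(text):
    """Pass 1: parse directly. Pass 2: extract from a markdown fence."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        match = FENCE.search(text)
        if match:
            return json.loads(match.group(1))
        raise  # neither pass found valid JSON; let the caller decide
```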
What's next for LoopAgent
- `loopagent.yml` full adoption — let every project define its own conventions; the agent adapts its output to match your team's style guide automatically
- Multi-repo awareness — for microservice architectures, understand changes needed across multiple repositories for a single feature
- Slack/Teams integration — post MR summaries to the team channel so PMs get notified in the tools they already use
- Learning from merges — when a human merges or closes an MR, feed that signal back so the agent improves its scaffolding patterns over time
- GitLab CI integration — after the MR is opened, monitor the pipeline; if tests fail, automatically diagnose and push a fix commit
- Voice-to-issue — PM records a 30-second voice note describing a feature; LoopAgent transcribes, creates the issue, and starts the pipeline automatically