Inspiration
Every developer has faced the dreaded merge conflict — that moment when two branches collide and the codebase becomes a battlefield of <<<<<<< HEAD markers. In fast-moving teams, resolving these conflicts manually is slow, error-prone, and mentally exhausting. A wrong resolution can silently break production.

We wanted to build something that makes this pain disappear. The idea was simple: what if an AI agent could look at a conflict the same way a senior developer would — understanding the context, the imported files, the dependencies — and just fix it? That vision became AI Merge Guardian, an intelligent system that detects and resolves GitHub merge conflicts automatically using a local Ollama LLM, with both a manual UI and a fully automated webhook agent mode.
What it does
AI Merge Guardian is a two-mode intelligent conflict resolution system:

🖥️ MVP Mode (Streamlit UI):
- Paste any GitHub repo URL and it parses branches and files automatically via the Git Trees API
- Fetches all three versions of a conflicting file — BASE (ancestor), your branch, and the incoming branch
- Detects conflicts using a 3-way diff engine and highlights every conflict block
- For each conflict, you can pick BASE, UPDATED, or a custom resolution — or click AI Suggest to get an environment-aware resolution powered by the local LLM
- Builds a clean merged file preview and lets you download it or open a Pull Request directly

🔗 Agent Mode (Webhook Automation):
- A FastAPI server listens for GitHub Pull Request webhook events
- On every PR event, it fetches changed files and scans for conflict markers (<<<<<<< HEAD, =======, >>>>>>>)
- Sends the conflict blocks to Ollama (deepseek-coder:33b) with full context — imported files, dependency manifests, and surrounding code
- Validates the AI resolution using Python AST checks and node --check for JS/TS
- If validation fails, it retries automatically until a valid resolution is produced
- Posts the final AI-generated resolution directly as a comment on the GitHub PR
How we built it
The project is structured into clean, modular layers:

Core Modules (src/):
- conflict_detector.py — 3-way diff engine that detects conflicts using both GitHub metadata and raw marker scanning
- llm_agent.py — Ollama integration that builds environment-aware prompts including imported file context and dependency manifests
- github_api.py — Full GitHub REST API client for fetching PR files, branch heads, file contents, and posting PR comments
- mvp_merge_engine.py — Manual merge analysis engine powering the Streamlit workflow
- config.py — Environment and configuration management via .env
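The core decision a 3-way diff engine makes can be reduced to comparing each aligned region across the merge-base ancestor and the two branches. This sketch shows only that classification rule at whole-string granularity; the real conflict_detector.py works on aligned line ranges, and the function name here is illustrative.

```python
def three_way_classify(base, ours, theirs):
    """Classify one region of a 3-way merge (simplified: whole-string granularity)."""
    if ours == theirs:
        return "agree"          # both sides made the same change (or none)
    if ours == base:
        return "take_theirs"    # only the incoming branch changed this region
    if theirs == base:
        return "take_ours"      # only our branch changed this region
    return "conflict"           # both changed it differently -> needs resolution
```

Only the last case produces a conflict block; the other three resolve mechanically, which is how a proper 3-way engine avoids the false positives that naive marker scanning alone would allow.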
Application Layer (apps/):
- streamlit_app.py — The MVP UI with repo URL parsing, file selection, conflict visualization, AI suggestions, and PR creation
- webhook_app.py — FastAPI webhook server with HMAC signature verification and the full automated agent pipeline
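The HMAC verification in webhook_app.py follows GitHub's documented scheme: GitHub signs the raw request body with the shared secret and sends "sha256=&lt;hexdigest&gt;" in the X-Hub-Signature-256 header, so the raw bytes must be checked before any JSON parsing. A minimal sketch of that check (the function name and example values are illustrative):

```python
import hashlib
import hmac

def verify_signature(secret, raw_body, signature_header):
    """Check a GitHub webhook signature against the raw (unparsed) body bytes."""
    expected = "sha256=" + hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    # compare_digest is constant-time, preventing timing attacks
    return hmac.compare_digest(expected, signature_header)

# Illustrative values only
example_secret = b"s3cret"
example_body = b'{"action": "opened"}'
example_sig = "sha256=" + hmac.new(example_secret, example_body,
                                   hashlib.sha256).hexdigest()
```

In a FastAPI handler the raw bytes would come from `await request.body()`, and the payload is parsed as JSON only after this check passes.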
AI Layer:
- Local Ollama running deepseek-coder:33b — chosen for its strong code understanding and ability to run fully offline
- Prompts include the conflict block, the base ancestor code, the updated branch code, imported file context, and the dependency manifest — giving the model maximum context for accurate resolution
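A rough sketch of how such an environment-aware prompt might be assembled and sent to a local Ollama server. The section headings, helper names, and prompt wording are assumptions, not the project's llm_agent.py; the endpoint and payload shape follow Ollama's standard /api/generate REST API.

```python
import json
import urllib.request

def build_prompt(conflict_block, base_code, imported_context, manifest):
    """Assemble a prompt that gives the model conflict + surrounding context."""
    return (
        "You are resolving a Git merge conflict. Return ONLY the resolved code.\n\n"
        "## Dependency manifest\n" + manifest + "\n\n"
        "## Imported file context\n" + imported_context + "\n\n"
        "## Common ancestor (BASE)\n" + base_code + "\n\n"
        "## Conflict block\n" + conflict_block + "\n"
    )

def ask_ollama(prompt, model="deepseek-coder:33b"):
    """Send the prompt to a local Ollama server (default port 11434)."""
    payload = json.dumps({"model": model, "prompt": prompt,
                          "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # requires a running Ollama
        return json.loads(resp.read())["response"]
```

Layering the manifest and imported-file context ahead of the conflict block is what lets the model resolve against the actual surrounding environment rather than the conflicting lines alone.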
Validation Layer:
- Python AST parsing to verify syntactic correctness of Python resolutions
- node --check for JavaScript/TypeScript resolutions when Node.js is available
- An automatic retry loop if validation fails
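The validate-then-retry loop can be sketched as below. Only the Python AST branch is shown; `resolve` stands in for the LLM call, and the function names and retry limit are illustrative rather than the project's actual code.

```python
import ast

def validate_python(code):
    """Return True if the code parses as valid Python syntax."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def resolve_with_retry(resolve, max_attempts=3):
    """Ask for a resolution until one passes validation, or give up."""
    for _ in range(max_attempts):
        candidate = resolve()
        if validate_python(candidate):
            return candidate
    return None  # exhausted retries; a real agent could flag for human review

# Simulated LLM that fails once, then produces valid code
attempts = iter(["def broken(:", "def fixed():\n    return 1"])
result = resolve_with_retry(lambda: next(attempts))
```

For JS/TS the validation branch would shell out to `node --check` on a temp file instead of calling `ast.parse`, but the retry structure stays the same.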
Stack: Python 3.8+, Ollama (deepseek-coder:33b), FastAPI, Streamlit, GitHub REST API, Git Trees API, uvicorn, ngrok (local testing)
Challenges we ran into
- 3-way diff accuracy — Simple marker detection wasn't enough. We had to build a proper 3-way diff engine that compares the merge-base ancestor, the current branch, and the incoming branch to correctly isolate each conflict block without false positives.
- Context-aware prompting — Early LLM resolutions were generic and sometimes wrong because the model only saw the conflicting lines. We solved this by injecting imported file content and the dependency manifest into the prompt, which dramatically improved resolution accuracy.
- Validation and retry loop — The AI occasionally produced syntactically invalid code. Building a reliable validation pipeline with AST checks for Python and node --check for JS/TS, combined with an automatic retry loop, was complex but critical for production-level reliability.
- Webhook security — Implementing HMAC signature verification for the GitHub webhook to ensure only legitimate GitHub events are processed required careful handling of raw request bodies before JSON parsing.
- PR creation edge cases in Streamlit — The MVP PR creation flow (create branch → commit merged file → open PR) had several edge cases around token permissions and branch head resolution that required careful debugging.
- Migrating and restructuring for GitLab — Our project was originally GitHub-native. Restructuring it to align with the GitLab Duo Agent Platform model under hackathon time pressure was a significant challenge.
Accomplishments that we're proud of
- Fully local AI pipeline — The entire conflict resolution runs on a local Ollama instance with no data leaving the machine, making it privacy-safe for enterprise use
- Environment-aware resolutions — The agent doesn't just look at the conflict block in isolation; it understands the surrounding imports and dependencies, making resolutions contextually correct
- Dual-mode architecture — Both a polished manual UI for demos and controlled use, AND a fully automated webhook agent for production pipelines — from the same codebase
- Validation with retry — The system never posts a syntactically broken resolution; it validates and retries automatically
- End-to-end automation — From a GitHub webhook firing on a PR event to an AI resolution comment appearing on the PR, with zero human involvement
What we learned
- Prompt context is the most important variable — Including imported files and dependency manifests in the prompt transformed resolution quality from "sometimes useful" to "genuinely reliable." What the model knows about the surrounding code matters as much as the conflict itself.
- Local LLMs are production-viable for code tasks — deepseek-coder:33b via Ollama performed surprisingly well on real merge conflicts, proving that you don't need a cloud API for intelligent code resolution.
- Validation must be built-in, not bolted on — We initially treated validation as optional. Making it a core part of the loop (with retry) is what separates a demo from a real tool.
- Webhook-driven agents are the right architecture for developer tools — Event-driven design (PR opened → agent fires → comment posted) is natural, non-intrusive, and fits perfectly into existing developer workflows.
- FastAPI + Streamlit is a powerful combo — Using FastAPI for the automated agent mode and Streamlit for the interactive UI let us serve two very different use cases cleanly from the same codebase.
What's next for AI Merge Guardian
- Native GitLab Duo Agent integration — Convert the webhook agent into a full GitLab Duo custom agent using agent.yaml, running natively on the GitLab Duo Agent Platform with built-in MR triggers
- Multi-file conflict orchestration — Build a Flow of agents where each file's conflicts are resolved in parallel by separate specialized agents, then merged into a single PR
- Language-specific resolution agents — Specialized sub-agents for Python, JavaScript, Go, and YAML that apply language-specific best practices during resolution
- Conflict history memory — Give the agent memory of how past conflicts were resolved in this repo so it learns team conventions over time
- Risk scoring — Before resolving, score each conflict block by severity and flag "Critical" conflicts for mandatory human review
- Cloud LLM fallback — Add an optional Claude (Anthropic) API fallback when the local Ollama instance is unavailable