About the Project

https://gitflow-ai-836641670384.us-west1.run.app

https://gitlab.com/gitlab-ai-hackathon/participants/35450504

💡 Inspiration

Merge conflicts and bottlenecked merge queues are the bane of every developer's existence. In GitFlow AI v1, we explored how AI could assist with basic Git operations and code reviews. However, we realized the biggest friction point in modern CI/CD pipelines occurs when multiple pull requests target the same branch.

When one PR merges, it often invalidates the others, leading to a cascading chain of manual rebases and conflict resolutions. We were inspired to build AI Merge Queue (GitFlow AI v2) to completely automate this process. We wanted a system that doesn't just queue pull requests, but actively syncs repositories (GitHub to GitLab), visualizes the Git tree in real-time, and uses AI to intelligently resolve complex merge conflicts without human intervention. The goal was to eliminate the "merge train delay" where developers wait hours just to resolve a trivial import conflict.

⚙️ How we built it

We architected GitFlow AI v2 as a highly scalable, full-stack, event-driven application designed to handle complex Git topologies:

  • Frontend & Visualization: Built with React 18, TypeScript, and Tailwind CSS. We designed a custom CLI Terminal interface that mimics a real developer environment. To provide spatial awareness of the repository, we integrated @gitgraph/js to render a Live Git Graph, giving users real-time visual feedback of their repository's state as branches are created, rebased, and merged.
  • Backend & Real-time Streaming: We utilized Node.js and Express to build the core API. Because Git operations (like cloning a large repository or performing a cascading rebase) are long-running processes, standard HTTP requests would time out. We solved this by implementing Server-Sent Events (SSE) to stream live Git execution logs directly from the Node.js child processes to the browser terminal.
  • State Management & Event Dispatching: We utilized custom React event dispatchers (window.dispatchEvent) and SSE streams to maintain the global state of the merge queue. This decoupled architecture allows the UI to synchronize with background Git operations seamlessly, updating the terminal and the graph without requiring page reloads or blocking the main thread.
  • AI Conflict Resolution Engine: Powered by Google Gemini 3.1 Pro. When a conflict is detected during a cherry-pick or rebase, the system extracts the Git diffs, the conflict markers (<<<<<<<, =======, >>>>>>>), and the surrounding file context. It feeds this into the LLM with a strict system prompt. The AI analyzes the Abstract Syntax Tree (AST) context to perform semantic merges rather than just textual ones, ensuring that logic isn't broken during the resolution.
  • AI Token Optimization & Cost Efficiency: To make the system cost-effective and blazing fast, we engineered the sync workflow to act as a lightweight Git wrapper. It bypasses the AI model entirely for clean cherry-picks and fast-forward merges. Gemini is invoked if and only if a Git conflict is detected, saving thousands of tokens per sync cycle.
  • Decoupled Code Review: We separated the code review process from the automated sync. Developers can now manually trigger comprehensive architectural reviews for specific commit ranges using the new git-ai review <range> command, allowing for on-demand AI auditing.
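
To make the AI Conflict Resolution Engine concrete, here is a minimal JavaScript sketch of the first step: extracting conflict hunks (plus surrounding file context) from a file that failed a cherry-pick or rebase, before handing them to the model. The function and field names are illustrative, not the actual GitFlow AI API:

```javascript
// Extract conflict hunks (with surrounding context lines) from a file
// containing Git conflict markers. Names are illustrative, not the
// real GitFlow AI API.
function extractConflicts(fileText, contextLines = 3) {
  const lines = fileText.split("\n");
  const conflicts = [];
  for (let i = 0; i < lines.length; i++) {
    if (!lines[i].startsWith("<<<<<<<")) continue;
    const start = i;
    let mid = -1;
    let end = -1;
    for (let j = i + 1; j < lines.length; j++) {
      if (lines[j].startsWith("=======")) mid = j;
      else if (lines[j].startsWith(">>>>>>>")) { end = j; break; }
    }
    if (mid === -1 || end === -1) break; // malformed markers: stop scanning
    conflicts.push({
      ours: lines.slice(start + 1, mid).join("\n"),      // HEAD side
      theirs: lines.slice(mid + 1, end).join("\n"),      // incoming side
      before: lines.slice(Math.max(0, start - contextLines), start).join("\n"),
      after: lines.slice(end + 1, end + 1 + contextLines).join("\n"),
    });
    i = end; // resume scanning after this hunk
  }
  return conflicts;
}
```

Each hunk object (ours/theirs plus the surrounding context) is what would be embedded into the strict system prompt, so the model sees the code around the conflict rather than the isolated markers alone.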

To quantify the efficiency of our system, consider the expected time to merge $n$ pull requests. Traditionally, the expected time $E[T_{manual}]$ is modeled as:

$$ E[T_{manual}] = \sum_{i=1}^{n} \left( t_{review} + P(C_i) \cdot t_{resolve} \right) $$

Where $P(C_i)$ is the probability of a conflict for the $i$-th PR, and $t_{resolve}$ is the manual resolution time. By introducing our AI Merge Queue, we reduce $t_{resolve} \to \epsilon$ (where $\epsilon$ is near zero), transforming the time complexity of the merge queue from a blocking, linear human effort into a highly parallelized automated pipeline:

$$ E[T_{AI}] \approx \sum_{i=1}^{n} t_{review} + \mathcal{O}(n\epsilon) $$
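
Plugging illustrative numbers into the two models makes the gap tangible. These figures are assumptions chosen for intuition, not measured data:

```javascript
// Illustrative parameters (assumed, not measured): 20 queued PRs,
// 10 min review each, 30% conflict probability, 45 min manual resolution.
const n = 20;
const tReview = 10;    // minutes
const pConflict = 0.3;
const tResolve = 45;   // minutes, manual

// Manual queue: every conflicting PR blocks on a human resolution.
const tManual = n * (tReview + pConflict * tResolve); // 20 * (10 + 13.5) = 470 min

// AI queue: resolution cost collapses to a near-zero epsilon per conflict.
const epsilon = 0.5;   // ~30s of model latency per conflict, assumed
const tAI = n * tReview + n * pConflict * epsilon;    // 200 + 3 = 203 min
```

Under these assumptions the queue's expected wall-clock cost drops by more than half, and the remaining term is dominated by review time, which the AI queue does not block on.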

🚧 Challenges we faced

  1. Real-Time State Synchronization: Keeping the Dashboard, Live Git Graph, and CLI Terminal perfectly in sync while background Git operations were running was incredibly difficult. We had to implement a robust custom event dispatcher and leverage SSE streams to ensure the UI reacted instantly to backend changes without race conditions.
  2. Streaming Git Output: Standard HTTP requests would time out during long-running Git clones or cherry-picks. We had to pivot to Server-Sent Events (SSE) to stream the terminal output chunk-by-chunk to the frontend, which required careful handling of buffer streams in Node.js.
  3. Optimizing AI Costs and Latency: Running every single commit through an LLM was too slow and expensive. We had to refactor our backend to act as a lightweight Git wrapper that only escalates to the AI model when standard Git operations (like cherry-picking or rebasing) fail with a conflict.
  4. AI Hallucinations in Code: Early iterations of the AI conflict resolver would sometimes generate syntactically invalid code, hallucinate variables, or remove necessary imports. We overcame this by upgrading to Gemini 3.1 Pro, refining our prompt engineering to enforce strict JSON outputs, and providing the AI with surrounding file context, not just the isolated conflict block.
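
The escalate-on-conflict gate from challenge #3 can be sketched as follows. Here runGit and resolveWithAI are hypothetical injected hooks standing in for the real backend's child-process wrapper and Gemini call:

```javascript
// Gate sketch: run the plain Git operation first; invoke the model only
// when it fails with a conflict. runGit and resolveWithAI are
// hypothetical hooks, not the actual GitFlow AI backend API.
async function applyCommit(sha, runGit, resolveWithAI) {
  const result = await runGit(`cherry-pick ${sha}`);
  if (result.ok) return { sha, resolvedBy: "git" }; // clean pick: zero tokens spent
  if (!result.conflicted) {
    throw new Error(`cherry-pick failed: ${result.stderr}`); // non-conflict error
  }
  await resolveWithAI(result.conflictFiles); // LLM sees only the conflicting files
  await runGit("cherry-pick --continue");
  return { sha, resolvedBy: "ai" };
}
```

Because the happy path never touches the model, token spend scales with the number of actual conflicts rather than the number of commits synced.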

🧠 What we learned

  • Git Internals: We gained a profound understanding of Git's underlying architecture—how blobs, trees, and commits interact, how the index works during a conflict, and how to programmatically manipulate them via APIs and child processes.
  • Event-Driven UI: We learned how to decouple UI components using custom window events, allowing our Terminal to listen to background sync processes globally without forcing the user to stay on a specific page.
  • Applied AI & Token Economics: We learned that LLMs are incredibly powerful for code manipulation, provided they are given strict boundaries and sufficient context. More importantly, we learned that optimizing when to call the AI (token optimization) is just as critical as the prompt itself for building production-ready AI tools.

🚀 What's next for GitFlow AI v2

We plan to expand our AI Merge Queue to support more version control platforms (like Bitbucket), introduce automated test-running during the AI resolution phase to guarantee the AI's merged code compiles before committing, and package the tool as a native GitHub App/GitLab Integration for seamless enterprise adoption.

Built With

  • googleaistudio