Inspiration

Initially, it was the "Flicker to Flow" track that pushed us toward this idea. The role of a software developer has changed a lot over time, especially in the last couple of years. It is more important now than ever for developers to know how to use AI to their benefit, and most definitely for their productivity, so that they can reach flow state. Our team has suffered through heinous merge conflicts at many a hackathon. When you introduce a larger codebase and a team of developers working on the same product for months on end, even on the same feature, things can get messy quickly. Merge conflicts are a common point of struggle for developers of all levels, and yet there hasn't been a product that thoroughly analyzes, diagnoses, and cures them. Our product, MC-Hammer, aims to do just that (that's wordplay on the actual rapper: MC = merge conflict, and we hammer them), all while being as accessible as an extension to your IDE and as good a copilot as Amelia Earhart.

What it does

MC-Hammer activates when it detects a merge conflict in the active repository. It then analyzes what might have caused the conflict by consulting logs in the Git history, comparing pre- and post-merge files, and generating a dependency graph of the areas of the repository the conflict touches most. From there, our pipeline of multiple AI models, WebSocket communication, user confirmation, and thorough solution testing gets to work resolving the conflict. After carefully diagnosing the problem, and keeping the user informed of what the program is doing every step of the way, MC-Hammer presents a solution, which the user can accept or reject.
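To give a flavor of the detection step, here is a minimal sketch of scanning a file for Git's standard conflict markers. The function and file names are illustrative, not the extension's actual code:

```python
# Minimal sketch of the detection step: scan one file for Git's
# standard conflict markers. Names here are illustrative only.
from pathlib import Path

CONFLICT_MARKERS = ("<<<<<<<", "=======", ">>>>>>>")

def find_conflicts(path: str) -> list[int]:
    """Return the 1-based line numbers where conflict markers appear."""
    lines = Path(path).read_text(encoding="utf-8", errors="replace").splitlines()
    return [i for i, line in enumerate(lines, start=1)
            if line.startswith(CONFLICT_MARKERS)]

if __name__ == "__main__":
    hits = find_conflicts("app/models.py")  # hypothetical conflicted file
    print(f"{len(hits)} conflict markers found at lines {hits}")
```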

How we built it

We built MC-Hammer across four major systems that had to work together in real time. The foundation is a dependency graph built with NetworkX to map relationships between the files and functions involved in a merge conflict. We paired it with a React Flow visualization that renders the graph live, so the state of the conflict is never a black box. When a user interacts with a node, the UI sends a command back to the extension over a second WebSocket, which resolves the function label to an exact file and line number and jumps the editor there directly.

From there, we linked the graph to an AI pipeline running on the Asus Ascent GX10 supercomputer we were lucky to get for the hackathon. The pipeline runs in three stages: a merge solutions creator that analyzes the conflict and proposes fixes, a test case creator that generates expected outputs for each proposed solution, and a feedback loop that evaluates the results and refines the approach. Each stage is informed by the dependency graph, so the AI understands the full scope of what it is touching.

The extension inside the IDE is the center of MC-Hammer. When the user hits Hammer Time in the status bar, the extension runs git diff to find conflicted Python files, scans them for conflict markers, and identifies the exact functions involved. It then gathers three pieces of context for the target file: the current content, the remote main version pulled from Git's merge index, and the latest commit message. That full payload goes to the AI pipeline over WebSocket. Simultaneously, the extension spins up the dependency graph, plays a startup animation, and opens the visualization in VS Code's Simple Browser. On the other end, when the pipeline responds with generated test cases, the extension parses the payload, writes a temporary Python test runner to the workspace, opens it in the editor, and executes it in the MC Hammer terminal.

Finally, the whole system leans heavily on multithreading to keep the pipeline, the extension, the two WebSocket servers, and the visualization from blocking each other. Getting all of those systems talking without deadlocking was its own challenge. The sketches below illustrate the main moving parts.
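First, the graph. A minimal sketch of how a function-level dependency graph can be assembled with NetworkX and Python's ast module; the real extraction walks the whole repository and links callers across files, and the file and function names below are hypothetical:

```python
# Sketch: build a function-level call graph for one Python file with
# ast + NetworkX. Illustrative only; the real pipeline covers whole repos.
import ast
import networkx as nx

def build_call_graph(source: str) -> nx.DiGraph:
    tree = ast.parse(source)
    graph = nx.DiGraph()
    for func in ast.walk(tree):
        if isinstance(func, ast.FunctionDef):
            graph.add_node(func.name, lineno=func.lineno)
            for child in ast.walk(func):
                if isinstance(child, ast.Call) and isinstance(child.func, ast.Name):
                    graph.add_edge(func.name, child.func.id)  # caller -> callee
    return graph

source = open("conflicted_module.py").read()   # hypothetical conflicted file
g = build_call_graph(source)
if "merge_totals" in g:                        # hypothetical conflicted function
    # ancestors = transitive callers, i.e. the blast radius of a change
    print("would be affected:", nx.ancestors(g, "merge_totals"))
```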
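Next, the extension-to-pipeline handoff. A minimal sketch assuming the websockets library; the port, message schema, and file path are illustrative, not our actual protocol. (Stage :3: in Git's merge index is the incoming side of the conflict.)

```python
# Sketch: gather the three pieces of context and ship them to the
# pipeline over WebSocket. Error handling omitted for brevity.
import asyncio
import json
import subprocess

import websockets  # pip install websockets

def git(*args: str) -> str:
    return subprocess.run(["git", *args], capture_output=True,
                          text=True, check=True).stdout

def gather_context(path: str) -> dict:
    return {
        "current": open(path, encoding="utf-8").read(),  # working tree, markers included
        "theirs": git("show", f":3:{path}"),             # incoming side, from the merge index
        "last_commit": git("log", "-1", "--pretty=%B").strip(),
    }

async def send_to_pipeline(payload: dict) -> dict:
    async with websockets.connect("ws://localhost:8765") as ws:
        await ws.send(json.dumps({"type": "conflict", "payload": payload}))
        return json.loads(await ws.recv())  # e.g. generated test cases

# tests = asyncio.run(send_to_pipeline(gather_context("app/models.py")))
```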
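Finally, the three-stage loop itself reduces to a propose, test, refine cycle. A minimal sketch of the control flow, with placeholder functions standing in for the real model calls on the GX10:

```python
# Sketch of the three-stage loop: propose fixes, generate test checks,
# evaluate, and feed failures back in. The stage functions are stand-ins.
def propose_fixes(ctx, feedback):          # stage 1: merge solutions creator
    return ["merged source without conflict markers"]   # placeholder output

def make_tests(ctx, solutions):            # stage 2: test case creator
    return [lambda merged: bool(merged)]                # placeholder checks

def run_tests(solution, tests):            # stage 3: evaluate one candidate
    return all(check(solution) for check in tests)

def resolve(ctx: dict, max_rounds: int = 3) -> list:
    """Propose, test, and refine until a candidate passes or rounds run out."""
    feedback = None
    for _ in range(max_rounds):
        solutions = propose_fixes(ctx, feedback)
        tests = make_tests(ctx, solutions)
        passing = [s for s in solutions if run_tests(s, tests)]
        if passing:
            return passing      # ranked candidates are shown to the user
        feedback = solutions    # failures inform the next round
    return []

print(resolve({"current": "..."}))
```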

Challenges we ran into

MC-Hammer was one of the most ambitious projects any of us had attempted at a hackathon, and the challenges matched that ambition. None of us had built a VS Code extension before. Learning the extension API, understanding how terminals, webviews, status bars, and WebSocket servers all coexist inside a single extension, and debugging in an environment we had never touched took a significant chunk of our first hours. There was a lot of reading documentation at 3am.

Getting our Asus supercomputer online was its own battle; connecting it to the venue hotspot proved unreliable. Coordinating the four pipelines (the merge solutions creator, the test case generator, the feedback loop, and the dependency graph) across two separate repos, with real-time WebSocket communication tying them together, meant that integration issues compounded quickly. A message format mismatch or a timing issue in one pipeline could silently break everything downstream.

About 15 hours in, we had an honest conversation about scope. The original vision was even larger, and we made the call to cut and refocus so we could actually ship something that worked end to end. That was the right call, but it cost us time and energy to realign the team mid-sprint. We worked all 36 hours. MC-Hammer earned its name.

Accomplishments that we're proud of

On the product side, instead of a wall of conflict markers and a git log you have to dig through manually, you get a live dependency graph, an AI diagnosis, and ranked solutions to accept or reject. That jarring moment where your momentum dies becomes a decision, and then you're moving again.

The dependency graph is what makes the AI actually useful here. Agents are powerful but context-blind by default: they don't know your architecture or which functions upstream are about to break. Our graph gives the model that scope (exactly which functions are conflicted, who calls them, and what the blast radius looks like), so the fixes it generates are grounded in the actual codebase rather than just the lines that conflicted.

Running the pipeline locally on the Ascent GX10 changed what the product could be: no API latency, no data leaving the machine. The three-stage pipeline ran fast enough to feel like part of the IDE rather than something bolted onto it, which matters a lot for a tool that's supposed to keep you in flow rather than pull you further out of it.

We did all of this in 36 hours on under an hour of sleep each, with tools none of us had used before. The graph accurately traces user-defined functions through codebases with 10k+ commits. That part we're genuinely proud of.

What we learned

We are ambitious when it comes to hackathon projects, and MC-Hammer reminded us that ambition has a real cost measured in hours. We were genuinely coding for all 36 of them. The challenges we faced reinforced that good teammates matter more than good tools. When you are low on sleep and lower on patience, the only thing keeping the project moving is everyone remembering they want the same thing. Technically, this project pushed all of us. We learned unfamiliar tools under pressure, wrote code we are actually proud of, and made the harder choice to think through problems ourselves even when copy-pasting from an LLM would have been faster and easier. That discipline showed up in the final product.

What's next for MC Hammer

We want to take MC-Hammer to enterprise. The core insight driving what comes next is that our AI models get better the more they know about a codebase. A system that has seen your architecture, your patterns, and your history of conflicts becomes a genuinely useful copilot rather than a generic one. The next step is building a vector database to give MC-Hammer long-term memory of a codebase. Over time, it would recognize recurring conflict patterns, understand team conventions, and resolve merges with context that no one-shot model currently has. The more it works alongside a team, the sharper it gets.
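Nothing here is built yet, but as a rough sketch of the direction, the memory layer could be as simple as a vector store of past resolutions. Below we assume Chroma purely for illustration; the IDs, documents, and metadata schema are all hypothetical:

```python
# Hypothetical sketch of the planned long-term memory: store a summary of
# each resolved conflict, then retrieve similar past cases as context for
# the next one. Chroma is just one possible store.
import chromadb

client = chromadb.Client()
memory = client.create_collection(name="conflict_history")

# After each resolution, remember what happened.
memory.add(
    ids=["conflict-042"],
    documents=["Both branches renamed compute_totals; kept the async variant."],
    metadatas=[{"files": "billing/totals.py", "resolution": "theirs+rename"}],
)

# When a new conflict appears, pull the most similar past cases.
similar = memory.query(
    query_texts=["conflicting edits to compute_totals in billing/totals.py"],
    n_results=3,
)
print(similar["documents"])
```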

Built With

NetworkX · React Flow · Python · WebSockets · VS Code Extension API · Asus Ascent GX10
