Inspiration

Working as software engineers, we regularly have to resolve merge conflicts, which is straight up time-consuming and tricky. We wanted to build an AI assistant that could suggest the best resolution from the provided context, making developers more productive and their code less prone to errors introduced while resolving conflicts.

What it does

It fetches the latest code from the remote repo and checks whether merging our local changes would lead to a conflict. If any merge conflicts exist, it sends them to Gemini with the required context and asks for a resolution. Gemini's suggested resolution is highlighted and displayed in place of the conflicted code.
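The first step of that pipeline is locating each conflict in the file. A minimal sketch of how the conflicted regions could be extracted before sending them to Gemini (the names and structure here are illustrative, not our exact extension code) is to scan for the standard git conflict markers:

```typescript
// Hypothetical helper: pull the "ours" and "theirs" sides out of every
// conflict region delimited by git's <<<<<<< / ======= / >>>>>>> markers.
interface Conflict {
  ours: string;   // local side  (<<<<<<< ... =======)
  theirs: string; // incoming side (======= ... >>>>>>>)
}

function parseConflicts(fileText: string): Conflict[] {
  const conflicts: Conflict[] = [];
  let ours: string[] | null = null;
  let theirs: string[] | null = null;

  for (const line of fileText.split("\n")) {
    if (line.startsWith("<<<<<<<")) {
      ours = [];                       // entering the local side
    } else if (line.startsWith("=======") && ours !== null && theirs === null) {
      theirs = [];                     // switching to the incoming side
    } else if (line.startsWith(">>>>>>>") && ours !== null && theirs !== null) {
      conflicts.push({ ours: ours.join("\n"), theirs: theirs.join("\n") });
      ours = null;
      theirs = null;                   // conflict region closed
    } else if (theirs !== null) {
      theirs.push(line);
    } else if (ours !== null) {
      ours.push(line);
    }
  }
  return conflicts;
}
```

Each extracted pair, plus the surrounding lines, can then form the context that gets shipped to the model.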

How we built it

We integrated Gemini straight into VS Code by building an extension.

Challenges we ran into

Deciding the next best step to improve performance. At different points we found ourselves questioning whether we had chosen the right prompt, whether the problem was solvable by an LLM at all, whether we should give the model more context, whether we should run a back-and-forth conversation with the model so it can check its own response, whether a fine-tuned model would do better, etc. etc. Exploring all these possibilities is time-consuming.

We could not get a consistent response from Gemini in our desired format.
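One defensive measure that helps here (a sketch of a common mitigation, not necessarily what we shipped) is to ask the model for code only, then strip any markdown fences it adds anyway before splicing the text back into the file:

```typescript
// Hypothetical post-processing step: LLMs often wrap code in ```-fences
// even when told not to; remove them so only raw code reaches the editor.
function stripCodeFences(reply: string): string {
  const trimmed = reply.trim();
  // Allow an optional language tag (e.g. ```python) after the opening fence.
  const fenced = trimmed.match(/^```[\w-]*\n([\s\S]*?)\n?```$/);
  return fenced ? fenced[1] : trimmed;
}
```

Normalizing the reply this way makes the rest of the pipeline less sensitive to formatting drift between responses.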

Accomplishments that we're proud of

We can get Gemini's output directly into any code file. We were able to get meaningful results for basic git conflicts - still figuring out how to deal with the complex ones!

What we learned

Working with LLMs is tricky. We need to be careful if we expect exact and consistent responses from a model in a format of our choice. Choosing the right prompt goes a long way. LLMs work well with a:b :: c:d style example inputs - however, it can be tricky to generate such example pairs. Benchmarking the performance of an LLM-based application against alternatives is a challenging but necessary task.
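The a:b :: c:d pattern above is just few-shot prompting: each example pairs a conflict (a) with its known-good resolution (b), and the real conflict (c) is appended for the model to complete with its resolution (d). A minimal, hypothetical sketch of assembling such a prompt:

```typescript
// Illustrative few-shot prompt builder; the wording and structure are
// assumptions, not our production prompt.
interface Example {
  conflict: string;   // a: a conflicted region
  resolution: string; // b: its known-correct resolution
}

function buildPrompt(examples: Example[], conflict: string): string {
  const shots = examples
    .map(e => `Conflict:\n${e.conflict}\nResolution:\n${e.resolution}`)
    .join("\n\n");
  // The trailing "Resolution:" cues the model to emit d for our real c.
  return `${shots}\n\nConflict:\n${conflict}\nResolution:\n`;
}
```

The hard part, as noted, is sourcing (conflict, resolution) pairs that are both correct and representative of real merges.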

What's next for Resolve.AI

Iterate on our V1 to make it perform better. Send more context in our requests to Gemini (more lines of code around the conflict, perhaps entire files) and see if it improves the results. Capture how the user treated each response and use that as a feedback loop for Gemini. Benchmark its performance against existing state-of-the-art custom-built models such as MergeBERT (by Microsoft). We are interested in directly interfacing Gemini with all programming workflows and in seeing what it can do!
