Inspiration

In an age where generative AI has begun to dominate software engineering, bugs often arise because developers don't fully understand their AI-generated code. To make matters worse, debugging is already frustrating and isolating, especially when you've been stuck on a bug for hours. We were inspired by the idea of rubber duck debugging—explaining your code out loud to uncover mistakes—and thought: what if your duck could talk back?

What it does

Duck Duck Debug is an AI-powered debugging assistant that listens as you verbally walk through your code. It pulls relevant context from your codebase, displays the related code alongside the conversation, and asks guiding questions that lead you toward a solution. It isn't a debugger in the traditional sense: while the ultimate goal is still to fix bugs, Duck Duck Debug puts the emphasis on helping you reach a full understanding of your codebase, which makes you a better developer in the long run.

How we built it

First, an algorithm we wrote scrapes the user's codebase and stores it in a MongoDB database. On top of that sits a Retrieval-Augmented Generation (RAG) pipeline: when the user talks to the model, we combine their input with relevant context pulled from the codebase and feed the combined query to the OpenAI API. To retrieve the right code for a given query, we fine-tuned a BERT model on the Code/Natural Language Challenge (CoNaLa) dataset, which pairs natural-language descriptions with corresponding code. A few more APIs tie the project together: OpenAI's Whisper converts the user's speech to text, ElevenLabs converts the model's replies from text back to speech, and LangChain provides the templates that structure the output.
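To make this concrete, here is a minimal sketch of the ingestion step, assuming each source file is split into chunks and every chunk is stored in MongoDB together with a BERT embedding. The database and collection names, the chunking rule, and the bert-base-uncased checkpoint are illustrative stand-ins for our actual setup:

```python
# Sketch of codebase ingestion: chunk source files, embed each chunk with BERT,
# and store everything in MongoDB. Names and the chunking rule are illustrative.
import pathlib

import numpy as np
import torch
from pymongo import MongoClient
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")  # fine-tuned on CoNaLa in practice

def embed(text: str) -> np.ndarray:
    """Mean-pool BERT's last hidden state into one vector per snippet."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze().numpy()

def scrape_codebase(root: str) -> None:
    """Walk the project, split files into blocks, and store block + embedding."""
    snippets = MongoClient()["duckduckdebug"]["snippets"]
    for path in pathlib.Path(root).rglob("*.py"):  # more extensions in reality
        for chunk in path.read_text(errors="ignore").split("\n\n"):
            if chunk.strip():
                snippets.insert_one({
                    "file": str(path),
                    "code": chunk,
                    "embedding": embed(chunk).tolist(),
                })
```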
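Retrieval and generation then look roughly like this, reusing embed() from the sketch above; the model name and the system prompt are simplified stand-ins, not our exact configuration:

```python
# Sketch of retrieval + generation: rank stored snippets by cosine similarity
# to the user's question, then send question + retrieved context to OpenAI.
import numpy as np
from openai import OpenAI
from pymongo import MongoClient

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k stored code chunks most similar to the query."""
    q = embed(query)  # embed() from the ingestion sketch above
    docs = MongoClient()["duckduckdebug"]["snippets"].find()
    scored = [(cosine(q, np.array(d["embedding"])), d["code"]) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [code for _, code in scored[:k]]

def answer(question: str) -> str:
    """Combine the user's input with retrieved code context and query OpenAI."""
    context = "\n\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a rubber-duck debugging "
             "assistant. Ask guiding questions rather than handing over fixes."},
            {"role": "user", "content": f"Code context:\n{context}\n\nUser: {question}"},
        ],
    )
    return response.choices[0].message.content
```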
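Finally, the voice round-trip, sketched here with the OpenAI Python SDK and ElevenLabs' text-to-speech REST endpoint. The voice ID and API key are placeholders, and client and answer() come from the previous sketch:

```python
# Sketch of the voice loop: Whisper transcribes the user's spoken explanation,
# answer() (above) produces the duck's reply, and ElevenLabs speaks it back.
import requests

VOICE_ID = "YOUR_ELEVENLABS_VOICE_ID"   # placeholder
ELEVEN_KEY = "YOUR_ELEVENLABS_API_KEY"  # placeholder

def debug_turn(audio_path: str) -> bytes:
    # 1. Speech to text with Whisper (client is the OpenAI client above).
    with open(audio_path, "rb") as f:
        spoken = client.audio.transcriptions.create(model="whisper-1", file=f).text

    # 2. RAG-backed reply (answer() from the previous sketch).
    reply = answer(spoken)

    # 3. Text to speech with ElevenLabs.
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": ELEVEN_KEY},
        json={"text": reply},
    )
    resp.raise_for_status()
    return resp.content  # audio bytes to play back to the user
```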

Challenges we ran into

Extracting the relevant blocks of code from the codebase with BERT that would actually be helpful to the user. Teaching the model to recognize when the bug is solved or when the user understands the code; sometimes the model would stop prematurely, thinking the issue was solved when it wasn't. Creating the UI and animations in React with limited experience. Prompt engineering for the RAG to combine the code context with the user's input (a sketch of that template follows).
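As an example of that last challenge, the RAG prompt was structured with a LangChain template roughly like this one; the wording here is illustrative, not the final prompt we converged on:

```python
# Illustrative LangChain template for the RAG prompt, combining retrieved code
# context with the user's transcribed question in fixed slots.
from langchain.prompts import PromptTemplate

rag_prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "You are a rubber-duck debugging assistant.\n"
        "Relevant code from the user's project:\n{context}\n\n"
        "The user said: {question}\n\n"
        "Do not hand over a fix. Ask one guiding question that nudges the "
        "user toward understanding the bug themselves."
    ),
)

print(rag_prompt.format(
    context="def add(a, b): return a - b",
    question="Why does my add function return the wrong sum?",
))
```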

Accomplishments that we're proud of

A fully functional voice-based debugging flow. An algorithm that successfully scrapes a codebase into a database. Retrieval that pulls the correct blocks of code for the user's voice input to build context. A smooth animated transition from the landing page to the debug interface. And a debugging experience that's actually... not highly frustrating?

What we learned

How to fine-tune a BERT model. How to implement a RAG pipeline. How to build animations in React. How to integrate APIs, code, machine learning, and UI into one seamless pipeline.

What's next for Duck Duck Debug

Integrate live with VS Code for in-editor voice debugging. Expand the code understanding to handle larger, multi-file projects. Improve contextual memory to sustain longer debugging conversations. Build a “quack mode” for fun affirmations and encouragement. đŸ„đŸ„đŸ„

Built With

bert · elevenlabs · langchain · mongodb · openai · react · whisper