Inspiration

As engineers, we understand the tedious and arduous task of debugging circuits. Staring at endless waveforms on a screen to find a single dead signal is exhausting. We wanted to create a product that makes the process more accessible by prioritizing natural human-computer interaction. Our goal was to build a true "Screenless Acoustic Hardware Debugger"—a tool that lets you keep your eyes and hands on your physical circuit while the system debugs it for you.

What it does

Rubber Duck is an AI-powered circuit debugging tool that turns hardware analysis into a sarcastic, spoken conversation. You connect a circuit to a ZedBoard FPGA, which captures the logic data. Within seconds, a deep-voiced AI verbally diagnoses your circuit and tells you exactly what is wrong—no oscilloscope or monitor required. It's traditional rubber duck debugging, but the duck actually talks back.

How we built it

We built a robust, hybrid edge-to-cloud architecture.

Edge hardware: the ZedBoard FPGA generates a Vivado ILA capture and exports it as a CSV to our edge device, an AMD Ryzen MiniPC.

Data pipeline: a Python backend on the MiniPC uses watchdog to detect new file exports the moment they land. Because Vivado CSVs are notoriously messy, we use pandas to dynamically strip the metadata and shape the samples into a clean DataFrame.

Deterministic heuristics: instead of feeding raw, token-heavy CSV data directly to an LLM, our Python engine runs lightning-fast heuristic math to check for dead lines, missing clock pulses, and hung state machines.

Cloud inference: the resulting mathematical summary is packaged and sent via HTTP to a Llama 3 instance running under Ollama on an AMD Instinct cloud GPU. The prompt forces Llama 3 to act as an abrasive, sarcastic hardware debugger.

Voice output: the LLM's text response is fed into Microsoft's edge-tts neural text-to-speech engine, and the resulting audio file is played through mpv on Linux to a Bluetooth speaker.
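The heuristic pass can be sketched roughly like this. A minimal illustration, not our production code: the column names (`clk`, `state`) and thresholds are hypothetical stand-ins, since real Vivado exports name signals after your design.

```python
import pandas as pd


def summarize_capture(df: pd.DataFrame, clock_col: str = "clk") -> dict:
    """Run cheap deterministic checks on an ILA capture and return a
    compact summary small enough to drop into an LLM prompt."""
    summary = {"dead_lines": [], "clock_ok": None, "hung_state": None}

    for col in df.columns:
        # A "dead" line never changes value across the capture window.
        if df[col].nunique() == 1:
            summary["dead_lines"].append(col)

    if clock_col in df.columns:
        # A healthy clock should toggle on most samples.
        toggles = (df[clock_col] != df[clock_col].shift()).sum()
        summary["clock_ok"] = bool(toggles > len(df) // 2)

    if "state" in df.columns:
        # A state register frozen for the tail of the capture is a
        # classic hang signature.
        summary["hung_state"] = bool(df["state"].tail(64).nunique() == 1)

    return summary
```

The point of this stage is token economy: the LLM sees a few booleans and signal names instead of thousands of raw samples.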

Challenges we ran into

We initially planned to use an AMD Kria KR260 FPGA with an Analog Devices ADC eval board to interpret bitstreams and feed the full depth of the FIFO into an ESP32, but we ran into severe hardware bottlenecks. After team discussions and analyzing the feasibility of different pathways, we pivoted our architecture to avoid over-designing the hardware layer. On the software side, we battled 1:00 AM operating system glitches. We had to engineer a custom "file-lock" bypass because Windows fires multiple ghost events when saving large CSVs, which was crashing our watcher script. Finally, routing audio on headless Linux (PulseAudio) to the Bluetooth speaker for the AI's voice required a deep dive into terminal-only audio tooling.

Accomplishments that we're proud of

Our team is incredibly proud of how quickly we executed a massive pivot to an alternate concept. We are especially proud of our hybrid architecture: we successfully used the AMD MiniPC as a lightweight edge device for real-time file watching and deterministic math, while offloading the heavy LLM inference to the AMD Instinct cloud GPU. It perfectly simulates a real-world enterprise IoT environment.

What we learned

We learned that jumping into a demanding, design-heavy project with complex structural intricacies requires strict pre-planning, especially when designing state systems and ensuring components communicate properly. More importantly, we learned that original ideas can pivot entirely during a hackathon and still be reframed into an incredibly polished, functional product if you trust your team's engineering fundamentals.

What's next for Rubber Duck

The next step is to expand the deterministic math engine to analyze more advanced digital logic circuits and complex hardware protocols (like I2C and UART). We also want to move away from CSV exports entirely and build a custom driver that parses the raw bitstream from the FPGA in real-time, completely eliminating the Vivado software middleman for an even faster consumer experience.
