Inspiration
We’ve worked with PCB design before, and one thing was always clear: the learning curve is steep. Translating a physical breadboard into a proper schematic—and then into a PCB—requires a lot of abstract thinking that isn’t beginner-friendly. We wanted to lower that barrier. The idea behind Tabula was simple: use AI to bridge the gap between hands-on prototyping and professional circuit design, so beginners can focus on building rather than struggling with tools.
What it does
Tabula takes an image of a breadboard circuit and converts it into downloadable KiCad design files. It automatically detects components, traces wire connections, infers electrical connectivity, and generates both a schematic and PCB layout. The goal is to turn a physical prototype into something production-ready in minutes.
How we built it
We built Tabula as a full-stack pipeline combining computer vision and AI. On the backend, we used OpenCV for image preprocessing, perspective correction, and grid detection to model the breadboard structure. Roboflow handled component detection, while Gemini interpreted component values and validated the circuit. We then constructed a connectivity graph with NetworkX based on breadboard rules, and finally generated KiCad-compatible schematic and PCB files. The frontend was built with React to provide a clean, fast experience for uploading images and downloading results.
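The breadboard-rules step can be sketched roughly as follows. This is a simplified illustration, not our actual code: the function names (`tie_point`, `build_nets`) and the pin format are hypothetical, and power rails are omitted. The core idea is that every hole maps to an electrical tie-point strip, component pins attach to strips as graph edges, and the connected components of the resulting graph are the electrical nets.

```python
import networkx as nx

def tie_point(row: int, col: int) -> str:
    """Map a breadboard hole to its electrical tie-point strip.

    On a standard breadboard, columns a-e (0-4) share one strip per
    row and columns f-j (5-9) share another. Power rails omitted.
    """
    side = "left" if col < 5 else "right"
    return f"{side}-{row}"

def build_nets(component_pins):
    """component_pins: list of (component_id, [(row, col), ...]) pairs
    from the detection stage. Returns nets as sets of (component, pin)."""
    g = nx.Graph()
    for comp, pins in component_pins:
        for i, (row, col) in enumerate(pins):
            # Each pin is tied to the strip its hole belongs to.
            g.add_edge(tie_point(row, col), (comp, i))
    # Every connected component of the graph is one electrical net;
    # keep only the component pins, dropping the strip nodes.
    return [
        {n for n in cc if isinstance(n, tuple)}
        for cc in nx.connected_components(g)
    ]

# A resistor and an LED sharing the left strip of row 12:
nets = build_nets([
    ("R1", [(10, 2), (12, 3)]),
    ("D1", [(12, 1), (15, 2)]),
])
# R1 pin 1 and D1 pin 0 end up on the same net.
```

In practice this is where detection errors hurt most: a pin snapped to the wrong hole lands on the wrong strip and silently merges or splits nets, which is why the grid-mapping step below had to be so reliable.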
Challenges we ran into
One of the hardest problems was virtualizing the breadboard into a structured grid using OpenCV. Accurately mapping physical holes and determining which components connect to which nodes required both precise image processing and logical inference. Small errors in alignment or detection could completely break the connectivity graph, so ensuring reliability here was a major challenge.
Accomplishments that we're proud of
We’re proud that we were able to build an end-to-end system that actually works—from image input all the way to real, usable KiCad files. Integrating multiple AI tools with traditional computer vision in such a short time was a big win. We also made the entire pipeline run on free-tier services, making it accessible to anyone.
What we learned
We learned how to combine classical computer vision with modern AI models effectively. This project also deepened our understanding of circuit representation, graph-based modeling, and the practical challenges of translating real-world visuals into structured data. Just as importantly, we learned how to design systems that gracefully handle failure when working with imperfect AI outputs.
What's next for Tabula
Next, we want to train a dedicated Roboflow model specifically for breadboard circuits to significantly improve detection accuracy. We also want to support multi-angle image inputs, allowing users to upload multiple perspectives of the same circuit and combine them into a more accurate reconstruction. Longer term, we’re aiming to make Tabula robust enough for real-world use beyond hackathons, especially for education and rapid prototyping.