The Story of CircuitMind

The Spark of Inspiration

As a Computer Engineering student at the University of Lagos, I spent countless hours in the lab squinting at breadboards and cross-referencing datasheets. The "Aha!" moment came during a particularly frustrating session with a BJT amplifier circuit. I realized that while software developers have AI pair programmers, hardware enthusiasts are often left to debug complex physical layouts manually. I wanted to build a "Lab Partner" that could see what I see and understand the physics behind the wires—moving from abstract theory to embodied hardware reasoning.

How I Built It

The architecture of CircuitMind is designed to bridge the gap between visual input and technical execution by leveraging the Google AI stack:

  • The Brain: I utilized Google AI Studio to leverage multimodal LLMs, enabling the app to "read" circuit diagrams and analyze video feeds of physical hardware.
  • Edge-AI Integration: In my latest iteration, I expanded the scope to include Edge-AI Autonomous Tracking. The system generates roadmaps for an ESP32-S3 with PSRAM to run TinyML models (TFLite Micro) for real-time person detection.
  • The Interface: CircuitMind generates a complete engineering roadmap, including a Bill of Materials, wiring architectures for components like the OV2640 camera, and C++ firmware utilizing hardware interrupts for power efficiency.
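The interrupt-driven power strategy can be sketched in portable C++. This is a host-side illustration, not the project's firmware: on the real ESP32 the handler would be registered with `attachInterrupt()` on a sensor pin, and `onMotionIsr`, `loopOnce`, and `framesProcessed` are hypothetical names used here only to show the pattern.

```cpp
#include <atomic>

// Flag set from interrupt context; the main loop stays idle until it fires.
static std::atomic<bool> motionFlag{false};
static int framesProcessed = 0;

// On the ESP32 this would be an ISR attached to a PIR-sensor pin;
// here it is called directly to simulate the hardware event.
void onMotionIsr() { motionFlag.store(true, std::memory_order_release); }

// One pass of the main loop: only power up the camera and run inference
// when the interrupt flag is set; otherwise the MCU can light-sleep.
void loopOnce() {
    if (motionFlag.exchange(false, std::memory_order_acq_rel)) {
        // esp_camera capture + TFLite Micro invoke would go here.
        ++framesProcessed;
    }
}
```

The point of the pattern is that the expensive work (camera, inference) only runs when the hardware signals an event, which is where the power savings come from.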

Challenges Faced

The most significant hurdle was Multimodal Accuracy in "messy" real-world environments. Teaching an AI to distinguish between a jumper wire and a trace in low-light breadboard photos required precise "Chain-of-Visual-Thought" prompting.

Additionally, ensuring mathematical rigor was essential so the AI wouldn't "hallucinate" engineering values. For instance, when designing a Chebyshev filter for signal conditioning, the system must validate the transfer function:

$$|H(j\Omega)| = \frac{1}{\sqrt{1 + \epsilon^2 C_n^2(\Omega/\Omega_c)}}$$
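This response is easy to validate numerically. A minimal C++ sketch, with illustrative values not taken from the project ($\epsilon = 1$, i.e. roughly 3 dB of passband ripple, and a 4th-order filter):

```cpp
#include <cmath>

// Chebyshev polynomial of the first kind, C_n(x):
// cos(n*acos x) inside [-1, 1], cosh(n*acosh x) outside.
double chebPoly(int n, double x) {
    if (std::fabs(x) <= 1.0) return std::cos(n * std::acos(x));
    return std::cosh(n * std::acosh(x));
}

// |H(jW)| = 1 / sqrt(1 + eps^2 * C_n^2(W/Wc))
double chebMagnitude(double eps, int n, double omegaRatio) {
    double c = chebPoly(n, omegaRatio);
    return 1.0 / std::sqrt(1.0 + eps * eps * c * c);
}
```

At the cutoff, $C_n(1) = 1$ for every order, so $|H| = 1/\sqrt{1+\epsilon^2} \approx 0.707$ when $\epsilon = 1$; past the passband the hyperbolic branch drives the response rapidly toward zero, which is the check the system uses against hallucinated values.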

Similarly, for the BJT-based alert triggers in the threat detection system, it must accurately calculate the voltage gain $A_v$:

$$A_v = - \frac{h_{fe} R_C}{h_{ie} + (1 + h_{fe}) R_E}$$
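As a sanity check, plugging in illustrative small-signal values (not taken from the project: $h_{fe}=100$, $h_{ie}=2.5\,\text{k}\Omega$, $R_C=4.7\,\text{k}\Omega$, $R_E=100\,\Omega$):

$$A_v = -\frac{100 \times 4.7\,\text{k}\Omega}{2.5\,\text{k}\Omega + 101 \times 0.1\,\text{k}\Omega} = -\frac{470}{12.6} \approx -37.3$$

A hallucinated value for any one of these parameters shifts the gain far enough to miss or exceed the trigger threshold, which is why the validation step matters.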

What I Learned: From LLM to LWM

This project represents a critical evolution in my understanding of Artificial Intelligence. I have moved beyond the paradigm of Large Language Models (LLMs), which operate primarily on linguistic probability, to Large World Models (LWMs). Unlike traditional models, an LWM requires a "visual nervous system" to perceive, reason, and validate physical reality in real-time.

By deploying TinyML on the edge—specifically utilizing the ESP32-S3 to run TFLite Micro—I learned how to ground AI in the physical world. CircuitMind demonstrates that when AI is given the capacity for embodied reasoning, it transforms from a text generator into a powerful laboratory assistant that understands the spatial and electrical context of a circuit. This shift is essential for the future of Embodied AI, where the model must navigate the complexities of hardware design and real-world physics.


Updates


Project Name: CircuitMind

Tagline

"Bridging the Physical-Digital Divide: A Multimodal AI Framework for Embodied Hardware Reasoning."


About the Project

The Inspiration: The "Silent" Hardware Barrier

In software development, we enjoy the luxury of "perfect information"—compilers catch syntax errors instantly. In hardware engineering, information is "silent," physical, and often hidden behind overlapping wires or obscure datasheets. While designing complex embedded systems for my CPE 512 course at the University of Lagos, I realized the primary bottleneck in innovation is the high friction of physical debugging. I was inspired to create CircuitMind to give hardware the "compiler" it never had: a multimodal AI that perceives, reasons, and validates physical circuits in real-time.

The Build: Integrating the Google AI Stack

I architected CircuitMind using a multimodal pipeline powered by Gemini 3 Flash via Google AI Studio.

  • Visual Reasoning: The system treats a breadboard or PCB not as a static image, but as a dynamic graph, using "Chain-of-Visual-Thought" to identify components and trace connections.
  • Edge-AI & Computer Vision: For the demo project—an Edge-AI Autonomous Tracking & Threat Detection Camera—CircuitMind generated a full roadmap using an ESP32-S3 with PSRAM to run a TinyML model (TFLite Micro).
  • Firmware & Control Logic: CircuitMind provided the C++ firmware to manage a dual-axis pan-tilt mechanism, local alert systems, and real-time person detection logic using the esp_camera and TensorFlowLite_ESP32 libraries.
  • Mathematical Rigor: The system validates the signal processing and filter synthesis required for stable sensor data. For instance, it ensures the transfer function $H(s)$ meets ripple specifications for noise reduction:

$$|H(j\Omega)| = \frac{1}{\sqrt{1 + \epsilon^2 C_n^2(\Omega/\Omega_c)}}$$
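The same transfer function also fixes the minimum filter order once the specs are set. A sketch in C++, using assumed example specs (1 dB passband ripple, at least 40 dB stopband attenuation at twice the cutoff; `chebyshevOrder` is an illustrative helper name):

```cpp
#include <cmath>

// Minimum Chebyshev order n giving apDb of passband ripple and at least
// asDb of attenuation at the normalized stopband edge ws (= Ws/Wp).
int chebyshevOrder(double apDb, double asDb, double ws) {
    double num = std::sqrt((std::pow(10.0, asDb / 10.0) - 1.0) /
                           (std::pow(10.0, apDb / 10.0) - 1.0));
    return static_cast<int>(std::ceil(std::acosh(num) / std::acosh(ws)));
}
```

For the example specs above, `chebyshevOrder(1.0, 40.0, 2.0)` yields a 5th-order filter.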

Technical Challenges: Solving for "Real-World Noise"

  1. Spatial Ambiguity: Identifying components in a high-density, multi-wire setup like an ESP32-S3 camera module requires intensive spatial reasoning. I implemented prompting that forces the model to perform spatial cross-referencing to verify pinouts (e.g., SDA/SCL and PWM pins) before suggesting code.
  2. Precision vs. Hallucination: To ensure safety in threat detection, I integrated a verification layer for BJT-based alert circuits, calculating voltage gain $A_v$ to ensure the buzzer/alarm trigger is within logic-level thresholds:

$$A_v = - \frac{h_{fe} R_C}{h_{ie} + (1 + h_{fe}) R_E}$$
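The verification layer itself amounts to only a few lines. A host-side sketch with hypothetical values (50 mV input amplitude, a 3.3 V logic rail; `bjtGain` and `checkAlertDrive` are illustrative names, not the project's API):

```cpp
#include <cmath>

// Small-signal voltage gain of the emitter-degenerated CE stage.
double bjtGain(double hfe, double hie, double rc, double re) {
    return -(hfe * rc) / (hie + (1.0 + hfe) * re);
}

// Verify the amplified swing clears the trigger threshold but stays
// within the logic rail, before the alert circuit is accepted.
bool checkAlertDrive(double vinPeak, double gain,
                     double vRail, double vThreshold) {
    double voutPeak = std::fabs(gain) * vinPeak;
    return voutPeak >= vThreshold && voutPeak <= vRail;
}
```

With $h_{fe}=100$, $h_{ie}=2.5\,\text{k}\Omega$, $R_C=4.7\,\text{k}\Omega$, $R_E=100\,\Omega$, the gain is about $-37.3$, so a 50 mV input drives roughly 1.9 V at the output: above a 1 V trigger threshold and safely inside the 3.3 V rail.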

What I Learned: The Future of Embodied AI

This project taught me that the next frontier of AI isn't just Large Language Models—it’s Large World Models. I learned how to deploy TinyML on the edge and manage the trade-offs between model latency and reasoning depth. CircuitMind proved that AI can democratize complex engineering, turning a smartphone into a powerful laboratory assistant that lowers the barrier for the next generation of hardware innovators.
