It began during a late evening in our college microbiology lab. A single slide under the microscope showed faint organisms, but each of us interpreted their shapes differently. Our mentor calmly asked, “If every observer sees something else, how do we measure truth?” That question sparked our idea: the microscope shouldn’t just display—it should understand.

Our project transforms a conventional microscope into an intelligent diagnostic system. It examines stained slides in real time, consistently recognizing morphology and pathogenic signatures and estimating the most likely species. Instead of waiting days for cultures or relying on subjective interpretation, clinicians receive fast, standardized insights. The tool bridges observation and diagnosis, enabling reliable decision-making even in smaller laboratories.

Alongside the hardware-integrated system, we provide a web-based diagnosis platform where users can upload images or videos and receive AI-generated interpretations remotely. This makes the technology accessible to institutes that already have a microscope and camera but lack specialized hardware. Thanks to its compact, portable design, the system can also be deployed in medical camps and low-resource regions, enabling rapid diagnosis, early outbreak detection, and timely public-health interventions where they matter most.

Portability and accessibility come from a tightly coupled software–hardware design. A compact motorized stage, a standard camera sensor, and an embedded controller pair with a deep-learning pipeline. The components are non-proprietary and easy to assemble, allowing academic labs, clinics, and forensic teams to deploy the system without specialized infrastructure. For field researchers working with deep-sea samples or biological evidence, the microscope becomes a travel-ready companion capable of analyzing specimens at the source.
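To make the pairing concrete, here is a minimal sketch of the real-time loop on the embedded controller, assuming the ultralytics YOLO package and a camera read through OpenCV; the weights file name is a hypothetical placeholder, not our exact code.

```python
# Minimal real-time detection loop (illustrative sketch).
# Assumes ultralytics YOLO and an OpenCV-visible camera; the weights
# file "bacteria_yolo.pt" is a hypothetical placeholder.
import cv2
from ultralytics import YOLO

model = YOLO("bacteria_yolo.pt")   # trained on bacterial morphologies
cap = cv2.VideoCapture(0)          # the microscope's camera sensor

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]     # inference on one frame
    cv2.imshow("Microscope AI", result.plot())  # boxes + shape labels
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```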

As we built the system, we encountered several challenges that shaped its evolution. Bright-field light microscopy often struggles to reveal the true morphology of live cells, so we introduced an opaque condenser disk, a simple dark-field arrangement in which the objective captures only light scattered by the specimen, greatly improving contrast. Running AI models initially required a full computer, which pushed us to integrate a Raspberry Pi capable of both inference and stage-motor control. To support institutes that already have microscopes and cameras, we also developed a web-based detection platform where uploaded images or videos are analyzed instantly.
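The web platform reduces to a small upload-and-analyze endpoint. The sketch below assumes Flask and the same ultralytics YOLO model; the route, the upload field name, and the weights file are illustrative placeholders.

```python
# Sketch of a web-based detection endpoint. Flask, the "/analyze" route,
# and the "image" field name are assumptions for illustration.
import cv2
import numpy as np
from flask import Flask, request, jsonify
from ultralytics import YOLO

app = Flask(__name__)
model = YOLO("bacteria_yolo.pt")  # hypothetical trained weights

@app.route("/analyze", methods=["POST"])
def analyze():
    # Decode the uploaded image into an OpenCV array and run detection.
    raw = np.frombuffer(request.files["image"].read(), np.uint8)
    frame = cv2.imdecode(raw, cv2.IMREAD_COLOR)
    result = model(frame, verbose=False)[0]
    return jsonify([
        {"shape": model.names[int(box.cls)],  # e.g. coccus vs. bacillus
         "confidence": float(box.conf)}
        for box in result.boxes
    ])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```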

Along the way, we successfully created an AI model that distinguishes bacteria by their shapes, deployed it for both real-time and web-based detection, engineered microcontroller-driven stage control, and synchronized it with the AI pipeline for autonomous scanning.
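That synchronization can be pictured as a scan loop: step the stage one field of view, let vibrations settle, capture, detect, repeat. A minimal sketch follows, assuming a Raspberry Pi pulsing a stepper driver over GPIO; the pin numbers, step counts, and field count are hypothetical and depend on the stage geometry.

```python
# Sketch of the autonomous scanning loop: step the stage, settle,
# image the field, detect, advance. GPIO pins, step counts, and the
# field count are hypothetical values for illustration.
import time
import cv2
import RPi.GPIO as GPIO
from ultralytics import YOLO

STEP_PIN, DIR_PIN = 17, 27            # hypothetical pins for the X axis
GPIO.setmode(GPIO.BCM)
GPIO.setup([STEP_PIN, DIR_PIN], GPIO.OUT)

def move_steps(n, forward=True):
    """Pulse the stepper driver n times to shift the stage."""
    GPIO.output(DIR_PIN, forward)
    for _ in range(n):
        GPIO.output(STEP_PIN, True)
        time.sleep(0.001)
        GPIO.output(STEP_PIN, False)
        time.sleep(0.001)

model = YOLO("bacteria_yolo.pt")
cap = cv2.VideoCapture(0)

for field in range(10):               # ten fields of view along one row
    time.sleep(0.2)                   # settle before imaging
    ok, frame = cap.read()
    if ok:
        result = model(frame, verbose=False)[0]
        print(f"field {field}: {len(result.boxes)} organisms detected")
    move_steps(200)                   # advance one field of view

cap.release()
GPIO.cleanup()
```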

This journey strengthened our skills in data collection and annotation, AI model training, GPU-accelerated workflows on cloud platforms like Google Colab, stepper-motor control using microcontrollers, and integrating YOLO inference with Arduino-controlled hardware.
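On the training side, the Colab workflow reduces to a few lines with the ultralytics API; the dataset configuration file and hyperparameters below are hypothetical stand-ins.

```python
# Sketch of the GPU-accelerated training step on Google Colab.
# "bacteria_shapes.yaml" and the hyperparameters are hypothetical.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # start from pretrained weights
model.train(
    data="bacteria_shapes.yaml",      # annotated micrograph dataset
    epochs=100,
    imgsz=640,
    device=0,                         # Colab's GPU
)
```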

Going forward, we aim to scale the system with unsupervised deep-learning methods that handle large datasets more accurately, while extending our AI models to new diagnostic, scientific, and investigative fields.
