Inspiration

We were inspired by the growing global concern around microplastics — tiny particles that enter our water, our environment, and even our bodies without us ever seeing them. Existing detection methods are slow, expensive, and restricted to advanced laboratories. This motivated us to build a solution that makes microplastic detection fast, affordable, portable, and accessible for everyone. Our goal was simple: to make the invisible visible and empower people to take action toward cleaner water.

How We Built the Project

We designed a compact optical detection system using a Raspberry Pi, controlled LED illumination, and a high-quality camera capable of capturing microplastic fluorescence after staining with Nile Red. We assembled a custom 3D-printed enclosure, integrated a quartz/acrylic cuvette for sample flow, and used a band-pass filter to isolate the emission wavelengths. Machine learning models were developed to identify and classify microplastic particles from captured images. The entire system was powered by a Li-ion battery pack, making it fully portable. We optimized both hardware and software to achieve real-time detection and accurate measurement of particle size and count.
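The particle counting and sizing step described above can be sketched roughly as follows. This is an illustrative example, not our production pipeline: the threshold value, minimum area, and the synthetic test frame are all placeholders, and a real frame from the camera would replace the generated one.

```python
import numpy as np
from scipy import ndimage

def detect_particles(frame, threshold=50, min_area=5):
    """Count bright fluorescent blobs in a grayscale frame and
    return their areas in pixels (illustrative thresholds)."""
    mask = frame > threshold                    # separate stained particles from dark background
    labels, n = ndimage.label(mask)             # connected-component labelling
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return [int(a) for a in areas if a >= min_area]

# Synthetic frame: dark background with three bright disk-shaped "particles"
yy, xx = np.mgrid[0:100, 0:100]
frame = np.zeros((100, 100), dtype=np.uint8)
for cx, cy in [(20, 20), (70, 50), (30, 80)]:
    frame[(xx - cx) ** 2 + (yy - cy) ** 2 <= 16] = 255

areas = detect_particles(frame)
print(len(areas))  # → 3
```

Pixel areas can then be converted to physical particle sizes using the system's known optical magnification.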

What We Learned

Throughout this project, we gained hands-on experience in:

Designing optical systems and understanding fluorescence physics

Working with image processing and ML classification models

Managing embedded hardware like Raspberry Pi and camera sensors

Understanding microplastic behavior, properties, and detection limitations

Combining hardware + software + environmental science into one unified solution

We also learned that real-world water samples behave differently from controlled lab samples, which pushed us to refine our preprocessing and detection pipeline.
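One preprocessing refinement of the kind mentioned above is illumination flattening: real samples and an imperfect LED field produce a slowly varying background that a fixed threshold cannot handle. A minimal sketch of the idea, assuming SciPy and a hypothetical blur width (`sigma=15`):

```python
import numpy as np
from scipy import ndimage

def flatten_illumination(frame, sigma=15):
    """Remove slowly varying background (e.g. uneven LED illumination)
    by subtracting a heavily blurred copy of the frame."""
    background = ndimage.gaussian_filter(frame.astype(float), sigma=sigma)
    corrected = frame.astype(float) - background
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Frame with a left-to-right illumination gradient plus one bright particle
gradient = np.tile(np.linspace(0, 120, 100), (100, 1))
frame = gradient.astype(np.uint8).copy()
frame[48:52, 48:52] = 255

corrected = flatten_illumination(frame)
peak = np.unravel_index(np.argmax(corrected), corrected.shape)
print(peak)  # brightest point after correction lies inside the particle
```

After correction, the particle stands out against a nearly uniform background, so a single global threshold works across the whole field of view.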

Challenges We Faced

Some of our major challenges included:

Fluorescence inconsistency: Nile Red behaves differently depending on polymer type and staining conditions.

Image noise: Background reflections and uneven illumination required advanced filtering and calibration.

Hardware alignment: Achieving perfect alignment between LEDs, cuvette, and camera demanded multiple iterations of 3D-printed holders.

Model accuracy: ML classification required a diverse dataset of microplastic shapes, sizes, and textures.

Cost optimization: High-quality optical components were expensive, so we spent considerable time reducing the bill of materials while maintaining performance.

Despite these challenges, each step strengthened our understanding and improved our final prototype.
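The shape-based classification challenge above can be illustrated with a small scikit-learn example. Everything here is synthetic and hypothetical — the feature set (area, aspect ratio, circularity), class labels, and value ranges are stand-ins chosen to show the approach, not our trained model or dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic per-particle shape features: [area_px, aspect_ratio, circularity].
# "Fibers" are elongated (high aspect ratio, low circularity);
# "fragments" are compact (low aspect ratio, high circularity).
fibers = np.column_stack([
    rng.uniform(20, 80, 200),     # area in pixels
    rng.uniform(5, 15, 200),      # aspect ratio
    rng.uniform(0.05, 0.3, 200),  # circularity
])
fragments = np.column_stack([
    rng.uniform(20, 80, 200),
    rng.uniform(1, 3, 200),
    rng.uniform(0.6, 1.0, 200),
])
X = np.vstack([fibers, fragments])
y = np.array([0] * 200 + [1] * 200)  # 0 = fiber, 1 = fragment

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# A long, thin particle should be classified as a fiber (label 0)
print(clf.predict([[40.0, 10.0, 0.1]])[0])  # → 0
```

In practice the hard part is exactly what the list above notes: collecting a dataset diverse enough in shapes, sizes, and textures that the classifier generalizes beyond lab-prepared samples.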

Built With

  • 3D printing (enclosure)
  • Band-pass filter
  • LED optical system
  • High-resolution camera module
  • Hardware/platform: Raspberry Pi 4
  • Languages: Python, Bash
  • Libraries/frameworks: OpenCV, NumPy, SciPy, scikit-learn, TensorFlow/PyTorch
  • Backend (optional): Flask / FastAPI
  • Database: SQLite / Firebase (optional)
  • Tools: Git/GitHub