SeeThrough

Inspiration

We started with a simple question:
Why should blind individuals be unable to perceive their surroundings when technology can bridge the gap?

Current assistive technologies often lack real-time responsiveness, require bulky hardware, or fail to provide intuitive feedback. By combining AI-powered object detection, on-demand text reading, and proximity alerts, we set out to build a lightweight, real-time solution that enhances independence and safety.

What It Does

SeeThrough detects objects in front of visually impaired users and describes them aloud using text-to-speech. It can:

  • Identify objects (e.g., "Chair on your left.")
  • Read printed text on command
  • Warn about obstacles using proximity sensors
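
To make the first feature concrete, here is a minimal sketch of how a detection's bounding-box position could be turned into a phrase like "Chair on your left." The function name `describe_detection` and the one-third/two-thirds split of the frame are illustrative assumptions, not our exact implementation:

```python
def describe_detection(label: str, box_center_x: float, frame_width: int) -> str:
    """Map a detected object and its horizontal position to a spoken phrase.

    The frame is split into thirds: left, ahead, right. These cut-offs are
    an illustrative choice; any angular mapping would work the same way.
    """
    ratio = box_center_x / frame_width
    if ratio < 1 / 3:
        side = "on your left"
    elif ratio > 2 / 3:
        side = "on your right"
    else:
        side = "ahead"
    return f"{label.capitalize()} {side}."
```

The resulting string is then handed to the TTS engine, so the same logic works regardless of which speech backend is used.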

How We Built It

  • YOLO for real-time object detection
  • ESP32-CAM for live video streaming
  • Ultrasonic Sensors for proximity detection
  • Text-to-Speech (TTS) for spoken feedback
  • Edge AI Optimization for low-latency performance
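
As an illustration of the ultrasonic proximity path, here is a small sketch. The time-to-distance conversion is the standard round-trip calculation for HC-SR04-style sensors; the function names and the 100 cm alert threshold are our illustrative assumptions, and the hardware-specific GPIO trigger/echo handling is omitted:

```python
from typing import Optional

SPEED_OF_SOUND_CM_S = 34300  # approximate speed of sound in air at room temperature


def echo_to_distance_cm(pulse_duration_s: float) -> float:
    """Convert an ultrasonic echo round-trip time to a one-way distance in cm."""
    # The pulse travels to the obstacle and back, hence the division by 2.
    return pulse_duration_s * SPEED_OF_SOUND_CM_S / 2


def obstacle_alert(distance_cm: float, threshold_cm: float = 100.0) -> Optional[str]:
    """Return a spoken warning when an obstacle is inside the threshold, else None."""
    if distance_cm <= threshold_cm:
        return f"Obstacle {int(round(distance_cm))} centimetres ahead."
    return None
```

Keeping the conversion and the alert decision as pure functions makes them easy to test off-device before wiring in the sensor.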

Challenges We Ran Into

  • Reducing latency for real-time object detection
  • Processing AI models on low-power hardware
  • Ensuring clear speech feedback without overwhelming users
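
One way to sketch the last challenge, keeping speech feedback clear without flooding the user, is a per-object cooldown: an object is only announced again after a quiet window has passed. The class name and the 5-second default are hypothetical, but the pattern is what we converged on:

```python
class AnnouncementThrottle:
    """Suppress repeat announcements of the same object within a cooldown window."""

    def __init__(self, cooldown_s: float = 5.0):
        self.cooldown_s = cooldown_s
        self._last_spoken = {}  # label -> timestamp of last announcement

    def should_announce(self, label: str, now_s: float) -> bool:
        """Return True (and record the time) if this label may be spoken now."""
        last = self._last_spoken.get(label)
        if last is None or now_s - last >= self.cooldown_s:
            self._last_spoken[label] = now_s
            return True
        return False
```

Passing the clock in as `now_s` rather than reading it internally keeps the throttle deterministic and testable.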

Accomplishments We're Proud Of

  • Successfully implemented real-time object detection
  • Optimized text reading accuracy for better accessibility
  • Developed a compact and user-friendly prototype

What We Learned

  • Optimizing AI for embedded devices is crucial for speed and efficiency.
  • User-friendly feedback matters: balancing information density with clarity.
  • Hardware limitations require smart optimizations for performance.

What’s Next for SeeThrough?

  • Integrating Ray-Ban smart glasses or VR headsets for a hands-free experience.
  • Optimizing AI models for greater detection accuracy and efficiency.
