Inspiration

We were inspired by the lack of accessible, real-time navigation tools for visually impaired individuals. While tools like Google Maps provide directions, they don’t actively interpret surroundings or warn users about immediate obstacles. We wanted to create something that bridges that gap, making navigation safer, more intuitive, and more independent.

What it does

ClearPath is a smart assistive system that detects objects in real time using computer vision and provides audio feedback to the user. It identifies key obstacles such as people, walls, and other objects in the user’s path, and converts them into spoken alerts delivered through headphones. The goal is to act as a “second set of eyes” for users, helping them move safely through their environment.

How we built it

We built ClearPath on a Raspberry Pi connected to a camera module for live video capture. For object detection, we used YOLO to recognize classes such as “person” in real time. The detected labels are passed to a text-to-speech engine (such as eSpeak), which converts them into audio alerts played through wired headphones. We also set up communication between a development machine and the Pi to handle processing and video streaming.
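The core loop is simple: grab a frame, run detection, and speak any relevant labels. Below is a minimal sketch of that pipeline, assuming the ultralytics YOLO package and the espeak command-line tool; the model file, camera index, and alert phrase are illustrative rather than our exact configuration.

```python
# Minimal sketch: camera -> YOLO detection -> spoken alert.
# Assumes the ultralytics package and the espeak CLI are installed;
# model file, camera index, and alert text are illustrative.
import subprocess

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")     # small pretrained model (hypothetical choice)
cap = cv2.VideoCapture(0)      # Pi camera exposed as a video device

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    labels = {model.names[int(cls)] for cls in result.boxes.cls}
    if "person" in labels:
        # Speak through the default audio output (wired headphones).
        subprocess.run(["espeak", "person ahead"])
```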

Challenges we ran into

One major challenge was setting up reliable audio output on the Raspberry Pi, especially when dealing with Bluetooth devices and audio sinks. We also faced issues with latency in object detection and ensuring real-time performance. Another challenge was filtering detections so the system doesn’t overwhelm the user with constant or unnecessary audio alerts.
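As a sketch of the filtering idea, one approach is a per-label cooldown so the same detection is not announced over and over; the cooldown value below is illustrative, not our exact setting.

```python
import time

class AlertFilter:
    """Suppress repeated alerts for the same label within a cooldown window."""

    def __init__(self, cooldown_s: float = 5.0):   # illustrative cooldown
        self.cooldown_s = cooldown_s
        self.last_spoken: dict[str, float] = {}

    def should_alert(self, label: str) -> bool:
        now = time.monotonic()
        if now - self.last_spoken.get(label, 0.0) < self.cooldown_s:
            return False
        self.last_spoken[label] = now
        return True
```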

Accomplishments that we're proud of

We successfully built a working prototype that detects people and immediately provides audio feedback. We overcame hardware/software integration issues and created a system that runs end-to-end, from camera input to spoken output. Most importantly, we built something with genuine real-world accessibility potential.

What we learned

We learned how to integrate hardware with machine learning models in a real-time system. We gained experience troubleshooting low-level issues like audio drivers and system processes, and improved our understanding of deploying AI models on edge devices. We also learned the importance of user-centered design, especially for accessibility tools.

What's next for ClearPath

Next, we want to improve detection accuracy and expand the range of recognizable objects. We also plan to add spatial awareness (e.g., “person on your left”) and vibration feedback for multi-sensory alerts. Eventually, we aim to miniaturize the system into a wearable or handheld form, such as a smart cane or smart glasses, and optimize it for everyday use.
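For the spatial-awareness idea, one simple approach would be to map a detection’s horizontal position in the frame to a spoken direction; the thresholds below are illustrative and would need tuning to the camera’s field of view.

```python
def direction_of(box_center_x: float, frame_width: int) -> str:
    """Map a bounding box's horizontal center to a spoken direction.

    Thresholds are illustrative; a real device would tune them to the
    camera's field of view.
    """
    ratio = box_center_x / frame_width
    if ratio < 0.33:
        return "on your left"
    if ratio > 0.66:
        return "on your right"
    return "ahead"

# Example: a person detected near the left edge of a 640-pixel-wide frame
# would be announced as "person on your left".
```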

Built With

Raspberry Pi, camera module, YOLO, eSpeak