Inspiration

We were inspired by the challenges visually impaired individuals face when navigating urban environments independently. While tools like canes and guide dogs are helpful, they have limitations in detecting dynamic or distant obstacles. We therefore wanted to build a compact, affordable, and portable aid that empowers its users.

What it does

Our device is a real-time computer vision system that detects nearby obstacles in a webcam feed and alerts the user. The system identifies key objects such as people, bicycles, dogs, stairs, crosswalks, traffic lights, and stop signs. It then estimates the distance to each object and delivers soft, intuitive haptic alerts when a moving object is approaching or a static obstacle lies ahead.
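The write-up doesn't specify how distances are computed from a single webcam, but a common monocular approach is the pinhole-camera model: distance ≈ focal length × real object height / bounding-box height in pixels. A minimal sketch, where the class heights and focal length are illustrative assumptions rather than the project's actual calibration:

```python
# Sketch of monocular distance estimation via the pinhole-camera model.
# KNOWN_HEIGHTS_M and FOCAL_LENGTH_PX are illustrative assumptions.

KNOWN_HEIGHTS_M = {"person": 1.7, "bicycle": 1.0, "stop sign": 0.75}
FOCAL_LENGTH_PX = 700.0  # hypothetical webcam focal length, in pixels

def estimate_distance_m(class_name: str, bbox_height_px: float):
    """Approximate distance to a detected object from its bounding-box height."""
    real_height = KNOWN_HEIGHTS_M.get(class_name)
    if real_height is None or bbox_height_px <= 0:
        return None  # unknown class or degenerate box
    return FOCAL_LENGTH_PX * real_height / bbox_height_px
```

For example, a person whose bounding box is 350 px tall would be estimated at roughly 3.4 m away under these assumed constants.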

How we built it

We trained a YOLOv5 model to additionally recognize crosswalks and stairs. We ran the model on a Raspberry Pi for fast, real-time inference, and used serial communication to drive the haptic feedback.
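The post doesn't detail the serial protocol between the Pi and the haptic hardware, so here is one way the alert side could look: a small framed packet that a microcontroller driving a vibration motor could parse. The packet layout (start byte, alert code, intensity, checksum) is our assumption, not the project's actual protocol:

```python
# Hypothetical 4-byte alert packet: [start, alert code, intensity, checksum].

START_BYTE = 0xAA
ALERT_CODES = {"static_obstacle": 0x01, "approaching_object": 0x02}

def encode_alert(alert_type: str, intensity: int) -> bytes:
    """Encode one haptic alert as a framed serial packet."""
    code = ALERT_CODES[alert_type]
    intensity = max(0, min(255, intensity))     # clamp to one byte
    checksum = (code + intensity) & 0xFF        # simple additive checksum
    return bytes([START_BYTE, code, intensity, checksum])

# On the Pi this packet would be sent with pyserial, e.g.:
# serial.Serial("/dev/ttyUSB0", 115200).write(encode_alert("static_obstacle", 128))
```

A framed packet with a checksum lets the receiving microcontroller resynchronize and discard corrupted bytes, which matters on a noisy wearable link.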

Challenges we ran into

Optimizing the model to run efficiently on the Raspberry Pi. Training the model to correctly identify crosswalks and staircases. Designing the alert protocols to be intuitive and helpful without overwhelming the user.
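On the last point, one standard way to keep alerts from overwhelming the user is a per-obstacle cooldown so the same detection doesn't buzz on every frame. A minimal sketch of that idea (the 2-second window is an assumption, not the project's tuned value):

```python
# Cooldown gate: suppress repeat alerts for the same obstacle within a window.

import time

class AlertGate:
    def __init__(self, cooldown_s: float = 2.0, clock=time.monotonic):
        self.cooldown_s = cooldown_s
        self.clock = clock            # injectable clock, handy for testing
        self._last_fired = {}         # alert key -> timestamp of last alert

    def should_fire(self, key: str) -> bool:
        """Return True (and record the time) only if the cooldown has elapsed."""
        now = self.clock()
        last = self._last_fired.get(key)
        if last is not None and now - last < self.cooldown_s:
            return False              # still inside the cooldown window
        self._last_fired[key] = now
        return True
```

Keying the gate per object class (or per tracked object) lets urgent new obstacles still alert immediately while repeats of a known one stay quiet.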

Accomplishments that we're proud of

We demonstrated that, with further optimization and design work, this could become genuinely helpful to our stakeholders.

What we learned

We learned to train our own computer vision model and run real-time inference with it, and to connect hardware and software components into a cohesive application.

What's next for Walking Aid for the Visually Impaired

We want to talk to potential users to gather feedback and suggestions on how to keep improving the system's detections, algorithms, and haptic feedback. Our goal is a system that serves as a companion to our users without making decisions for them. Concretely, we want to:

- Train a more specialized, accurate model that looks only for objects relevant to the pedestrian experience, such as other people, poles, bus stops, and crosswalks.
- Implement more robust protocols, such as distance tracking across frames, car and traffic-light detection at crosswalks, and detection of rails or other accessibility accommodations at stairs.
- Make the system even more compact, ideally fitting in a clothing pocket.
- Try more powerful compact hardware to further optimize the system.
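The distance-tracking idea mentioned above could start as simply as comparing successive distance estimates for the same object to get a closing speed; the threshold below is an illustrative assumption, not a tuned value:

```python
# Sketch: frame-to-frame distance tracking to flag approaching objects.

def closing_speed_mps(prev_dist_m: float, curr_dist_m: float, dt_s: float) -> float:
    """Speed at which an object closes on the user; positive = getting closer."""
    if dt_s <= 0:
        raise ValueError("dt_s must be positive")
    return (prev_dist_m - curr_dist_m) / dt_s

def is_approaching(prev_dist_m: float, curr_dist_m: float, dt_s: float,
                   threshold_mps: float = 0.5) -> bool:
    """Flag objects closing faster than a (hypothetical) threshold speed."""
    return closing_speed_mps(prev_dist_m, curr_dist_m, dt_s) > threshold_mps
```

In practice this would sit behind a per-object tracker and some smoothing, since single-frame distance estimates from a monocular camera are noisy.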
