Inspiration
EyeSee was inspired by the need to improve winter navigation for the visually impaired. We saw an opportunity to harness machine learning for creating a more intuitive and effective aid, addressing the specific challenges posed by snow and ice.
Functionalities
EyeSee utilizes a Raspberry Pi-mounted glasses system equipped with a machine learning pipeline trained on images of snow collected near our house. Built from standard tools for image loading and processing, the pipeline uses torchvision.models.detection and SSD for real-time bounding-box detection of snow. In hazardous winter conditions, this enables quick, preventive warnings for pedestrians who might not be aware of snow on the ground, and especially for those who are visually impaired.
The system comprises dual cameras: one captures visible light, while the other is tuned for near-infrared imaging. The visible light camera feeds images to the neural network for analysis, and the near-infrared camera can be used to further enhance object detection accuracy in the scene.
It uses machine learning for fast object detection with an on-device inference system; when snow is detected, EyeSee triggers a buzzer alert. This notification enables users to take immediate precautions or seek alternative paths, guided toward the direction in which no alert is sounded. This intuitive design allows users to navigate more safely and confidently in snowy environments.
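The detection-to-alert step can be sketched as a small decision function. The function name, tuple layout, and score threshold below are illustrative placeholders, not EyeSee's actual code:

```python
def pick_alert_side(boxes, frame_width, score_threshold=0.5):
    """Decide which buzzer side to sound from SSD-style detections.

    boxes: list of (x_min, y_min, x_max, y_max, score) tuples in pixels.
    Returns "left", "right", or None when no snow is detected, so the
    user can move toward the silent side.
    """
    hits = [b for b in boxes if b[4] >= score_threshold]
    if not hits:
        return None  # no alert: path appears clear
    # Sound the buzzer on the side where most detected snow lies.
    centers = [(b[0] + b[2]) / 2 for b in hits]
    left = sum(1 for c in centers if c < frame_width / 2)
    return "left" if left >= len(centers) - left else "right"
```

For example, a single confident detection on the left half of a 640-pixel-wide frame, `pick_alert_side([(10, 40, 120, 200, 0.9)], 640)`, returns `"left"`, telling the user to bear right.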
Embedded System and ML Infrastructure
EyeSee combines a Raspberry Pi with an open-source Single Shot MultiBox Detector (SSD) model from Torchvision for real-time inference. This setup processes images from the dual cameras mounted at the front. Connected to the glasses is a head/neck band, which holds the Raspberry Pi and two 18650 batteries.
Challenges
Initial research revealed that snow strongly absorbs infrared light at wavelengths around 1500 nm, a feature not shared by most materials that appear white in visible light. The hurdle we faced, however, was the prohibitive cost of short-wave infrared cameras capable of detecting this spectrum.
Our solution was to adopt a Raspberry Pi camera module without the IR-cut filter, which is sensitive up to roughly 900 nm. While not optimal, snow still shows reduced reflectance at these wavelengths. To isolate the infrared signal, we made a custom 3D-printed mount for an IR-pass filter, turning the camera into an IR-specific device. This approach provided a cost-effective and technically viable solution to our unique challenge.
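The idea behind the IR channel can be illustrated with a simple reflectance check: pixels that look bright in visible light but dim in near-infrared are snow candidates. The thresholds below are illustrative placeholders, not calibrated values, and the two frames are assumed to be roughly aligned:

```python
import numpy as np

def snow_candidate_mask(visible, nir, vis_thresh=0.7, nir_thresh=0.4):
    """Flag pixels that are bright in visible light but dim in NIR.

    visible, nir: 2D arrays of normalized intensities in [0, 1] from the
    two co-mounted cameras. Snow reflects strongly in visible light but
    shows reduced reflectance near 900 nm.
    """
    return (visible > vis_thresh) & (nir < nir_thresh)

vis = np.array([[0.9, 0.9], [0.2, 0.8]])
ir = np.array([[0.3, 0.8], [0.3, 0.2]])
# Only pixels that are bright in visible AND dark in NIR survive.
mask = snow_candidate_mask(vis, ir)
```

A mask like this could gate or re-weight the SSD detections from the visible-light camera.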
Accomplishments that we're proud of
In developing EyeSee, the two of us combined our unique strengths and perspectives, making the project a deeply personal and collaborative achievement. Overcoming technical and design challenges together, we transformed an innovative concept into a practical solution for the visually impaired.
What we learned
Throughout the project, we gained valuable insights into the iterative process of prototyping and design. Technically, we advanced our expertise in Raspberry Pi programming and in using PyTorch for model testing and training. We learned the importance of breaking down complex ideas into manageable steps, allowing for effective implementation. Moreover, we realized the critical importance of time management in bringing ideas to fruition.
What's next for EyeSee: Enhancing Winter Safety for the Visually Impaired
We plan to integrate still image depth estimation, which would allow us to provide varying levels of warning based on how far away the snow or hazard is, enhancing the user's situational awareness and safety.
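The planned distance-graded alerts could look something like the mapping below; the thresholds and level names are hypothetical, chosen only to show the idea:

```python
def warning_level(distance_m):
    """Map an estimated hazard distance (meters) to a warning level.

    Illustrative thresholds for the planned depth-estimation feature:
    closer snow produces a more urgent alert (e.g. faster buzzer pulses).
    """
    if distance_m < 1.0:
        return "urgent"   # hazard immediately underfoot
    if distance_m < 3.0:
        return "caution"  # hazard within a few steps
    return "notice"       # distant hazard, gentle heads-up
```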
Built With
- python
- pytorch
- raspberry-pi
- torchvision.models.detection