Inspiration
As someone who grew up with a visual impairment in one of my eyes, I know firsthand how difficult it can be to navigate your surroundings without external help. This project is mainly inspired by assisted parking technology and Spider-Man's "spidey-sense", as neither requires direct sight to feed information about the surroundings to the user.
What it does
This project provides low-cost, low-training assistive tech for the visually impaired around the world. Using TinyML and a variety of sensors, Buzzy Sense translates information about the surroundings into buzzes and clicks generated by a trio of buzzers around the user's head. Our model is currently trained to recognize everyday obstacles the user may face, such as ascending stairs, descending stairs, tables, and chairs, and notifies the user with customizable buzz patterns. Any obstacle outside our model is caught by a pair of ultrasonic sensors near the user's temples, plus an IR obstacle sensor that detects sudden changes in elevation near the user's feet (a pothole, for example). This combination lets Buzzy Sense identify the common obstacles a visually impaired user may struggle with in an unfamiliar setting while also keeping a general overview of the surrounding environment.
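To give a feel for the ultrasonic fallback layer, here is a minimal sketch of how one trig/echo ultrasonic sensor could drive one buzzer on an Arduino Uno. The pin numbers, the 100 cm warning threshold, and the click-rate mapping are illustrative assumptions, not our exact firmware.

```cpp
// Sketch: one trig/echo ultrasonic sensor driving one buzzer.
// Pins and thresholds below are illustrative assumptions.
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;
const int BUZZER_PIN = 3;

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(BUZZER_PIN, OUTPUT);
}

// Return the measured distance in centimetres
long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);   // 10 µs trigger pulse
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000); // echo time in µs, 30 ms timeout
  return duration * 0.034 / 2;                    // speed of sound ≈ 0.034 cm/µs
}

void loop() {
  long cm = readDistanceCm();
  if (cm > 0 && cm < 100) {
    tone(BUZZER_PIN, 2000, 50);       // short click when an obstacle is near
    delay(map(cm, 0, 100, 50, 400));  // closer obstacle -> faster clicks
  } else {
    delay(200);
  }
}
```

Mapping click rate to distance mirrors how assisted-parking sensors convey proximity without requiring sight.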
How we built it
Using sample images of stairs, tables, and chairs from open datasets on Kaggle, we trained and ran our ML model using Edge Impulse and Qualcomm's TinyML Arduino Kit. To fine-tune the model, we added over 200 images of our listed obstacles taken around the Myhal building. We then loaded the model onto our TinyML kit and integrated the general sensor network into our code and hardware.
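For context, running an Edge Impulse model exported as an Arduino library on the Nano 33 BLE Sense looks roughly like the sketch below. The header name `buzzy_sense_inferencing.h` is hypothetical, and filling the feature buffer from the camera frame is omitted.

```cpp
// Sketch of on-device inference with an Edge Impulse Arduino library export.
// The header name below is hypothetical; camera capture is omitted.
#include <buzzy_sense_inferencing.h>

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback the Edge Impulse SDK uses to pull feature data in chunks
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  // ...capture a camera frame and write it into `features` here...

  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &get_feature_data;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return;

  // Report the highest-confidence label, e.g. "stairs_up", "table", "chair"
  size_t best = 0;
  for (size_t i = 1; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
    if (result.classification[i].value > result.classification[best].value) best = i;
  }
  Serial.println(result.classification[best].label);
}
```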
Challenges we ran into
Most of our challenges were hardware related, along with some complications in learning how to effectively incorporate ML into the rest of our hardware. We were limited to one of each component from the MakeUofT hardware library, and not every component we wanted was available. The components we did use were generally not ideal for our application due to their size and limited proximity-detection range. After much tinkering, however, we managed to create an effective method of detecting the overall surroundings while giving the user enough haptic feedback to navigate with little outside assistance.
As for incorporating ML, the TinyML shield took up every pin on the kit's Arduino Nano 33 BLE Sense, while our buzzer and sensor network took up every pin on our Arduino Uno. We wanted one cohesive script so the two parts would work in conjunction to provide the best experience for the user. We initially tried Bluetooth, but it failed to connect correctly and we were pressed for time; eventually we found a workable solution in a direct connection between the two boards, sketched below.
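Here is a minimal sketch of what the Uno side of that board-to-board link could look like, assuming a one-byte protocol where the Nano writes a class index per detection over UART. The pins, baud rate, and `playBuzzPattern` helper are illustrative assumptions (and a 5 V Uno talking to a 3.3 V Nano would also need level shifting in practice).

```cpp
// Sketch of the Uno receiver side of a one-byte serial link: the Nano sends
// a single class-index byte per detection, and the Uno maps it to a buzz
// pattern. Pin choices, baud rate, and the pattern mapping are illustrative.
#include <SoftwareSerial.h>

SoftwareSerial nanoLink(2, 3);  // RX, TX; keeps hardware Serial free for USB debug
const int BUZZER_PIN = 5;

// Hypothetical helper: one short buzz per class index,
// e.g. 0 = stairs up, 1 = stairs down, 2 = table, 3 = chair
void playBuzzPattern(uint8_t classIndex) {
  for (uint8_t i = 0; i <= classIndex; i++) {
    tone(BUZZER_PIN, 1500, 80);
    delay(150);
  }
}

void setup() {
  pinMode(BUZZER_PIN, OUTPUT);
  nanoLink.begin(9600);
}

void loop() {
  if (nanoLink.available()) {
    playBuzzPattern((uint8_t)nanoLink.read());
  }
}
```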
Accomplishments that we're proud of
The TinyML model proved quite adept at identifying our listed obstacles once we had put time and effort into learning how to build and train it effectively.
What we learned
We learned how to use Edge Impulse to build and generate models, and how to deploy them on an Arduino Nano. We also learned to better use C++, as most of our coding experience was in other languages.
What's next for Buzzy Sense
Building a better model with a larger range of identifiable objects is a high priority for us, as hazards vary greatly with the user's environment. Beyond that, letting the user customize the haptic feedback for each type of detected object would be useful, essentially allowing them to build their own library of haptic patterns for daily obstructions. This would most likely be done through an app that connects to Buzzy Sense and includes an audio screen-reader feature, so users can navigate its settings without the need for sight.