Inspiration

With the future of space colonization must come new methods and infrastructure to support citizens in far-reaching places. Blind and visually impaired people face challenges even in the simplest day-to-day tasks, and one of the most difficult is simply walking and moving around. Unable to see their surroundings, they cannot tell who or what is near them or how far away anything is, and being in an unfamiliar place without knowing what is around you can be frightening. Unfortunately, the most common tool available is the outdated and inefficient walking stick: users must continuously tap it in front of them until it makes contact with an unknown object, and they never learn when an object is approaching, how far away it is, or what it is. These problems will only be amplified in space, where walking sticks will not always work in unfamiliar, harsh conditions. For all of these reasons, we decided to create a device that guides the visually impaired and enhances their ability to walk safely.

What it does

Eyeonics is a machine learning-powered, computer vision glove that helps the visually impaired navigate their surroundings. The glove pairs with a Bluetooth headset or speaker and continuously tells the user what object is in front of them and how far away it is. By reporting both the object and its distance, Eyeonics acts as a digital eyesight aid for visually impaired people, converting visual data into audio.

How we built it

The computer vision glove was built from a Raspberry Pi 3, a plug-in USB Logitech camera, and an ultrasonic sensor. Every 5 seconds, the Raspberry Pi captures an image with the Logitech camera and measures the distance to the nearest object in inches using the ultrasonic sensor. The captured image is converted to base64 and sent to a Rust backend server along with the measured distance.

On receiving the data, the server decodes the base64 image and feeds it into a deep neural network, written in Python with Google's TensorFlow library, which analyzes the image for objects and returns the closest predicted object to the server. The server then stores the current datetime, distance, original image, and predicted object in a Firebase Realtime Database and returns the predicted object to the Raspberry Pi, which plays an audio message through the user's connected Bluetooth headphones or speaker announcing the object in front of them and how far away it is. These steps repeat continuously, keeping the user aware of nearby objects and of their surroundings in general.
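To make the glove-side loop concrete, here is a minimal sketch under a few assumptions: OpenCV grabs frames from the USB camera, RPi.GPIO drives an HC-SR04-style trigger/echo ultrasonic sensor, and the Rust backend exposes a hypothetical /detect endpoint. The pin numbers, URL, response field, and the espeak text-to-speech call are illustrative stand-ins rather than our exact code.

```python
import base64
import subprocess
import time

import cv2                 # OpenCV: reads frames from the USB Logitech camera
import requests            # sends each capture to the Rust backend over HTTP
import RPi.GPIO as GPIO    # drives the HC-SR04-style ultrasonic sensor

BACKEND_URL = "http://<backend-host>:8000/detect"  # hypothetical endpoint
TRIG, ECHO = 23, 24                                # example GPIO pins

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_inches():
    """Pulse the trigger pin and time the echo to estimate distance."""
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    end = start
    while GPIO.input(ECHO) == 1:
        end = time.time()
    # Speed of sound is roughly 13,500 in/s; halve for the round trip.
    return (end - start) * 13500 / 2

camera = cv2.VideoCapture(0)  # the plug-in USB camera
while True:
    ok, frame = camera.read()
    if not ok:
        continue
    _, jpeg = cv2.imencode(".jpg", frame)
    payload = {
        "image": base64.b64encode(jpeg.tobytes()).decode("ascii"),
        "distance_in": read_distance_inches(),
    }
    # The backend is assumed to reply with the predicted object label.
    resp = requests.post(BACKEND_URL, json=payload, timeout=10)
    label = resp.json().get("object", "unknown")
    # Speak through the paired Bluetooth headset/speaker (espeak as an example TTS).
    subprocess.run(["espeak", f"{label}, {payload['distance_in']:.0f} inches ahead"])
    time.sleep(5)  # repeat every 5 seconds
```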

Challenges we ran into

Training the model effectively was difficult; it took many iterations before it could identify images reliably. Wiring together the model code (written in Python) and the backend server (written in Rust) was also challenging, and it took a lot of research and trial-and-error to set up the foreign function interface correctly. Finally, getting the Logitech camera to capture images and getting the Bluetooth functionality on the Raspberry Pi to play audio messages both took considerable effort.
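To give a sense of the Python side of that interface, here is a minimal sketch of an entry point the Rust server could call through the FFI layer. It assumes a pretrained Keras MobileNetV2 classifier standing in for the model we trained, and the function name classify_image and its single-label return value are illustrative assumptions rather than our actual API.

```python
import base64
import io

import numpy as np
from PIL import Image
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input)

# A pretrained classifier stands in here for the model we trained ourselves.
_model = MobileNetV2(weights="imagenet")

def classify_image(image_b64: str) -> str:
    """Entry point the Rust server invokes through the FFI layer.

    Takes the base64-encoded JPEG sent by the glove and returns the
    label of the most confident prediction.
    """
    raw = base64.b64decode(image_b64)
    img = Image.open(io.BytesIO(raw)).convert("RGB").resize((224, 224))
    batch = preprocess_input(np.expand_dims(np.asarray(img, dtype=np.float32), 0))
    preds = _model.predict(batch)
    _, label, _ = decode_predictions(preds, top=1)[0][0]
    return label
```

On the Rust side, a binding layer such as pyo3 (or even invoking the interpreter as a subprocess) can hand the base64 string to a function like this and read back the label; that glue is only sketched here.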

Accomplishments that we're proud of

We are proud to have designed and built a working device that can assist blind and visually impaired people in their day-to-day activities and navigation. It was our team's first time creating a machine learning detection model, and our first time building a glove-based device. We are also proud that the product works fluidly across the glove, the backend server, and the machine learning model, continuously alerting the user. Lastly, we are proud to have gotten Bluetooth working so that users do not need a wired headset to use Eyeonics.

What we learned

While working on this project, we learned how to integrate a machine learning model into a backend server and how to build a functional embedded system that can be worn and used on its own. We also learned a lot about the challenges blind and visually impaired people face and how outdated and inefficient current solutions are. Most importantly, we saw that pairing a computer vision glove with realtime artificial intelligence can greatly reduce the reliance on eyesight for day-to-day tasks such as walking.

What's next for Eyeonics

Eyeonics has real potential in both today's and tomorrow's world. People with visual impairments ranging from mild astigmatism to blindness can use it daily to help navigate their surroundings. It could be enhanced with haptic feedback that vibrates the glove when an object is detected, and with an option to hear about the surroundings only when a button on the glove is pressed. If deployed in space, the product offers a lot of functionality, is far more durable than its alternatives, and was designed with cost-effectiveness in mind. Future space colonies could readily adopt this technology: it would not only give blind residents a practical way to move around, but also serve as a means of detecting objects in treacherous climate conditions and give scientists more flexibility in extraterrestrial research. For now, Eyeonics is ready to revolutionize the way the blind and visually impaired get around and navigate their surroundings!
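As a purely hypothetical sketch of those two proposed enhancements (not something we have built yet), the snippet below assumes a push button and a small vibration motor wired to example GPIO pins on the Raspberry Pi; the pin numbers and the detect_and_speak callback are placeholders for the existing capture-and-announce cycle.

```python
import time

import RPi.GPIO as GPIO

BUTTON_PIN, MOTOR_PIN = 17, 27   # hypothetical GPIO assignments

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(MOTOR_PIN, GPIO.OUT)

def buzz(duration=0.3):
    """Pulse a small vibration motor to signal a detected object."""
    GPIO.output(MOTOR_PIN, True)
    time.sleep(duration)
    GPIO.output(MOTOR_PIN, False)

def announce_on_demand(detect_and_speak):
    """Run the existing capture/announce cycle only when the button is pressed."""
    while True:
        if GPIO.input(BUTTON_PIN) == GPIO.LOW:  # active-low button press
            buzz()
            detect_and_speak()
        time.sleep(0.1)
```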

Built With

Raspberry Pi 3, Logitech USB camera, ultrasonic sensor, Python, TensorFlow, Rust, Firebase