Inspiration

We wanted to create a project that could help our community by assisting people with disabilities. We set out to tackle the everyday problems a visually impaired person runs into: moving around in a dynamic environment and telling different objects apart.

What it does

The walkie-bot helps a blind person navigate their surroundings and identify nearby objects using audio guidance. A visually impaired person can walk around different obstacles without bumping into them, guided by the buzzing sounds they hear. To identify a detected object, for example to judge its size or how to maneuver around it, the user simply looks in the object's direction (toward the buzzer that is sounding) and clicks a button. This snaps a photo and identifies the object, returning a description; a photo of a chair, for example, would output "seat, chair, furniture".

How we built it

In the first module, an Arduino is connected to two ultrasonic sensors and two buzzers. The ultrasonic sensors detect the proximity of objects within reach, and the buzzers respond with tones at different rates. It works like a parking sensor: the closer the object gets, the faster the audio output. The dual setup gives a wider field of view and helps the user tell whether an obstacle sits to their left or right.
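The Arduino code itself isn't reproduced here, but the heart of the parking-sensor behavior is a simple mapping from measured distance to beep rate. A minimal Python sketch of that mapping (the distances and intervals are illustrative, not our actual calibration):

```python
def beep_interval(distance_cm, near_cm=10, far_cm=200,
                  fastest_s=0.05, slowest_s=0.6):
    """Map a measured distance to the pause between beeps:
    nearer obstacle -> shorter pause -> faster beeping."""
    d = min(max(distance_cm, near_cm), far_cm)   # clamp to sensor range
    frac = (d - near_cm) / (far_cm - near_cm)    # 0.0 (near) .. 1.0 (far)
    return fastest_s + frac * (slowest_s - fastest_s)

# An object 30 cm away beeps far faster than one 150 cm away.
print(beep_interval(30))    # ~0.11 s between beeps
print(beep_interval(150))   # ~0.46 s between beeps
```

With two sensors, the same mapping simply runs once per side, each driving its own buzzer.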

A Raspberry Pi was connected to a camera module and a button. The camera module sits on a head strap, which lets the person look at the object they want to identify and press the button connected to the Pi. This snaps a photo of the object, sends it to a server that runs it through Amazon's Rekognition service, and receives a JSON file listing the objects Rekognition believes appear in the photo. We parse this file, choose a specific word to output, and speak it through the 3.5 mm headphone jack using text-to-speech.
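A minimal sketch of that button-to-server round trip, assuming the picamera and gpiozero libraries; the GPIO pin and endpoint URL below are placeholders, not our actual wiring or server:

```python
from signal import pause

import requests
from gpiozero import Button
from picamera import PiCamera

SERVER_URL = "http://example.com/identify"   # hypothetical upload endpoint
camera = PiCamera()
button = Button(17)                          # hypothetical: button on GPIO 17

def capture_and_send():
    """Snap a photo and POST it to the recognition server."""
    camera.capture("/tmp/snapshot.jpg")
    with open("/tmp/snapshot.jpg", "rb") as f:
        resp = requests.post(SERVER_URL, files={"image": f})
    print(resp.json())                       # Rekognition-style label JSON

button.when_pressed = capture_and_send
pause()                                      # wait for button presses
```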

Challenges we ran into

  • Identifying a malfunction in one of the ultrasonic sensors.
  • Getting both ultrasonic sensors to read and both buzzers to output in parallel.
  • Learning how to parse JSON files to extract the data we needed (see the sketch after this list).
  • Learning how to use eSpeak to read the data from the JSON file as audio output.
  • Devising a way to mount the devices on a person using our limited resources.
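For the JSON and speech items above, here is a minimal sketch of how the parsing and audio output fit together, assuming the server relays Rekognition's DetectLabels response format and that the eSpeak command-line tool is installed:

```python
import json
import subprocess

def speak_top_labels(raw_json, max_labels=3):
    """Read the highest-confidence labels aloud via eSpeak."""
    data = json.loads(raw_json)
    # Rekognition's DetectLabels returns labels sorted by confidence, e.g.
    # {"Labels": [{"Name": "Chair", "Confidence": 98.1}, ...]}
    names = [label["Name"] for label in data.get("Labels", [])[:max_labels]]
    if names:
        subprocess.run(["espeak", ", ".join(names)])

speak_top_labels(
    '{"Labels": [{"Name": "Seat"}, {"Name": "Chair"}, {"Name": "Furniture"}]}'
)
```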

Accomplishments that we're proud of

We were able to navigate blindfolded, using only the buzzer input, to maneuver between various obstacles. The camera was able to recognize different objects and give descriptive output that lets a blind person approximate an object's size.

What we learned

We learned how to have the Arduino run two tasks simultaneously (reading the sensors and driving the buzzers) while still looping through the code sequentially. We learned how to use Python to send an image file from the Raspberry Pi to a server IP. We also learned how to parse a JSON file and have eSpeak read only the parts of the file we needed.

What's next for Walkie-Bot

There are many ways this project can be enhanced. For example, we would like to switch to a LiDAR sensor instead of ultrasonic sensors; this would allow for a wider field of view and better mapping ability. Ultrasonic sensors work best when perpendicular to the object being measured, and they have a very narrow field of view. We would also like to apply some image processing before sending the photo to the server for object detection, which should allow for more accurate results.
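As a rough illustration, that preprocessing could be as simple as downscaling and re-encoding the photo with Pillow before upload; this is a hypothetical sketch, not something we have implemented or benchmarked:

```python
from PIL import Image

def preprocess(path_in, path_out, max_side=1024):
    """Shrink and re-encode a snapshot before uploading it."""
    img = Image.open(path_in).convert("RGB")
    img.thumbnail((max_side, max_side))      # shrink, preserving aspect ratio
    img.save(path_out, "JPEG", quality=85)   # smaller file, faster upload

preprocess("/tmp/snapshot.jpg", "/tmp/snapshot_small.jpg")
```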
