Inspiration
From walking with a blind man a few times a week, you begin to realize how vulnerable visually impaired people are to changes in their environment. From trash cans to parked cars, every obstacle becomes a herculean task to identify and overcome. Bumping into obstacles is part and parcel of daily life wherever you live, but falls are a completely different story: where an able-bodied person might only be embarrassed, a visually impaired person could get hurt, become lost, or lose valuables. We can avoid these issues altogether by changing how blind people move. Few products help visually impaired people navigate the world. Canes are commonly used, but provide little insight beyond the few feet in front of the user and require extreme finesse to prepare for any curbs or obstacles in the path. Guide dogs improve on the range of a cane; they are trained to sense dangerous traffic and ledges that could hurt their charge. But although better for navigation, guide dogs cost 25 to 50 thousand dollars in initial fees alone and require ongoing training and upkeep, an extreme burden on any household. Outside of these kinds of solutions, visually impaired people are left dependent on others for help. We developed Digi-Sense to provide a technical solution: translating a visual world into one of sound and tactile feedback.
What it does
Digi-Sense uses a camera, an ultrasonic sensor, and AWS Rekognition image processing to detect obstructions in real time and warn the user where they are. In our idealized model, a Bluetooth speaker would announce what each object is, while a directional haptic-feedback belt would signal where it sits relative to the user. Worn as a fanny pack and belt around the waist, Digi-Sense is compact and portable while it interprets the world. Our current prototype is limited to a stationary Raspberry Pi that prints its interpretation of the outside world to the command line.
How we built it
We chose a Raspberry Pi to house our product, taking advantage of the portability of a battery-powered Pi to act as a wearable device for our user. Our device has two main jobs, which we split into asynchronous functions: object detection and object identification. We detect objects with an ultrasonic sensor on a circuit connected to GPIO pins on the Pi; by reading the sensor's output signal we can time the echo and calculate the distance it traveled. To identify objects, we use a web camera connected to the Pi via USB and run a bash command to take a photo and write it to a file on the device. The photo is then sent to AWS Rekognition via its PHP SDK, which returns a PHP associative array of likely labels for objects in the image. Our goal was to sort these labels by which third of the image they fell in -- left, center, or right -- identify the largest objects, and announce any blockage through a Bluetooth speaker the user has on them. We were unable to obtain a Bluetooth speaker for our project, and we did not finish parsing the information returned from the AWS Rekognition API; consequently, our prototype design does not yet provide an output for the user.
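The two steps above can be sketched in Python (our GPIO-processing language; the actual Rekognition call used the PHP SDK). This is a minimal, hypothetical illustration with function names of our own choosing: one helper converts an ultrasonic echo-pulse width into distance, and the other shows the left/center/right label parsing we planned but did not finish, assuming a Rekognition `DetectLabels`-style response with `Instances` and `BoundingBox` fields.

```python
# Hypothetical sketch of Digi-Sense's two core steps; not the shipped code.

def pulse_to_distance_cm(pulse_seconds: float) -> float:
    """Convert an ultrasonic echo pulse width (seconds) to distance in cm.

    Sound travels ~34300 cm/s at room temperature; the pulse covers the
    round trip to the obstacle and back, so we halve the result.
    """
    return pulse_seconds * 34300.0 / 2.0


def labels_by_third(response: dict, min_confidence: float = 80.0) -> dict:
    """Bucket Rekognition labels into horizontal thirds of the frame.

    For each third (left/center/right), keep the label whose bounding box
    has the largest area, so the biggest obstruction wins.
    """
    # (name, area) of the largest object seen so far in each third
    best = {"left": None, "center": None, "right": None}
    for label in response.get("Labels", []):
        if label.get("Confidence", 0.0) < min_confidence:
            continue
        for inst in label.get("Instances", []):
            box = inst["BoundingBox"]  # Left/Top/Width/Height as frame fractions
            center_x = box["Left"] + box["Width"] / 2.0
            if center_x < 1.0 / 3.0:
                third = "left"
            elif center_x < 2.0 / 3.0:
                third = "center"
            else:
                third = "right"
            area = box["Width"] * box["Height"]
            if best[third] is None or area > best[third][1]:
                best[third] = (label["Name"], area)
    # Return just the winning label name per third (None if the third is clear)
    return {third: (hit[0] if hit else None) for third, hit in best.items()}
```

In the idealized device, the distance reading would gate the alert ("something within 1 m") while the per-third labels would drive the speaker and haptic belt ("car on your left").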
Challenges we ran into
We had design problems when considering what type of platform was right for our idea. We initially thought of using only AWS Rekognition for object detection and identification, so we started designing a web app. This led to problems when we realized that we needed to accommodate a hardware interface with an ultrasonic sensor. We pivoted to a PHP-based local app that used the AWS Rekognition API, plus Python to measure GPIO input and output on our Raspberry Pi. We also had major challenges implementing the AWS Rekognition API, which demands intimate knowledge of the AWS software stack and detailed step-by-step configuration for a new user and app. We spent over five hours reading documentation and guides before asking returning mentors for help. The configuration process was difficult and heartbreaking, but in the end, we were proud and excited to put our hard-fought knowledge to use in our app.
Accomplishments that we are proud of
We are proud of our planning efforts. At the beginning of the hack, we spent a couple of hours coming up with ideas, problems, and potential solutions. We discussed problems we have seen impact people we know, and how we could help with those issues. We weren't sure where we wanted to go until William Evernden told us of a visually impaired man he had volunteered to guide and help with daily tasks. William had seen this man's struggles and his frustration with dependency, and he quickly convinced us of what to focus on during our time here. We spent hours studying how canes and guide dogs have impacted visually impaired people's lives, and how sometimes these tools weren't enough. We found that canes can be unwieldy indoors, and dogs can be costly or difficult to travel with. We had found a problem that we could solve.
What we learned
Our group had widely different levels of exposure to programming, design, and engineering. Cameron Lewis had experience in programming, William Evernden had experience in design and understanding problems, and Connor Howard had a background in circuits. Each of our depths of knowledge allowed us to teach and support each other. When we were considering which technologies to use, we could draw on Cameron's experience in PHP to learn how to connect our contributions. Connor designed the ultrasonic sensor circuit, and he and Cameron collaborated to connect it to the Raspberry Pi and the Python code that processed its readings. This collaboration allowed us to explore concepts new to us all: none of us had programmed in Python or used AWS in any form, but together we could research each topic and break the problem down into smaller pieces.
What's next for Digi-Sense
With this product, we would like to produce a companion device that specializes in adaptable pathing. Digi-Sense currently detects objects, but doesn't tell the user how to get around an obstacle. Using our current sensors along with GPS and street maps, we could provide the user with directions and alternative routes on the fly. This device would mainly handle smaller changes from construction and traffic that wouldn't be reported to a broad public audience.
Continuing our work with the visually impaired, we hope to build instruments that teach better cane skills and help users walk straighter. When a person becomes blind, training is required to learn how to use a cane and what to look out for. Rather than hiring an O&M Specialist for 70 to 150 dollars an hour, this device would observe the user's cane technique and path position and tell the user where they can improve. This solution would speed up skill acquisition and support our users while they learn to walk on their own.
Built With
- amazon-web-services
- bash-script
- php
- python
- raspberry-pi
- raspberry-pi-4
- rekognition
- shell-script
- ultrasonic-sensor
- webcam