My partner and I empathize with people who have disabilities, and we are passionate about doing our part to make the world more accessible for them. This app helps blind people navigate the world around them, making everyday life easier and less dangerous.
What it Does
Droid Eyes helps people with vision loss move through the world more safely. By combining Google Cloud Vision with the voice capabilities of accessible smartphones, the app narrates a person's path, either out loud or through headphones, depending on preference. For example, if a blind person approaches a red light, the app tells them to stop and wait until it turns green.
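The narration step can be sketched as a simple mapping from a detected label to a spoken prompt. This is an illustrative sketch, not the app's actual code: the label strings, the `promptFor` helper, and the traffic-light rule are assumptions, and on the device the resulting text would be handed to Android's TextToSpeech engine.

```java
import java.util.Locale;

public class Narrator {
    // Map a label returned by image recognition to a spoken prompt.
    // Labels and messages here are illustrative examples only.
    static String promptFor(String label) {
        switch (label.toLowerCase(Locale.ROOT)) {
            case "traffic light": return "Traffic light ahead. Wait for green.";
            case "crosswalk":     return "Crosswalk ahead.";
            case "car":           return "Caution: vehicle nearby.";
            default:              return "Ahead: " + label;
        }
    }

    public static void main(String[] args) {
        System.out.println(promptFor("Traffic Light"));
    }
}
```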
How We Built it
Hardware: We first created a CAD design for a case to hold the phone running the app, with cutouts for the straps, speaker, and camera. The design was laser cut and assembled with a hot glue gun. For the straps, we repurposed the handles of a reusable shopping bag to hold the case. Our initial goal was to build an entirely new device around a Raspberry Pi, but we decided a standalone application would have greater reach.

Software: We prototyped a working application in the Android development environment. Image recognition happens server-side through the Google Cloud Vision API. To communicate with the API, we used several dependencies on the Android side, including Apache Commons and Volley. The application works over both Wi-Fi and cellular data so it remains practical in most scenarios.
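The server-side recognition boils down to POSTing a base64-encoded image to the Vision API's `images:annotate` REST endpoint. Below is a minimal sketch of how such a request body can be assembled; the field names follow the public Cloud Vision REST format, but the helper class and method are assumptions, and the Volley plumbing that would actually send the request is omitted.

```java
import java.util.Base64;

public class VisionRequest {
    // Build a single-image annotate request body for the Cloud Vision
    // REST API. The JSON is assembled by hand here for illustration;
    // a real client would use a JSON library and attach an API key.
    static String buildRequestBody(byte[] jpegBytes) {
        String encoded = Base64.getEncoder().encodeToString(jpegBytes);
        return "{\"requests\":[{"
                + "\"image\":{\"content\":\"" + encoded + "\"},"
                + "\"features\":[{\"type\":\"LABEL_DETECTION\",\"maxResults\":5}]"
                + "}]}";
    }

    public static void main(String[] args) {
        System.out.println(buildRequestBody(new byte[]{1, 2, 3}));
    }
}
```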
Challenges We Ran Into
Hardware: We originally intended to 3D print the case from our CAD design, but when we exported the file to the MakerBot software, none of the case's details appeared. After several attempts to fix the issue, we used the same design but laser cut it instead.

Software: Uploading pictures and identifying the objects in them was slow, because the Android API we used only supported batch photo uploads. Each batch took longer to transfer and forced the server to examine sixteen photos instead of one. Some of our dependencies were also outdated, which broke the Android build. Getting the camera to capture frames autonomously was another struggle we faced.
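One way to keep a continuous camera feed from piling up into slow batches is to allow only one upload in flight at a time, dropping frames while a request is pending. This is a hypothetical sketch of that idea (the `FrameGate` class is our own illustration, not the project's code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class FrameGate {
    // Permit at most one frame upload in flight at a time, so each
    // capture produces a single-image request rather than queueing
    // into a sixteen-image batch. Frames arriving while a request is
    // pending are simply skipped.
    private final AtomicBoolean inFlight = new AtomicBoolean(false);

    // Returns true if the caller may upload this frame now.
    boolean tryAcquire() { return inFlight.compareAndSet(false, true); }

    // Called when the server's response (or an error) arrives.
    void release() { inFlight.set(false); }
}
```

The atomic compare-and-set keeps the gate safe even if camera callbacks and network callbacks run on different threads.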
Accomplishments That We’re Proud of
When we entered this hackathon, this app was barely an idea. Through many hours of intense work, we built something that we hope can change people's lives for the better. We are proud of that, and of everything we learned personally along the way.
What We Learned
On the hardware side, we learned how to laser cut parts, which will be useful in the future for creating pieces that assemble easily and save us the time of 3D printing. On the software side, we used Google Cloud Vision for the first time; this API identifies the elements in each picture our application captures.
What’s Next for Droid Eyes?
We hope to expand on this idea in the future, making it available on more Android phones and on iOS as well. As we bring the product to more devices, we want to keep it open source so that many people can contribute and keep improving it. We would also like to 3D print a case instead of laser cutting the pieces and gluing them together.