We have a classmate who is visually impaired. Although he is a capable student, he occasionally struggles with certain everyday tasks, such as identifying objects. We wanted to build him a device that would help him be more independent.

What it does

This application lets the user take pictures on their phone using voice commands. Each picture is then run through a trained neural network, which identifies what is in it, and the result is read back to the user as speech. It can identify both objects and text.
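The last step of that flow, turning whatever was recognized into a spoken description, could be sketched roughly as follows. This is a simplified illustration, not our production code: `describe_labels` is a hypothetical helper, and in the real app the resulting sentence is handed to the phone's text-to-speech engine.

```python
def describe_labels(labels):
    """Turn a list of recognized object labels into a sentence
    suitable for a text-to-speech engine.
    Hypothetical helper; labels would come from image recognition."""
    if not labels:
        return "I don't see anything I recognize."
    if len(labels) == 1:
        return "I see " + labels[0] + "."
    return "I see " + ", ".join(labels[:-1]) + " and " + labels[-1] + "."
```

For example, `describe_labels(["a dog", "grass"])` produces "I see a dog and grass."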

How we built it

We used several Google Cloud APIs: the Speech-to-Text API to capture the user's requests, the Natural Language API to figure out what they're asking, and the Vision API to observe the user's surroundings for them.
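The "figure out what they're asking" step can be approximated without the full Natural Language API. A minimal keyword-based intent matcher, just to illustrate the idea (all names and intents here are hypothetical, not what the app actually ships), might look like:

```python
# Sketch of mapping a transcribed request to an app action.
# In the real app, Google's Natural Language API handles this; the
# keyword table below is a hypothetical stand-in.
INTENTS = {
    "take": "CAPTURE_PHOTO",
    "picture": "CAPTURE_PHOTO",
    "read": "READ_TEXT",
    "text": "READ_TEXT",
    "what": "DESCRIBE_SCENE",
}

def parse_intent(transcript):
    """Return the first intent whose keyword appears in the transcript."""
    for word in transcript.lower().split():
        if word in INTENTS:
            return INTENTS[word]
    return "UNKNOWN"
```

So "take a picture" maps to `CAPTURE_PHOTO`, while an unmatched request falls through to `UNKNOWN` and the app can ask the user to repeat themselves.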

Challenges we ran into

1. One of the technical issues we ran into was continuously accepting the user's voice commands.

2. We had issues integrating multiple APIs together on Android.
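The continuous-listening problem in challenge 1 essentially comes down to restarting a one-shot recognizer after every utterance. A language-agnostic sketch of that loop, with `recognize_once` and `handle_command` as hypothetical stand-ins for the platform's speech recognizer and our command handler:

```python
def listen_continuously(recognize_once, handle_command, max_turns):
    """Keep a one-shot speech recognizer running in a loop.

    recognize_once: callable returning a transcript, or "" on silence.
    handle_command: callable that acts on a non-empty transcript.
    max_turns: stop after this many attempts (a real app would instead
    loop until the user exits).
    """
    for _ in range(max_turns):
        transcript = recognize_once()   # blocks until one utterance ends
        if transcript:                  # skip silence / empty results
            handle_command(transcript)
```

On Android the same pattern means re-invoking the recognizer from its completion callback rather than expecting a single call to listen forever.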

Accomplishments that we're proud of

  1. Three of our group members learned Android app development over the course of 36 hours.
  2. We figured out how to incorporate machine learning into a mobile app.
  3. We learned how to send API requests from Android Studio.

What we learned

Over the course of 36 hours, our group became adept at Android app development. We also learned how to use Google's Natural Language API as well as its Vision API. Beyond that, we learned how to work with the drivers on various smartphones.

What's next for eyeAware

  1. Integrate other hardware so the device is less bulky.
  2. Improve the quality of voice recognition, image recognition, and language processing.
  3. Add more functionality, such as telling the time, giving directions, etc.