Over 35 million Americans live with vision loss or blindness. We believe that with current technology we can offer these people the opportunity to live more independently. No one should have to stumble over obstacles or misplace their medicine because of a visual impairment.
What it does
OcuVision is an innovative solution that helps blind people carry out daily chores without the need for a caregiver by making it easier to identify important everyday items such as inhalers and medication bottles. In addition, the caregiver gets access to a live, continuously updated feed of the objects their patient interacts with.
How we built it
OcuVision uses an Amazon Echo Dot to capture trigger phrases that invoke an AWS Lambda function. The Lambda function publishes a message to Amazon Simple Queue Service (SQS); a local Python script polls the queue and runs a TensorFlow neural-network image classifier on the camera frame. The classification result is relayed back through SQS so that Alexa can speak a verbal identification message on the Echo Dot.
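The skill-side half of this pipeline can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the queue name is a placeholder, and the SQS client is passed in rather than created here. `build_alexa_response` uses the standard Alexa Skills Kit response JSON shape so the Echo Dot can speak the label out loud.

```python
import json

# Placeholder queue name -- the real OcuVision queue is not named in this write-up.
REQUEST_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ocuvision-requests"


def enqueue_identification_request(sqs_client, queue_url: str) -> None:
    """Called from the Lambda handler: publish a 'classify the current camera
    frame' message that the local Python worker will pick up."""
    sqs_client.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps({"action": "identify"}),
    )


def build_alexa_response(label: str) -> dict:
    """Wrap a classifier label in the Alexa Skills Kit response format so
    Alexa relays the identification message verbally."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": f"I see a {label}.",
            },
            "shouldEndSession": True,
        },
    }
```

In practice the Lambda handler would call `enqueue_identification_request` with a `boto3` SQS client when the trigger intent fires, then read the result message off a second queue and return `build_alexa_response(label)` to the Alexa service.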
Challenges we ran into
It was difficult to navigate the AWS Lambda console and the Alexa Skills Kit developer modules at the same time.
Accomplishments that we're proud of
We were able to successfully integrate the Alexa API, AWS, and TensorFlow into a user-friendly chain of events.
What we learned
We learned how to connect Alexa skills with Lambda functions and how to communicate with the Simple Queue Service from our local Python scripts.
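The local-script side of that SQS communication might look like the sketch below. It is an assumption-laden illustration: `boto3` is imported lazily inside the worker loop so the file stays importable without AWS installed, the result queue URL is a hypothetical derivation, and the TensorFlow classifier is passed in as a plain callable rather than reproduced here.

```python
import json


def parse_request(body: str) -> bool:
    """Return True when an SQS message body asks for an identification run."""
    try:
        return json.loads(body).get("action") == "identify"
    except json.JSONDecodeError:
        return False


def poll_and_classify(queue_url: str, classify_frame) -> None:
    """Long-poll the request queue; on each 'identify' message, run the
    classifier (e.g. the TensorFlow model, passed in as `classify_frame`)
    and push the label onto a result queue for the Alexa skill to read."""
    import boto3  # assumed dependency; needs AWS credentials at runtime

    sqs = boto3.client("sqs")
    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            if parse_request(msg["Body"]):
                label = classify_frame()
                sqs.send_message(
                    QueueUrl=queue_url + "-results",  # hypothetical result queue
                    MessageBody=json.dumps({"label": label}),
                )
            # Delete the message so it is not redelivered after the
            # visibility timeout expires.
            sqs.delete_message(
                QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
            )
```

Long polling (`WaitTimeSeconds=20`) keeps the script responsive without hammering SQS with empty receive calls.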
What's next for OcuVision
Next steps include making the product more accessible by capturing images from a source smaller than a laptop camera, such as a Raspberry Pi or another compact camera module, so the system can be deployed as a practical standalone product.
Table 29, Thomas Jefferson High School for Science and Technology