Visually-impaired individuals may have trouble gauging what's immediately in front of them, or identifying exactly what they're holding.

What it does

Our software accesses your webcam, takes a picture, and announces what you're holding. This can help identify objects beyond what someone could determine by feeling for texture or shape alone, which matters because some objects may be dangerous to hold.

How we built it

This project was built as a stand-alone Java application, using several resources provided by AWS to streamline the whole process.
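As a rough illustration of the kind of logic such an app needs, the sketch below shows a hypothetical post-processing step in Java: given labels and confidence scores (as an image-recognition service like AWS Rekognition might return), it picks the most confident label above a threshold and builds the sentence the app would speak aloud. The class and method names (`LabelSpeaker`, `phraseFor`) are our own illustration, not the project's actual code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LabelSpeaker {

    // Returns the phrase to speak for the most confident label at or above
    // minConfidence, or a fallback message if no label is confident enough.
    public static String phraseFor(Map<String, Double> labelConfidences, double minConfidence) {
        String best = null;
        double bestScore = minConfidence;
        for (Map.Entry<String, Double> e : labelConfidences.entrySet()) {
            if (e.getValue() >= bestScore) {
                best = e.getKey();
                bestScore = e.getValue();
            }
        }
        return best == null
                ? "Sorry, I couldn't recognize the object."
                : "You are holding: " + best.toLowerCase();
    }

    public static void main(String[] args) {
        // Example scores in the 0-100 range, as Rekognition reports them.
        Map<String, Double> labels = new LinkedHashMap<>();
        labels.put("Mug", 97.2);
        labels.put("Cup", 91.5);
        System.out.println(phraseFor(labels, 80.0)); // "You are holding: mug"
    }
}
```

In the real application this string would be handed to a text-to-speech layer; keeping the label-selection logic separate from both the camera capture and the cloud call makes it easy to test in isolation.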

Challenges we ran into

We originally wanted to use Amazon Alexa, but the wait time to publish a skill was too long to complete the project by the deadline. We also considered building for Android, but pairing different devices while trying to learn new languages became too much to handle.

Accomplishments that we're proud of

This is the first hackathon for everyone on the team except one person, who has only been to one other. For most of us, actually finishing a product in such a short time on our first attempt is something we're proud of.

What we learned

What's next for BlindRecognition

Phone and Alexa functionality would make this app much more useful for individuals, giving them mobile, on-the-move recognition. Combining that with a Myo armband would enable pure gesture activation.

Built With
