We were inspired by visually impaired people, particularly those with poor long-range vision.

What it does

It captures footage from the computer's webcam and describes what it sees as text, organized by how prevalent each object is in the scene: more significant objects are rendered in larger text. It also converts the most prevalent objects it sees into speech.
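The prevalence-to-text-size mapping above can be sketched in plain Java. This is a minimal illustration, not the project's actual code: the class and method names are hypothetical, the confidence scores stand in for whatever the image-recognition step returns, and the 12–48 pt range is an assumed choice.

```java
import java.util.*;

// Hypothetical sketch: given label -> confidence scores from image recognition,
// sort by prevalence and map each score to a font size so that more
// significant objects render in larger text.
public class LabelLayout {

    static final int MIN_FONT = 12;  // assumed smallest text size
    static final int MAX_FONT = 48;  // assumed largest text size

    // Returns labels ordered most-prevalent first, each paired with a font size.
    static List<Map.Entry<String, Integer>> sizeLabels(Map<String, Double> confidences) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        confidences.entrySet().stream()
            .sorted((a, b) -> Double.compare(b.getValue(), a.getValue()))
            .forEach(e -> {
                // Linear interpolation: confidence 0.0 -> MIN_FONT, 1.0 -> MAX_FONT.
                int size = (int) Math.round(MIN_FONT + e.getValue() * (MAX_FONT - MIN_FONT));
                out.add(Map.entry(e.getKey(), size));
            });
        return out;
    }

    public static void main(String[] args) {
        Map<String, Double> scores = Map.of("dog", 0.95, "grass", 0.60, "ball", 0.30);
        for (Map.Entry<String, Integer> e : sizeLabels(scores)) {
            System.out.println(e.getKey() + " " + e.getValue());
        }
    }
}
```

The first entry of the sorted list (here "dog") would also be the label handed to the speech synthesizer.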

How we built it

We built the project in Java using Eclipse, with Swing for the UI, the Clarifai API for image recognition, webcam-capture for camera access, and FreeTTS for speech synthesis.

Challenges we ran into

We struggled with camera drivers and dependency management, and we had to learn many new APIs in a short amount of time.

Accomplishments that we're proud of

We are proud of integrating several independent dependencies into a single working product.

What we learned

We learned several new APIs, sharpened our debugging skills, and picked up proper dependency management.

What's next for EZSee - A tool for the visually impaired

The repository is up on GitHub for anyone to learn from and build a better, less hacky version.

Built With

clarifai, free-tts, java, swing, webcam-capture