We wanted to create something for people with visual impairments. Blind people cannot see the world around them, so they rely on their other senses, hearing and touch, to form an image of it. We tried to build an application that describes their surroundings, so they can get a clearer picture of the real world.

What it does

The app accesses the phone's camera, uses TensorFlow to recognize objects in the picture, forms descriptive sentences about them, and then speaks those sentences aloud.
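As a rough sketch of the sentence-forming step: assuming the TensorFlow detector hands back a list of (label, confidence) pairs (a hypothetical interface; the function name `describe` and the `min_score` threshold are our own illustration, not the app's actual code), the spoken description could be assembled like this:

```python
from collections import Counter

def describe(detections, min_score=0.5):
    """Turn raw detections [(label, confidence), ...] into a sentence
    suitable for text-to-speech."""
    # Keep only detections the model is reasonably confident about.
    labels = [label for label, score in detections if score >= min_score]
    if not labels:
        return "I don't see anything I recognize."

    # Count repeated objects so "chair, chair" becomes "2 chairs".
    counts = Counter(labels)
    parts = [f"{n} {label}{'s' if n > 1 else ''}" for label, n in counts.items()]

    # Join the pieces into a natural-sounding list: "a, b and c".
    if len(parts) == 1:
        listing = parts[0]
    else:
        listing = ", ".join(parts[:-1]) + " and " + parts[-1]
    return f"I can see {listing} in front of you."
```

The resulting string would then be passed to the platform's text-to-speech engine to be read aloud.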

How we built it

We used a TensorFlow model for object recognition, with Microsoft Azure providing storage and handling the heavier computation.

Challenges we ran into

Training the model takes a very long time because it is so complex, so we had to scale it down, which made the results less accurate.

Accomplishments that we are proud of

We managed to fix our bugs and get the app working despite the limited time. We lost about 5 hours while the algorithm was being configured and trained, but we caught up afterwards and finalised the project. With this, we can give blind people a much clearer image of the world.

What's next for Eye of Horus

Training the algorithm properly, and giving feedback in real time from a continuous camera stream instead of processing separate pictures.
