Inspiration

The app is inspired by advances in machine learning that can be leveraged for social good: deep neural networks, inexpensive high-quality cameras, and high-speed cloud computing.

What it does

It assists the visually impaired in navigating their surroundings by telling them about common obstacles, like tables and chairs, in their path. It also recognizes people they know personally and notifies them when a friend is nearby. The app detects the object and gives an approximation of where the object or person is (left or right). It does this by capturing pictures at two-second intervals, analyzing each image in the cloud, and detecting common objects and people in it. The user can either attach the phone to their cane or wear it around their neck.
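To make the left/right cue concrete, here is a minimal sketch of how an object's side could be derived from its detected bounding box; the function name and the use of normalized coordinates are illustrative assumptions, not the app's actual code.

```python
def horizontal_position(normalized_vertices):
    """Classify a detected object as 'left' or 'right' of the user.

    Assumes normalized bounding-box vertices (x in [0, 1]), such as
    those returned by a cloud vision API (an assumption for this sketch).
    """
    center_x = sum(v.x for v in normalized_vertices) / len(normalized_vertices)
    return "left" if center_x < 0.5 else "right"
```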

How we built it

The product uses an Android app that captures images and sends them to a server. The server is a Python module hosted on Google Cloud. The module uses google-cloud-vision and OpenCV to detect objects and identify people, and uses text-to-speech to notify the user.
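As a rough sketch of the server-side detection step, the following uses the google-cloud-vision Python client's object localization call; the function name and return shape are our illustration, not the project's actual module.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def detect_objects(image_bytes):
    """Run Google Cloud Vision object localization on one captured frame.

    Returns (name, confidence, normalized bounding-box vertices) for
    each detected object.
    """
    image = vision.Image(content=image_bytes)
    response = client.object_localization(image=image)
    return [
        (obj.name, obj.score, obj.bounding_poly.normalized_vertices)
        for obj in response.localized_object_annotations
    ]
```

The normalized vertices returned here could feed directly into a left/right helper like the one sketched above.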

Challenges we ran into

Recognizing people when multiple people are in the frame. Complications while running on Google Cloud, primarily because of our inexperience with the platform.

Accomplishments that we're proud of

The app recognizes tables and chairs with high accuracy and precision, and recognizes people fairly well.

What we learned

Using OpenCV to detect faces. Running a Python API on Google Cloud. Running text-to-speech on phones.
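For reference, a minimal version of the OpenCV face detection we learned to use, with the Haar cascade that ships with OpenCV; the parameter values are common defaults rather than our tuned settings.

```python
import cv2

# Frontal-face Haar cascade bundled with the OpenCV distribution.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(image_path):
    """Return (x, y, w, h) bounding boxes for faces found in an image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```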

What's next for Open Iris

Right now, the domain of objects that Open Iris identifies is limited to tables and chairs. It can be extended to cover other common objects like garbage cans, trees, dogs, and stairs. The accuracy of people detection can be improved. Detecting depth in images would let us tell the user the number of "steps" to a given obstacle.
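To illustrate the "steps" idea: once a depth estimate gives a distance to the obstacle, turning it into a step count is a simple division. The average stride length below is an assumed placeholder, and the depth estimation that would supply the distance remains future work.

```python
import math

def steps_to_obstacle(distance_m, stride_m=0.7):
    """Convert an estimated distance in meters to a spoken step count.

    stride_m is an assumed average stride length (placeholder); the
    depth estimate supplying distance_m is future work.
    """
    return max(1, math.ceil(distance_m / stride_m))
```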

Built With

android, google-cloud, google-cloud-vision, opencv, python
