My aunt has an eye disease called glaucoma, which has damaged the optic nerves in her eyes. The damage has caused severe vision loss, so she struggles to find everyday items around her. With recent advances in computer vision, it seemed reasonable that we could build her an artificial eye.

What it does

AEye is an artificial eye. Using a combination of the device's microphone and camera, a user can ask where an object is and be guided to it: moving around the room produces a "hot" or "cold" reading.
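The hot/cold guidance can be sketched as a simple feedback loop. This is an illustrative sketch, not the team's actual code: it assumes each camera frame yields a classifier confidence score in [0, 1] for the target object, and that rising confidence means the user is getting "warmer". The threshold values are made up.

```python
# Hypothetical sketch of AEye's hot/cold feedback loop (not the project's
# actual code). Assumes each frame gives a confidence score in [0, 1] for
# the target object; rising confidence means the user is getting warmer.

def feedback(prev_score: float, curr_score: float,
             found_threshold: float = 0.9, deadband: float = 0.02) -> str:
    """Map two successive confidence scores to an aural cue."""
    if curr_score >= found_threshold:
        return "found it"
    if curr_score - prev_score > deadband:
        return "warmer"
    if prev_score - curr_score > deadband:
        return "colder"
    return "keep moving"  # change too small to call either way
```

In practice the deadband keeps frame-to-frame noise in the classifier scores from flipping the cue back and forth.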

How we built it

We split into two teams: three people concentrating on the mobile app and UX design, and two on the backend and visual recognition training.

We trained multiple image classifiers on the Watson Visual Recognition service, which gives a reliable match when an image contains the object we're searching for. The service is wrapped by a Python web service that maps the desired object class to the associated classifier, hosted on a Radix domain.
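The class-to-classifier mapping might look like the following sketch. The classifier IDs and function names here are illustrative, not the project's actual identifiers, and the Watson Visual Recognition call itself is stubbed out; in the real service this logic would sit behind a small web endpoint.

```python
# Sketch of the backend's routing logic (illustrative names throughout):
# map the requested object class to the Watson classifier trained for it,
# then forward the image. The Watson call is a stub.

CLASSIFIER_IDS = {
    "keys": "keys_classifier_v1",       # made-up IDs for illustration
    "glasses": "glasses_classifier_v1",
}

def classify_with_watson(classifier_id: str, image_bytes: bytes) -> float:
    """Placeholder for the Watson Visual Recognition API call; would
    return the classifier's confidence for the target object."""
    return 0.0  # stub

def classify(object_class: str, image_bytes: bytes) -> float:
    """Route an image to the classifier associated with object_class."""
    try:
        classifier_id = CLASSIFIER_IDS[object_class]
    except KeyError:
        raise ValueError(f"no classifier trained for {object_class!r}")
    return classify_with_watson(classifier_id, image_bytes)
```

One classifier per object class keeps each model's training set focused, at the cost of one API call per class per frame.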

The iOS app uses the Watson Speech to Text service to transcribe the user's voice input. Once the user selects a class to search for, we give aural feedback ("warmer", "colder", "found it", etc.) as the user moves the phone's camera around the room or surface.
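The keyword-detection step, picking out which known object class the user asked for from the Speech to Text transcript, can be sketched as follows. The class vocabulary is made up for illustration; the real app's list may differ.

```python
# Hedged sketch of keyword detection over a Speech to Text transcript
# (not the project's actual code). KNOWN_CLASSES is illustrative.
import string
from typing import Optional

KNOWN_CLASSES = {"keys", "wallet", "glasses", "phone"}

def detect_object_class(transcript: str) -> Optional[str]:
    """Return the first known object class mentioned in the transcript,
    or None if nothing matches."""
    for word in transcript.lower().split():
        word = word.strip(string.punctuation)  # drop "?", ",", etc.
        if word in KNOWN_CLASSES:
            return word
    return None
```

For example, a transcript like "Where are my keys?" would match the "keys" class.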

Challenges we ran into

  • Watson's free tier for image classification is limited to 250 calls
  • Reliable keyword detection in the Speech to Text output

Accomplishments that we're proud of

We got it working early enough that we could go home for a few hours' sleep!

What we learned

It was our first time using the Watson APIs, and there was a bit of a learning curve in getting to grips with the account structure and documentation.

What's next for AEye

Look into celebrity voice expansion packs in order to monetize this product!
