Inspiration

Our mate Lillian had to bring large-font textbooks to school because she was visually impaired. These were double the size and weight of ours, but it meant that Lillian was able to be a normal student and use her textbooks like everyone else.

Problem

There are 575,000 people in Australia who have a visual impairment. They are unable to do everyday tasks that others take for granted, like reading menus and knowing their surroundings. These are things that most people can do independently, and we wanted to give those with visual impairment the same empowerment everyone else has.

Users

  • Those with macular degeneration.
  • Those with low vision.
  • Those who are illiterate.
  • Those with dyslexia.
  • Those with amblyopia.
  • Those with glaucoma.
  • Those with cataracts.
  • Those with diabetic retinopathy.

What EyeDog does

EyeDog is a mobile application that converts images to audio by identifying what is in the camera frame and reading aloud any text that is shown.
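The flow described above can be sketched as a simple pipeline: detect objects and text in the camera frame, compose a spoken description, then pass it to text-to-speech. A minimal illustration of the composition step, assuming object detection (e.g. YOLO or the Google Vision API) and OCR have already produced their results; the function and its wording are hypothetical, not the actual app code:

```python
def compose_narration(labels, text):
    """Combine detected object labels and recognised text into one sentence
    that can be sent to a text-to-speech service to be read aloud."""
    parts = []
    if labels:
        # e.g. ["menu", "table"] -> "I can see menu, table."
        parts.append("I can see " + ", ".join(labels) + ".")
    if text:
        # Any OCR text found in the frame is read back verbatim.
        parts.append("The text reads: " + text.strip())
    return " ".join(parts) if parts else "Nothing recognised in frame."

print(compose_narration(["menu", "table"], "Soup of the day: pumpkin"))
```

In the app, the returned string would be handed to the listed google-text-to-speech-api to produce the audio output.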

How we built it

Challenges we ran into

  • Converting camera data into objects and text.
  • Implementing a working camera application.

Accomplishments that we're proud of

  • In-depth use of Google APIs and YOLO.
  • A working application.

What we learned

  • Building a camera application.
  • Working collaboratively with new and challenging technology.
  • Integration of object recognition and text recognition.

What's next for EyeDog

Expand our features, such as GPS navigation and more detailed object descriptions, to continue empowering those with vision impairment.

Built With

  • google-text-to-speech-api
  • google-translation-api
  • google-vision-api
  • python
  • react
  • yolo-object-identification