Inspiration

My goal in developing EyeSite is to help make the information and images on the internet more accessible to people who have lost their sight. We live in a visually driven world: through our eyes we gain and share information, view and exchange photos, and tap into the cumulative depth of knowledge the internet holds.

I have often imagined what my life would be like if I suddenly lost my vision. That fear is real and personal. My grandmother lost her vision to complications of diabetes, and with it a large part of her independence. I watched her frustration and struggle as she could no longer rely on a few simple keystrokes to open up a magical world that had once given her instant updates on current events, travel plans, the perfect combination of ingredients for a recipe, or simply a way to pass the time reading up on the neighborhood gossip. With her loss of vision, our universal gateway to information, the internet, grew dark. A screen reader partially remedied this, but without sight she could no longer rely on the images on a computer screen for guidance or information.

And she is not alone. As of 2012, there were 285 million visually impaired people in the world: 246 million with low vision and 39 million who were blind. EyeSite is my attempt to build a tool that makes the information and images on the internet accessible to people who suffer from loss of sight, like my grandmother.
What it does
EyeSite is a Google Chrome extension that takes the images on a webpage and describes their content for the visually impaired. Many web developers neglect to fill in the alt attributes that describe their images in HTML, so people who can only access the web through a screen reader are often simply unable to "see" images on the web: a screen reader can only dictate an image to them if someone has described it in text. Using the Google Vision API, my extension describes these alt-text-less images. It identifies the objects in a given picture, gauges how many people (if any) are in an image, detects whether those people are happy, sad, angry, or surprised, recognizes famous landmarks and logos, and describes the image's colors to the user. The extension also transcribes any text within images, which screen readers likewise are normally unable to detect, so the visually impaired can hear messages depicted within an image.
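To give a feel for how those Vision API results could become a spoken sentence, here is a minimal sketch of a description builder. The field names (`labelAnnotations`, `faceAnnotations`, `fullTextAnnotation`, and friends) follow the Vision API's documented response shape, but `describeImage` itself is a hypothetical helper, not EyeSite's actual source.

```javascript
// Sketch: turn a Vision API annotation object into one text description
// that a screen reader can dictate. Assumes the API's documented response
// fields; describeImage is an illustrative helper, not EyeSite's real code.
function describeImage(annotations) {
  const parts = [];

  // Top object labels, e.g. "dog", "grass".
  const labels = (annotations.labelAnnotations || [])
    .slice(0, 3)
    .map((l) => l.description);
  if (labels.length) parts.push(`This image may contain: ${labels.join(', ')}.`);

  // Faces, with a rough emotion summary from the likelihood fields.
  const faces = annotations.faceAnnotations || [];
  if (faces.length) {
    const happy = faces.filter((f) => f.joyLikelihood === 'VERY_LIKELY').length;
    parts.push(`${faces.length} person(s) detected${happy ? `, ${happy} smiling` : ''}.`);
  }

  // Famous landmarks and logos.
  const landmarks = (annotations.landmarkAnnotations || []).map((l) => l.description);
  if (landmarks.length) parts.push(`Landmark: ${landmarks.join(', ')}.`);
  const logos = (annotations.logoAnnotations || []).map((l) => l.description);
  if (logos.length) parts.push(`Logo: ${logos.join(', ')}.`);

  // Any text the OCR pass found inside the image.
  const text = annotations.fullTextAnnotation && annotations.fullTextAnnotation.text;
  if (text) parts.push(`Text in image: "${text.trim()}"`);

  return parts.join(' ');
}
```

A content script could then inject the returned string into the image's alt attribute so an ordinary screen reader picks it up.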
In a sentence, this extension makes the web a bit more accessible for the visually impaired.
How I built it
Back-end - Node.js; connects to the Google Vision API to describe images (labelDetection, faceDetection, landmarkDetection, logoDetection, documentTextDetection, imageProperties)
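The six detections listed above can all be bundled into a single Vision API request rather than six separate round trips. Below is an illustrative sketch (not EyeSite's actual source): the feature type strings are the Vision API's documented enum names, while `buildRequest` and the URL are my own illustrative names.

```javascript
// Illustrative sketch: one annotateImage request covering all six Vision
// features the back-end uses, so a single round trip returns labels, faces,
// landmarks, logos, text, and colors. Feature names are the API's enums.
const FEATURES = [
  'LABEL_DETECTION',
  'FACE_DETECTION',
  'LANDMARK_DETECTION',
  'LOGO_DETECTION',
  'DOCUMENT_TEXT_DETECTION',
  'IMAGE_PROPERTIES',
].map((type) => ({ type, maxResults: 10 }));

function buildRequest(imageUri) {
  return {
    image: { source: { imageUri } }, // refer to the image by its URL
    features: FEATURES,
  };
}

// With the official Node.js client library (assumed setup and credentials):
//   const vision = require('@google-cloud/vision');
//   const client = new vision.ImageAnnotatorClient();
//   const [result] = await client.annotateImage(buildRequest(url));
```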
Challenges I ran into
Crunch for time - I pulled an all-nighter to get the back-end working bug-free.
Getting async to work properly - I had never used async/await before, and I spent a lot of time shooting messages back and forth with a mentor, Will Gu (thanks again!), in order to understand it conceptually and debug it in my application.
Organizing 6 different calls to the Google Vision API, and figuring out how to structure the troves of awesome data it dumped on me.
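The async/await pattern at the heart of those two challenges can be sketched schematically (with stand-in promises rather than the real Vision calls): awaiting each request one after another pays for six sequential round trips, while Promise.all fires them concurrently and resumes once every response is in.

```javascript
// Schematic only: fakeDetect stands in for a real Vision API call.
const fakeDetect = (name) =>
  new Promise((resolve) => setTimeout(() => resolve(`${name} result`), 10));

async function describeConcurrently() {
  const names = ['label', 'face', 'landmark', 'logo', 'text', 'colors'];
  // All six "calls" start immediately; await suspends this function
  // until every promise settles, and results keep the original order.
  const results = await Promise.all(names.map(fakeDetect));
  return results;
}
```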
Accomplishments that I'm proud of
Getting a functional extension out by the end of LA Hacks that I feel actually has the power to improve the quality of life of people in the world around us.
Working more on the back-end than the front-end - usually it's the opposite for me.
What I learned
How to use async/await in Node.js, and how to structure a back-end around multiple API calls.
What's next for EyeSite
I spent most of the hackathon implementing functionality on the back-end, which I felt should take priority over the front-end's visual design. Still, it would be nice if the front-end were visually appealing for screenshots and demonstrations. First and foremost, however, I would like to ask people who are visually impaired what other features they would like to see in an accessibility extension.