Inspiration:

Our goal is to let people experience various eye defects and interact with the environment around them. More than 60 percent of people in the world wear glasses, meaning their eye defects are curable or manageable, and even more people live with defects such as myopia, hyperopia, glaucoma, astigmatism, macular degeneration, and the color blindness variants protanopia, deuteranopia, and tritanopia (the ones we mimicked with shaders for you to experience!). By empathizing, one can care more about the problems such people face and help them: architects can test their building designs using our tool, and developers can build and test assistive tools for the visually impaired. As a proof of concept, we tested one such tool, the Google Cloud Vision API.

What it does:

You can switch between various eye defects with the press of a button and experience an art gallery through them. Doing so helps you understand the immense difficulty visually challenged people have in enjoying such views. You can also toggle the eye-defect shader off to compare what a visually fit person sees with what a visually challenged person sees. Developers can use the same setup to test and build applications for the visually impaired; we demonstrated this by integrating the Google Cloud Vision API and the Google Cloud Text-to-Speech API into our project. With the click of a button, you can take a snapshot of what you see (similar to taking a picture on a phone) and send it to the Cloud Vision API. The labels it returns are converted to speech with the Cloud Text-to-Speech API and played back to the user. The application also exposes design flaws in the art gallery itself, showing that it can help improve the design of structures and thus ameliorate the lives of people with various eye defects.
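The snapshot-to-speech flow described above can be sketched in Python. This is a minimal illustration, not the project's actual script: the helper names are hypothetical, and it assumes the google-cloud-vision and google-cloud-texttospeech packages are installed with credentials configured via GOOGLE_APPLICATION_CREDENTIALS.

```python
# Sketch of the snapshot -> labels -> speech pipeline (hypothetical helper
# names; assumes google-cloud-vision and google-cloud-texttospeech are
# installed and Google Cloud credentials are configured).

def labels_to_sentence(labels):
    """Turn Vision API label strings into a sentence to speak to the user."""
    if not labels:
        return "I could not recognize anything."
    return "I can see: " + ", ".join(labels) + "."

def describe_snapshot(image_path, out_audio_path="description.mp3"):
    """Send a snapshot to the Vision API and write spoken labels as MP3."""
    # Imported lazily so labels_to_sentence works without the SDKs installed.
    from google.cloud import vision, texttospeech

    # Label detection: what objects/scenes are in the snapshot?
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = vision.ImageAnnotatorClient().label_detection(image=image)
    labels = [label.description for label in response.label_annotations]

    # Convert the labels to speech and save the audio for playback.
    text = labels_to_sentence(labels)
    tts = texttospeech.TextToSpeechClient()
    audio = tts.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3),
    )
    with open(out_audio_path, "wb") as f:
        f.write(audio.audio_content)
    return text
```

In the project itself, Unity hands the snapshot to a background script like this and plays the resulting audio back inside the headset.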

How we built it:

We built this for the Oculus Rift with Touch controllers using Unity. We created the eye defects with Unity shaders that use the depth buffer to mimic each condition. We also tested a visual-assistance tool, the Google Cloud Vision API: you take a snapshot of what you see, the API returns labels describing the scene, and the Google Cloud Text-to-Speech API converts those labels to audio that is played back to the visually challenged user. (This shows that our application can be used to test assistive tools for the visually challenged.) A background Python script drives the Google Cloud services and performs the required operations.
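The color-blindness shaders boil down to a 3x3 matrix multiply applied to each pixel's color. The sketch below illustrates that math in Python using a published protanopia simulation matrix (Machado et al., 2009, severity 1.0, intended for linear RGB in [0, 1]); the project itself performs the equivalent multiply inside a Unity fragment shader, and the function name here is our own.

```python
# Sketch of the per-pixel transform behind a protanopia shader.
# Matrix: Machado et al. (2009), protanopia at severity 1.0,
# meant to be applied to linear-RGB values in [0, 1].

PROTANOPIA = [
    [ 0.152286,  1.052583, -0.204868],
    [ 0.114503,  0.786281,  0.099216],
    [-0.003882, -0.048116,  1.051998],
]

def simulate_protanopia(rgb):
    """Apply the simulation matrix to one linear-RGB pixel and clamp."""
    out = []
    for row in PROTANOPIA:
        value = sum(m * c for m, c in zip(row, rgb))
        out.append(min(1.0, max(0.0, value)))  # clamp to displayable range
    return tuple(out)
```

Grays pass through unchanged (each matrix row sums to 1), while pure red collapses toward a dark olive tone, which is why protanopes struggle to tell red apart from dark colors.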

Challenges we ran into:

Integrating the Google Cloud Vision API with Unity, for which we found a workaround. We also had trouble creating filters for the various types of color blindness, and our limited experience with the graphics pipeline was a challenge in itself.

Accomplishments that we're proud of:

Creating 10 different eye defects purely with shaders, which allows for a smooth experience; building a good-looking art gallery in a short amount of time; and integrating the Google Cloud Vision API.

What we learned:

How amazing the Google Cloud Vision API is; how the GPU can accomplish tasks that would otherwise demand a large amount of CPU bandwidth; and how difficult everyday life is for people with various eye defects, especially color blindness.

What's next for Visionaries:

Develop shaders for more types of eye defects, and promote this application to architects and designers as a way to improve the design of structures for the visually challenged. We would also like to promote it to developers of tools for the visually impaired, so that they can test their tools and understand how to improve them.
