A Google Cardboard AR tool for designers to see how their work affects people with vision impairments.
Inspiration
Many designers do not realize that their work can exclude people with different abilities, often because they have never experienced those abilities themselves.
According to Microsoft’s Inclusive Design Booklet, “If we use ourselves as a starting point, we can end up with products designed for people who are young, English-speaking, tech-literate, able-bodied, and right-handed. Plus, those with money, time, and a social network. If we’re designing for ourselves as a baseline, we can overlook people with circumstances different from ours.”
So how can we change this? First, we looked at how other design fields, such as industrial design and architecture, address human diversity. Two great examples are the OXO potato peeler, originally created for people with arthritis, and the first electric toothbrush, the Broxodent, originally created for people with mobility impairments.
Our conclusion from these examples echoes Michael Wolff (founder of Wolff Olins): “When you include the extremes of everybody, that’s to say differently abled people of all sorts, then you produce things that work better for us all.” If designers create work for people with different vision abilities, their designs will be more legible for everyone.
What it does
By simulating different types of vision, the app raises designers' awareness of abilities different from their own. The designer experiences the app through a Google Cardboard. When the app launches, the designer selects the visual ability they would like to experience and learns key information about it. They can choose to view the world from the perspective of someone with color blindness, peripheral vision loss, or contrast sensitivity. If designers start to view their work through this lens, they can adjust it to be more inclusive.
How we built it
Camera: Used the built-in camera of an Android phone to extract video data.
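For illustration, here is a minimal sketch (not our actual code) of pulling raw preview frames from the legacy android.hardware.Camera API that phones of this generation support; the FrameGrabber class and its wiring are assumptions made for the example.

```java
import android.graphics.SurfaceTexture;
import android.hardware.Camera;

// Sketch: grab raw NV21 frames from the legacy Camera API so they can be filtered later.
public class FrameGrabber implements Camera.PreviewCallback {
    private Camera camera;

    public void start() throws java.io.IOException {
        camera = Camera.open();                              // rear-facing camera
        camera.setPreviewTexture(new SurfaceTexture(0));     // dummy preview target
        camera.setPreviewCallback(this);                     // deliver each frame here
        camera.startPreview();
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        // 'data' holds one NV21-encoded frame; hand it off to the filtering step.
    }

    public void stop() {
        camera.setPreviewCallback(null);
        camera.stopPreview();
        camera.release();
    }
}
```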
Filters/Image Processing: Used OpenCV and the Python Imaging Library to prototype original filters from color-blindness calculations, then ported those calculations to Java.
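As a rough illustration of the kind of calculation involved, the sketch below applies one commonly cited 3x3 approximation matrix for protanopia to a single ARGB pixel. The coefficients and the ColorBlindFilter name are illustrative, not the exact values we derived.

```java
// Illustrative protanopia simulation: multiply each pixel's RGB vector by a 3x3 matrix.
public final class ColorBlindFilter {
    private static final float[][] PROTANOPIA = {
        {0.567f, 0.433f, 0.000f},
        {0.558f, 0.442f, 0.000f},
        {0.000f, 0.242f, 0.758f},
    };

    /** Applies the matrix to one ARGB pixel and returns the filtered pixel. */
    public static int apply(int argb) {
        int r = (argb >> 16) & 0xFF, g = (argb >> 8) & 0xFF, b = argb & 0xFF;
        float[][] m = PROTANOPIA;
        int nr = clamp(m[0][0] * r + m[0][1] * g + m[0][2] * b);
        int ng = clamp(m[1][0] * r + m[1][1] * g + m[1][2] * b);
        int nb = clamp(m[2][0] * r + m[2][1] * g + m[2][2] * b);
        return (argb & 0xFF000000) | (nr << 16) | (ng << 8) | nb;
    }

    private static int clamp(float v) {
        return Math.max(0, Math.min(255, Math.round(v)));
    }
}
```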
Combining the camera and filters: Converted the camera data to images, applied the filters, and pushed the new frames to the user’s screen in real time.
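A simplified sketch of that pipeline, assuming NV21 preview frames and the illustrative ColorBlindFilter above; decoding through YuvImage/JPEG is one straightforward (though not the fastest) route.

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import java.io.ByteArrayOutputStream;

// Sketch: decode one NV21 frame into a Bitmap, filter its pixels, return it for display.
public final class FramePipeline {
    public static Bitmap process(byte[] nv21, int width, int height) {
        // 1. NV21 -> JPEG -> Bitmap.
        YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);
        byte[] jpeg = out.toByteArray();
        Bitmap frame = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length)
                .copy(Bitmap.Config.ARGB_8888, true);

        // 2. Apply the color-blindness simulation pixel by pixel.
        int[] pixels = new int[width * height];
        frame.getPixels(pixels, 0, width, 0, 0, width, height);
        for (int i = 0; i < pixels.length; i++) {
            pixels[i] = ColorBlindFilter.apply(pixels[i]);
        }
        frame.setPixels(pixels, 0, width, 0, 0, width, height);

        // 3. The caller pushes 'frame' to the on-screen view on the UI thread.
        return frame;
    }
}
```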
Cardboard UI: Used Unity and OpenGL to create the 3D interface for the Google Cardboard, and used the phone’s accelerometer to make the user feel immersed in the augmented reality.
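Our head tracking runs through Unity and the Cardboard viewer, but as a plain-Android illustration of accelerometer-driven orientation, reading the sensor with SensorManager looks roughly like this (HeadTracker is a hypothetical name for the example).

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Sketch: read accelerometer values so the rendered view can follow head movement.
public class HeadTracker implements SensorEventListener {
    private final SensorManager sensorManager;

    public HeadTracker(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
    }

    public void start() {
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0], y = event.values[1], z = event.values[2];
        // Feed x/y/z into the camera orientation so the scene tracks head motion.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    public void stop() {
        sensorManager.unregisterListener(this);
    }
}
```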
Design: Used Sketch to mock up the final UI, to be implemented at a later time.
Challenges we ran into
- Google Cardboard has little documentation
- Streaming and processing camera data in real time causes lag
- Live image processing is not well supported on Android
- Calculating the color-blindness values
- And lastly, of course, time
Accomplishments that we're proud of
- Focused on end users' needs without getting too caught up in the excitement of the technology
- Learned new techniques
What we learned
- Cross-disciplinary teams are key to success
- Android Image Processing
- Developing for the Google Cardboard
What's next for Insight
After the hackathon:
- Finish UI
- Add pages with information about the different abilities
- Introduce the app to designers and get feedback