XAble is exploring ways to close the accessibility divide in XR by envisioning an open-source toolkit that helps developers integrate cross-platform, compliant accessibility support for users with disabilities.

What it does

Our team created 5 demos to illustrate a variety of ways to make XR accessible:

DEMO A - Low Vision Accessibility: We created a demo scene with 3 objects that can be better understood by a person with low vision. One object is active at a time and has a red outline to make it easier to see against the background. The active object can be brought closer to the user by holding down the top button on the Daydream controller. If they want to see the object from a different angle or the other side, they can simply turn their head before clicking the enlarge button and it will be rotated in front of them. These objects also give the developer the option of including alternate text and/or audio. When the user holds the button on the right side of the controller, the text is presented in large print one word at a time using a speed-reading technique called Spritz. Each word has one letter in red font that is always aligned at the same location; the rest of the word is in white font on a black background. The speed of the playback can be adjusted by the user in their accessibility settings. This technique allows a lot of text to be read quickly, both by individuals with low vision and by those with normal vision. If the object has an audio file, pressing the left button on the side of the Daydream controller plays that audio. A future goal would be to use a text-to-speech service if no audio was specified. We would also expand the feature to use the camera to identify text in the real world and present it enlarged or as audio to the user in mixed reality.
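The Spritz-style display described above can be sketched in a few lines. This is a minimal, illustrative Python version of the logic, not our Unity code: the pivot-letter rule is an assumption based on common descriptions of Spritz's "optimal recognition point", and the function names are ours.

```python
def orp_index(word: str) -> int:
    """Heuristic position of the red 'pivot' letter.

    Shorter words pivot near the start; longer words slightly further in.
    This rule approximates the Spritz optimal recognition point.
    """
    n = len(word)
    if n <= 1:
        return 0
    if n <= 5:
        return 1
    if n <= 9:
        return 2
    if n <= 13:
        return 3
    return 4

def split_for_display(word: str):
    """Split a word into (left, pivot, right) so the pivot letter can be
    drawn in red and kept at a fixed screen position."""
    i = orp_index(word)
    return word[:i], word[i], word[i + 1:]

def frame_duration(wpm: int = 300) -> float:
    """Seconds to display each word at the user's chosen reading speed."""
    return 60.0 / wpm
```

For example, `split_for_display("accessibility")` yields `("acc", "e", "ssibility")`, so the "e" is rendered in red and anchored at a fixed point while the surrounding letters change word to word.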

DEMO B - Visual Color Accessibility: We built an illustrative proof of concept around visual color accessibility challenges, specifically protanopia, one of the most common forms of color blindness. We saw this experience both as a way for people to experience what it is like to be color blind and as a look beyond current interface accessibility options: the emergence of computer vision, AI, and sensors can start to open up the world in ways that were not previously possible. Specifically, interpreting greens and reds and overlaying them with unique patterns could potentially help someone start to perceive a difference while still having sensitivity to only two wavelengths.
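The protanopia effect itself can be approximated with a single 3x3 matrix applied per pixel in linear RGB. The sketch below uses the widely cited Machado et al. (2009) protanopia matrix (severity 1.0); it is a stand-alone Python illustration of the transform our GPU shader performs, not the shader itself.

```python
# Protanopia simulation as a 3x3 matrix in linear RGB space.
# Matrix values: Machado et al. (2009), severity 1.0 (an assumption on our
# part -- the actual shader may use a different published matrix).
PROTANOPIA = [
    [0.152286, 1.052583, -0.204868],
    [0.114503, 0.786281, 0.099216],
    [-0.003882, -0.048116, 1.051998],
]

def simulate_protanopia(rgb):
    """Apply the simulation matrix to one linear-RGB pixel, clamping to [0, 1]."""
    r, g, b = rgb
    return tuple(
        min(1.0, max(0.0, row[0] * r + row[1] * g + row[2] * b))
        for row in PROTANOPIA
    )
```

Pure red maps to a much darker color (reflecting a protanope's reduced sensitivity to long wavelengths), while neutral grays pass through essentially unchanged.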

DEMO C - We explored accessible approaches for people with cognitive or memory issues. We looked at a solution combining computer vision and speech-to-text to help locate lost items, or to identify something an elderly user needed but was unable to remember or locate.

DEMO D - Icons for Accessibility: We created an initial draft of an icon library that can be used cross-platform to make accessibility features easier to include in a user interface and more recognizable across multiple devices. These icons will be available after the hackathon and will hopefully grow as we expand.

DEMO E - HoloLens Adaptive Controller: The “natural UI” of the HoloLens is very engaging, but what happens when air-tap and tap-and-hold aren’t natural, or even possible? We wanted to see if an Xbox Adaptive Controller could be set up to make experiences on the HoloLens more accessible. Our concept of a new UI paradigm for making HoloLens and Mixed Reality experiences accessible opens up new types of experiences.

We drafted a README file with a Hackathon project overview and a starting point for inviting developers to collaborate on accessibility for XR.

How we built it

DEMO A - Low Vision Accessibility: This demo was built in Unity and prototyped on the Oculus Rift as well as the Google Pixel using Daydream. It started with a 360 image inside an aquarium that had a lot of small elements which may be hard to see for someone with reduced eyesight. My son, who is legally blind due to his albinism, helped me understand what his acuity was like by telling me whether he could see the difference between different versions of that environment. I used Photoshop to create several versions with different levels of blur (Filter > Gaussian Blur): 5px, 5px applied twice, 5px applied three times, 10px, 10px applied twice, and 10px applied three times. My son could not tell the difference between the high-res image and the 5px blur. Knowing this helped us design aids to enhance the small content in the scene. I placed a 3D scan of a shark positioned to match other sharks in the image, then wrote enhancements to enlarge the shark in place. This was good, but my son preferred to have it move up close so that he could observe all of the details. The next iteration was to move the shark up close while holding down a button. As a nice side effect, we discovered that by turning his head, he could see the other side of the object when enlarging it. After getting one object working, I added two others - a small falcon and a handicapped parking sign. I added a method for selecting one object at a time and bundled all of the code into a unitypackage that could be imported easily into any Unity project. Any object in the scene with the XableObject script on it would be selectable and accessible, and only the active object would be enlarged at a time. The thinking was to emulate the tab-focus feature found in HTML web pages. Next was adding an outline filter to highlight and indicate which object was active. Then I added the speed-reading feature to show the alt text in large print in front of the user.
We referenced the speed-reading app Spritz and used the Unity TextMesh Pro object on a black background. Spritz shows one word at a time with a single letter in red and the rest in white. Our brains can read much faster with this presentation, and it enables a long message to be shown quickly to the user. I added that functionality to a new button press, so the user could enlarge the object by holding one button or speed-read the text by holding another. Finally, I added a third feature to play an audio clip for the object. The next feature would have been to use a text-to-speech service when no audio clip was specified, but we hit the time limit; that will have to be added as a continuation of this project.
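The "bring the object up close" behavior boils down to placing the object along the camera's forward vector at a fixed distance, which is also why turning your head before pressing the button shows a different side. The sketch below is plain-Python vector math standing in for Unity's Transform/Camera API; the function names and default distance are illustrative.

```python
import math

def normalize(v):
    """Scale a 3-component vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def enlarge_position(camera_pos, camera_forward, distance=0.5):
    """World position for the enlarged object: straight ahead of the user.

    Because the position is computed from the camera's current forward
    vector, looking in a different direction before triggering the enlarge
    presents a different face of the object.
    """
    f = normalize(camera_forward)
    return tuple(p + d * distance for p, d in zip(camera_pos, f))
```

For a camera at eye height looking down +Z, `enlarge_position((0.0, 1.6, 0.0), (0.0, 0.0, 2.0), 0.5)` places the object half a meter straight ahead at `(0.0, 1.6, 0.5)`.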

DEMO B - Color Blind Augmented Study and Accessibility Simulator: This demo was an attempt to emulate the visual difficulties color blind individuals have in seeing their everyday world. The idea replicates some previous accessibility experiences but brings them into a 3D photo-scanned room. We wrote a GPU shader that adjusts the surface colors of the scanned meshes, muting the red and green color channels to match the effect of having protanopia. We then extended the shader to splice two divergent patterns into the red and green color spectra, attempting to let a color blind person differentiate between the two with blended patterns. It will be interesting to build this out and see if it is actually effective for someone who experiences this disorder.
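The pattern-splicing step can be pictured as a per-pixel function: classify each pixel by its dominant channel and stamp a different screen-space pattern on reds versus greens. The real version is a GPU fragment shader; this Python port is only a sketch of the idea, with the specific patterns, period, and strength chosen for illustration.

```python
def pattern_overlay(rgb, x, y, period=8, strength=0.3):
    """Overlay distinct patterns on red- and green-dominant pixels.

    Red-dominant pixels get diagonal stripes; green-dominant pixels get a
    dot grid. Even after a protanopia transform collapses the hues, the
    two spectra remain distinguishable by texture.
    """
    r, g, b = rgb
    if r > g and r > b:            # red-dominant: diagonal stripes
        if (x + y) % period < period // 2:
            return tuple(max(0.0, c - strength) for c in rgb)
    elif g > r and g > b:          # green-dominant: dot grid
        if x % period == 0 and y % period == 0:
            return tuple(max(0.0, c - strength) for c in rgb)
    return rgb                     # other hues pass through unchanged
```

A pure red pixel on a stripe is darkened while a blue pixel at the same coordinates is untouched, so the patterning only perturbs the ambiguous part of the spectrum.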

DEMO C - For this demo we used a combination of an object detection algorithm and a speech-to-text service. For the former, we ran YOLO against a camera feed, successfully and accurately detecting a great number of objects. For the latter, we implemented calls to Google's speech API to translate the user's speech to text. Using that text, we connected both applications and isolated the objects we were interested in localizing.
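The glue between the two systems is simple: match words from the transcribed request against the labels the detector returns. This is an illustrative sketch of that connection; the function name and the `(label, bbox)` detection format are our assumptions, not the actual demo code.

```python
def find_requested_objects(transcript, detections):
    """Match words in the user's transcribed speech against detector labels.

    transcript: speech-to-text output, e.g. "where are my keys"
    detections: list of (label, bounding_box) tuples from the object
                detector, e.g. [("keys", (120, 80, 40, 20))]
    Returns the detections whose label appears in the transcript.
    """
    words = set(transcript.lower().split())
    return [(label, bbox) for label, bbox in detections
            if label.lower() in words]
```

So a request like "where are my keys" against detections for "keys" and "cup" returns only the bounding box for the keys, which can then be highlighted for the user.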

DEMO D - Icons were created in Adobe XD and exported as PNG files for developers to use across demos. Future plans include making them available in both PNG and SVG formats, as well as fully transparent versions.

Challenges we ran into

DEMO A - Getting the enlarged object placed in front of where the user was looking; using two colors on a single TextMesh Pro object; rapidly prototyping on Daydream (finding Instant Preview solved this).

DEMO B - Working the shader code to correctly simulate the visual effect of color blindness was tricky, but the real issue arose when we attempted to augment the green and red color spectra with patterns. The effect works for now, but it needs refinement and testing to figure out which patterns work best and what range of colors are useful for edge detection.

DEMO C - Getting the API calls working.

DEMO D - How can we expand familiar icons to address specific accessibility features in XR? Coming up with symbols that represent concepts like color blindness, low vision, and manipulating a hologram (or another form of digital object) is an opportunity to build a new design language for accessible design.

DEMO E- While the Xbox controller is technically supported by the HoloToolkit, we encountered great difficulty triggering specific actions from the controllers. After consulting with Microsoft we believe it’s a temporary limitation due to the transition to a newer and better supported Mixed Reality Toolkit, and we look forward to exploring this after the Hackathon.

Accomplishments that we're proud of

We came together as a team quickly, using Design Thinking as a shared language and methodology, and drew on the team's diverse range of expertise to imagine future possibilities.

We created an initial draft of an icon library that can be used cross-platform.

We drafted a README file with a Hackathon project overview and a starting point for inviting developers to collaborate on accessibility for XR.

We came up with a new UX/UI paradigm for experiencing spatial computing and immersive VR using adaptive technology.

The speed-reading implementation works really well in VR and would be great for other use cases, such as reading news, books, and websites. It benefits fully sighted individuals as well.

What we learned

XR is still a new medium, and there are a lot of accessibility gaps that need to be addressed. We are proud of the ideas and demos we came up with in a weekend, but we are even more excited to continue building this out to address additional disabilities. XR technology can go beyond making content accessible to everyone: it can also enhance users' ability to understand and interact with the real world.

What's next for XAble

Publishing on GitHub and sparking a broader conversation in the XR and accessibility communities to contribute and establish tools and practices. Additional documentation will explore how XR can be viewed through different lenses of accessibility, using the WAI standards as a starting point. We bought the domain name to promote our continued efforts, and we will continue to evolve the README file as a starting point for inviting developers to collaborate on accessibility for XR.
