Inspiration
We noticed that many AR/VR apps don't build experiences for people who are hard of hearing or have hearing impairments. AR/VR is the future, and it should be accessible to all. Given that the theme of the MIT Reality Hack was "Building a Utopia," we wanted to follow the old adage "the world is what you make of it" and create a world where anyone can easily build accessible, enriched AR/VR applications.
What it does
The Accessibility Toolkit creates context-aware subtitles for VR/AR projects. Subtitling is often skipped because it takes so long to do well; by making subtitle creation more efficient, we encourage developers to include subtitles for users with hearing impairments and make their projects more inclusive for everyone.
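To give a flavour of the idea, here is a minimal sketch of a timed subtitle in Unity. This is a simplified illustration, not the package's actual API:

```csharp
// Minimal sketch: show a timed line on a world-space TextMesh.
// Illustrates the concept only; the toolkit's real API differs.
using System.Collections;
using UnityEngine;

public class SimpleSubtitle : MonoBehaviour
{
    [SerializeField] TextMesh label; // assign a world-space text object

    public void Show(string line, float seconds)
    {
        StopAllCoroutines(); // replace any line still showing
        StartCoroutine(ShowRoutine(line, seconds));
    }

    IEnumerator ShowRoutine(string line, float seconds)
    {
        label.text = line;
        yield return new WaitForSeconds(seconds);
        label.text = string.Empty;
    }
}
```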
How we built it
We built it on the Unity Package Manager (UPM) architecture, split across two repos: a public-facing package that developers install, and a demo app that consumes the package.
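For anyone unfamiliar with UPM, the package repo is described by a package.json manifest at its root; ours looks roughly like the following (the name and values here are placeholders, not our actual manifest):

```json
{
  "name": "com.example.accessibility-toolkit",
  "displayName": "Accessibility Toolkit",
  "version": "0.1.0",
  "unity": "2021.3",
  "description": "Context-aware subtitles for AR/VR projects built in Unity."
}
```

The demo app then pulls the package in through its own manifest, which kept the public API honest: if the demo couldn't do something through the package, neither could our users.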
Challenges we ran into
Context-aware subtitles are difficult to position in 3D space relative to the user, since doing it well requires a lot of spatial math. This is something we plan to address after the hackathon.
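For the simpler head-locked case, the placement math looks roughly like the sketch below (field names and tuning values are illustrative, not our actual component). The genuinely hard part is the context-aware version: anchoring text near whoever is speaking without clipping through geometry.

```csharp
// Sketch: keep a subtitle floating in front of the user's head.
using UnityEngine;

public class FollowHeadSubtitle : MonoBehaviour
{
    [SerializeField] Transform head;             // e.g. the XR camera transform
    [SerializeField] float distance = 2f;        // metres in front of the user
    [SerializeField] float heightOffset = -0.3f; // slightly below eye level
    [SerializeField] float smoothing = 5f;       // higher = snappier follow

    void LateUpdate()
    {
        // Flatten the forward vector so the text doesn't dive
        // when the user looks down.
        Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        Vector3 target = head.position + forward * distance
                       + Vector3.up * heightOffset;

        // Smoothly chase the target instead of hard-locking to it.
        transform.position = Vector3.Lerp(transform.position, target,
                                          smoothing * Time.deltaTime);

        // Billboard the text so it always faces the user.
        transform.rotation = Quaternion.LookRotation(transform.position - head.position);
    }
}
```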
Accomplishments that we're proud of
When we conducted user interviews, we found that many of the teams here at the hack would be very interested in using our toolkit. The interviews went further, showing us how our work would affect hard of hearing and hearing impaired people. In the end, the hard of hearing folks we spoke with taught us that accessibility isn't just about making your app more accessible; it's about making your AR/VR experiences better.
What we learned
Subtitling isn't just for hard of hearing and hearing impaired folks! It also serves hearing users, and it's very useful for internationalization and for experiences in a language you're not fluent in. Subtitles are just a useful feature in general.
What's next for Accessibility Toolkit For Unity
We have a large backlog of features we would like to implement: SRT file parsing, localization, dynamic context-based positioning, better font support, script event blocking, and cloud-based text-to-speech.
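As a taste of the first item, here's a rough sketch of the kind of SRT reader we have in mind: a minimal parser for the common case, not a full spec implementation, with all names our own invention.

```csharp
// Sketch: parse SRT cues ("index, timing, text" blocks separated by blank lines).
using System;
using System.Collections.Generic;
using System.Globalization;

public struct SrtCue
{
    public TimeSpan Start, End;
    public string Text;
}

public static class SrtParser
{
    public static List<SrtCue> Parse(string srt)
    {
        var cues = new List<SrtCue>();
        var blocks = srt.Replace("\r\n", "\n")
                        .Split(new[] { "\n\n" }, StringSplitOptions.RemoveEmptyEntries);
        foreach (var block in blocks)
        {
            var lines = block.Split('\n');
            if (lines.Length < 3) continue; // need index, timing, and text
            var times = lines[1].Split(new[] { " --> " }, StringSplitOptions.None);
            cues.Add(new SrtCue
            {
                Start = ParseTime(times[0]),
                End   = ParseTime(times[1]),
                Text  = string.Join("\n", lines, 2, lines.Length - 2),
            });
        }
        return cues;
    }

    // SRT timestamps look like "00:00:01,000" (comma before milliseconds).
    static TimeSpan ParseTime(string s) =>
        TimeSpan.ParseExact(s.Trim().Replace(',', '.'),
                            @"hh\:mm\:ss\.fff", CultureInfo.InvariantCulture);
}
```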
We also plan to add features beyond hearing accessibility, like a magnifying glass for seeing distant objects and haptic feedback for tooltips on smaller objects.