Inspiration

There are an estimated 36 million blind people worldwide and a further 217 million with severe visual impairments. This represents a significant potential customer base that is currently under-served by products and services designed for sighted users.

The increasingly complex, high-resolution GUIs used today for project management on the web, desktop, and mobile may seem natural and intuitive to sighted users. But for visually impaired or differently-abled customers, the experience of products and services delivered through computing devices and interfaces that demand a high degree of visual learning and understanding will always be sub-par.

It is true that GUI operating environments like Windows, macOS, and Android, and document formats like HTML and PDF, have made huge strides in accessibility for visually impaired and disabled users. But user interfaces today still make fundamental assumptions about the medium in which information is presented and the mode in which users interact with it, and these assumptions can make desktop or web applications, forms, and documents frustrating and time-consuming for sight-impaired users. GUI applications may conform to desktop accessibility guidelines, and websites, pages, and documents can follow recommendations like WCAG and ARIA to make applications and content accessible to a wider spectrum of users. But navigation through desktop and web applications is still spatially oriented on a persistent visual surface: sighted users can see, immediately memorize, and rank in importance navigation elements like windows, menus, trees, buttons, text, and headers, while using a visual input marker like a mouse cursor to select the element or content they need. Information like calendars, tables, and forms, when presented visually, uses layout as an essential part of its meaning, and applications rely on a user's ability to quickly grasp how the visual layout of elements prioritizes information and the steps needed to complete a task or process.

Non-visual users who rely on assistive technology like screen readers must often wade through a sea of elements and text before finding the desired function or content, and must rely on slow trial-and-error, repetition, and memory to navigate a GUI efficiently. The increasing complexity of desktop and mobile GUIs today may benefit experienced visual users and seem intuitive to them, but it can leave non-visual or differently-abled users far worse off than older, simpler interfaces did. Today's complex and intricate GUIs make assumptions about the visual acuity, dexterity, and short-term information-processing abilities of users that end up excluding a significant proportion of them.

Conversational user interfaces are among the easiest and most accessible forms of human-computer interaction, and they have seen a revival on desktop and mobile devices, powered by sophisticated natural language understanding and machine learning services running in the cloud. AI-powered voice-activated assistants like Alexa and Siri have finally given visually impaired and elderly computer users an interface that feels natural and efficient to use.

But most assistants, chatbots, and CUIs today still assume that the user can see the active conversation, activity, or skill on-screen and can easily navigate and click on buttons, windows, or other widgets when needed to complete an interaction. CUIs used to access information like customer service may simply act as a dispatcher to widgets like calendars or web pages that still depend heavily on a visual medium for presenting information. For visually impaired users, a web page, calendar, or task widget may cause a screen reader to flood the user with information, with no way to filter or narrow it down to what the user actually needs. And the closed-source nature of many of these cloud-based assistants means that hacking on the software can only happen inside a walled garden that cannot fundamentally alter how the assistant works.

System administration and computer programming are popular career choices for visually impaired people, and many learn to work at speed using screen readers to drive modern GUI development tools, though they often heavily favor command-line interfaces and console-based editors and tools as alternatives. However, command-line tools still have a far steeper learning curve for non-sighted users: they require users to remember and input the exact syntax of each command, while learning to navigate and process large volumes of text output that still relies on visual layout to convey meaning. Command-line tools used to administer complex tech stacks like Red Hat's OpenShift are far more accessible than the GUI alternatives, but they still demand memorization, precise input, and navigation of large text buffers, and so they still tend to stress the weaknesses of non-visual users rather than their strengths.
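To make that memorization burden concrete, here is a minimal, hypothetical C# sketch (not code from this project) of the kind of table a conversational layer could keep between recognized spoken intents and the exact `oc` command syntax an OpenShift administrator would otherwise have to recall and type precisely. The intent names, slot values, and helper names here are illustrative assumptions, not any real NLU service's API.

    // Hypothetical sketch: map a recognized natural-language intent to the
    // exact `oc` CLI syntax a user would otherwise have to memorize.
    using System;
    using System.Collections.Generic;

    class IntentToCommandMapper
    {
        // A tiny illustrative intent table; a real NLU service would supply
        // the intent name and entity slots extracted from a spoken utterance.
        static readonly Dictionary<string, string> CommandTemplates = new()
        {
            ["ListPods"]    = "oc get pods -n {0}",
            ["ScaleDeploy"] = "oc scale deployment {1} --replicas={2} -n {0}",
            ["PodLogs"]     = "oc logs {1} -n {0}",
        };

        // Fill the chosen template with the entity slots from the utterance.
        static string MapIntentToCommand(string intent, params string[] slots) =>
            string.Format(CommandTemplates[intent], slots);

        static void Main()
        {
            // "Show me the pods in the billing project."
            Console.WriteLine(MapIntentToCommand("ListPods", "billing"));
            // "Scale the api deployment in billing up to three replicas."
            Console.WriteLine(MapIntentToCommand("ScaleDeploy", "billing", "api", "3"));
        }
    }

The point of the sketch is where the burden of exactness lives: in a conversational interface it sits in software, not in the user's memory.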

To adequately serve the millions of blind and visually impaired potential users, organizations need to look beyond mere accessibility toward open, truly inclusive interfaces that cater directly to the strengths of non-visual users while minimizing their weaknesses. Auditory user interfaces, like Emacspeak and the other work pioneered by T.V. Raman, can fundamentally change the user experience for millions of non-visual users and for the organizations that invest in this technology.

Please see the GitHub repo and video playlist for more updates.

https://www.youtube.com/playlist?list=PLliQaMLXJLGqQrHuK-NilgxeAUTvoFzZ_

Built With

  • asr
  • csharp
  • cui
  • nlu