In the United States alone, 3.2 million people are living with visual impairment, and that number is expected to more than double to 8 million by 2050. XR has made leaps and bounds in improving accessibility for everyone, but it is not without its blind spots.
Current solutions that give low-vision individuals access to their environment are purely functional, navigation-based applications: they only warn about potential danger ahead, and they rely on restrictive technology that performs only in controlled environments. We have a bigger vision.
Introducing BEN, a new solution that gives the visually impaired the ability to turn landscapes into soundscapes. Our solution uses a depth-sensing camera, enhanced with a real-time machine learning algorithm, to identify objects and convert them into a harmonious soundscape as the user moves through the environment. Every stationary object in the environment is associated with a unique ambient sound, while haptics indicate whether it is interactive, creating a one-of-a-kind, beautifully synesthetic experience.
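To make that pipeline concrete, here is a minimal sketch of the object-to-soundscape mapping, assuming detections (label, depth, interactivity) arrive from the depth camera and recognition model; the names Detection, SOUND_LIBRARY, and sonify, along with the specific sound files, are illustrative placeholders rather than BEN's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical detection record: in a real build, label and distance would
# come from the depth-sensing camera and the object-recognition model.
@dataclass
class Detection:
    label: str          # recognized object class, e.g. "tree"
    distance_m: float   # depth estimate from the camera, in meters
    interactive: bool   # whether the object can be interacted with

# Assumed per-object ambient-sound mapping; a deployment would load a
# per-space "BEN library" instead of this hard-coded example.
SOUND_LIBRARY = {
    "tree": "wind_chimes.wav",
    "bench": "low_cello_drone.wav",
    "fountain": "water_harp.wav",
}

def sonify(detections):
    """Turn one frame's detections into (sound, volume, haptic) cues.

    Volume falls off with distance so nearer objects sound louder;
    a haptic pulse is flagged only for interactive objects.
    """
    cues = []
    for d in detections:
        sound = SOUND_LIBRARY.get(d.label, "soft_pad.wav")  # fallback tone
        volume = max(0.0, min(1.0, 1.0 / (1.0 + d.distance_m)))
        cues.append({"sound": sound,
                     "volume": round(volume, 2),
                     "haptic_pulse": d.interactive})
    return cues

# Example frame: a bench two meters away and a fountain farther off.
if __name__ == "__main__":
    frame = [Detection("bench", 2.0, True), Detection("fountain", 6.5, False)]
    for cue in sonify(frame):
        print(cue)
```

In this sketch the cues would be handed to a spatial-audio engine and a haptic controller each frame; the distance-based volume curve is one simple choice for conveying proximity without overwhelming the listener.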
We imagine a future where all community spaces have their own BEN libraries of beautiful tonalities, not only encouraging BEN users to participate in these experiences, but also giving these spaces another platform to identify themselves and reach their communities. What's more, BEN will eventually have an immersive multiplayer mode where users can not only interact with their environment, but also associate unique sounds with people that BEN recognizes.
BEN is designed to be scalable, adaptable, and easy to use, allowing the visually impaired to access the beauty of the world around them in a whole new way.