Inspiration

Imagine this: a blind person walks into an unfamiliar space. They start to feel their way around the perimeter to familiarize themselves with it, as they normally would, but then stop: today, they have something better. They turn on their Magic Leap. Within a few seconds, it has built up a spatial map of the environment and starts communicating key information to them. They confidently stride through the middle of the room, moving around obstacles and heading straight for their goal.

Magic Leap Can Help

When sighted people walk into an unfamiliar space, their eyes quickly tell them almost everything they need to know. Within a second or two, they understand where the walls, doors, and furniture are located, and they could dash through the room in an instant if they needed to.

The visually impaired don't have that luxury. With a few notable exceptions, they have to slowly feel their way around a new place, building an understanding of its layout through touch, hearing, and smell. They risk hitting their heads on hanging lamps or low doorways that their hands or canes don't catch, and they are forced to move more slowly and cautiously than the sighted.

Research such as that of Lahav and Mioduser (2004) has shown that tools that give blind users additional spatial information can substantially reduce the time it takes to become familiar with a new area. Tools like Aira and Be My Eyes do this by borrowing sighted people's vision, while ones like OxSight amplify contrast or color.

With Magic Leap's capacity for spatial computing, however, we have the opportunity to truly digitize the space around users and provide information based on a real 3D model. Most Magic Leap apps show users things that aren't really there. Ours will help show what is there, acting as a pair of eyes so that the visually impaired can navigate with the same confidence as the sighted.
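To make the idea concrete, here is a minimal, hypothetical sketch of the kind of logic involved, written in plain Python with NumPy rather than against the actual Magic Leap SDK: given points sampled from the headset's spatial mesh, find the nearest obstacle inside a cone along the user's heading and map its distance to an audio cue rate. The function names, cone angle, and distance-to-beep mapping are illustrative assumptions, not part of our implementation.

```python
import numpy as np

def nearest_obstacle_ahead(points, position, heading, max_angle_deg=20.0):
    """Distance to the closest point inside a cone along `heading`,
    or None if the path ahead is clear.

    points   -- (N, 3) world-space samples from the spatial mesh (hypothetical input)
    position -- (3,) user head position
    heading  -- (3,) unit vector the user is facing
    """
    offsets = points - position
    dists = np.linalg.norm(offsets, axis=1)
    keep = dists > 1e-6                     # ignore points at the head itself
    offsets, dists = offsets[keep], dists[keep]
    cos_angles = offsets @ heading / dists  # cosine of each point's angle off the heading
    in_cone = cos_angles > np.cos(np.radians(max_angle_deg))
    return float(dists[in_cone].min()) if in_cone.any() else None

def cue_interval(distance, near=0.5, far=4.0):
    """Map obstacle distance to a beep interval in seconds:
    closer obstacles beep faster; beyond `far` metres no cue is played."""
    if distance is None or distance > far:
        return None
    t = (max(distance, near) - near) / (far - near)
    return 0.1 + 0.9 * t                    # 0.1 s when very close, 1.0 s at the far edge

# Toy example: a small wall of mesh points 2 m directly ahead of the user.
ys, zs = np.linspace(-0.5, 0.5, 5), np.linspace(-0.5, 0.5, 5)
wall = np.array([[2.0, y, z] for y in ys for z in zs])
d = nearest_obstacle_ahead(wall, np.zeros(3), np.array([1.0, 0.0, 0.0]))
print(f"nearest obstacle {d:.1f} m ahead, beep every {cue_interval(d):.2f} s")
```

On the headset, the same loop would run against the live world mesh and drive spatialized audio or speech, but the core pattern of digitizing the space, querying it, and sonifying the result stays the same.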

With the Magic Leap's versatility, we can also explore other visual information normally denied to the blind, such as communicating color or reading text aloud. We'd also like to explore the possibility of avatars, such as a mixed reality guide dog that could highlight salient points of the environment for partially sighted users.

The Team

Dylan Fox is a UX designer and VR/AR specialist. He spent the summer designing and developing a Microsoft HoloLens application for Siemens, and is currently working on a capstone project at UC Berkeley examining how XR can be made more accessible. He has also published and presented research on virtual reality as a tool for engineers and 3D artists at HCI International 2018.

Soravis (Sun) Prakkamakul is an engineer and creative technologist focusing on mixed reality and assistive technology. He was the first AR developer at Digimagic, a creative agency focusing on events and interactive installations. In 2016, he helped Meditech, a Thai assistive tech startup, build a communication aid device for people with ALS. He's currently working on a research project at UC Berkeley to improve text input in VR. A hackathon enthusiast, he has won awards at hackathons hosted by TechCrunch, SXSW, SF Music Tech Summit, and Intuit.

Built With

