I met Yousef, a blind student at the University of Michigan, a couple weeks back when he asked me for help getting to his class while Tim Caine came to speak on campus. There were tents and vehicles all around the Diag, our main avenue between classes, and Yousef couldn't navigate the changed environment without bumping into everything. His cane might catch something, but his head would catch what it missed.
He said new environments were almost impossible to navigate without getting lost or banging himself up, even with his white cane. But in an age of autonomous cars and 3D sound, shouldn't the blind be able to detect the objects around them?
What it does
We use the Kinect to build a real-time 3D map of the immediate environment and turn it into a sound map of the objects in it. Based on where objects such as walls and people are in relation to you, we ping each object with a sound placed in 3D audio space according to its proximity and direction. This lets you track multiple objects around you and navigate from point A to point B without sight.
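The core idea of proximity-and-direction pinging can be sketched as a simple mapping from an object's relative position to audio cues: louder when closer, panned toward the object's bearing. This is an illustrative Python sketch with made-up names, not our Unity implementation:

```python
import math

def ping_parameters(obj_x, obj_z, max_range=4.0):
    """Map an object's position relative to the wearer (obj_x = lateral
    metres, obj_z = forward metres) to audio cues for a ping.
    Returns None when the object is beyond pinging range."""
    distance = math.hypot(obj_x, obj_z)
    if distance > max_range:
        return None  # too far away to bother the wearer with
    volume = 1.0 - distance / max_range  # 1.0 right at the wearer, 0.0 at max range
    # Bearing mapped to stereo pan: -1.0 hard left, 0.0 ahead, +1.0 hard right.
    pan = max(-1.0, min(1.0, math.atan2(obj_x, obj_z) / (math.pi / 2)))
    return {"volume": round(volume, 2), "pan": round(pan, 2)}
```

In practice a 3D audio engine like Unity's handles the panning and attenuation for you once an audio source is placed at the object's coordinates; the sketch just shows the relationship the wearer ends up hearing.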
How we built it
We use the Microsoft Kinect to build a real-time map of the objects in front of the wearer. The Kinect's IR sensor gives us the depth of everything in the scene, which we feed into our algorithm to determine where potential obstacles are. We then pass those coordinates into Unity, which assigns each object a sound and renders it in 3D audio so the wearer can hear where objects are and avoid them.
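The depth-to-obstacle step can be sketched like this: scan a frame of millimetre depth readings and keep only the pixels that fall inside an avoidance band. This is a hypothetical simplification in Python (the real pipeline also clusters pixels into objects before handing them to Unity):

```python
def detect_obstacles(depth_frame, near_mm=400, far_mm=1500):
    """Scan a depth frame (2D list of millimetre readings, as produced by
    the Kinect's IR depth sensor) and return (row, col, depth) for each
    pixel close enough to matter. Readings below near_mm are treated as
    sensor noise, since the Kinect cannot measure that close."""
    obstacles = []
    for row, line in enumerate(depth_frame):
        for col, depth in enumerate(line):
            if near_mm <= depth <= far_mm:  # valid reading inside the avoidance band
                obstacles.append((row, col, depth))
    return obstacles
```

Each surviving coordinate would then become the position of an audio source in the Unity scene.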
Challenges we ran into
The Kinect produces a very noisy 3D map, and porting it into Unity was not an easy task; it took us most of our time. Mapping the 3D sound onto the Kinect's readings also didn't work at first, because we couldn't get the data into Unity and were trying to do it manually without much luck. Eventually we got the pipeline connected and had the 3D sound map up and running.
Accomplishments that we're proud of
We made something that can actually help real people, like Yousef! That feels awesome. We also overcame a bunch of obstacles with incompatible systems, picked up new languages, and learned sound design, all in a couple of days.
What we learned
We learned sound design, so we could make something you can bear hearing for extended periods of time. We also learned Unity, C#, and Visual Studio, and basically converted from Apple to Microsoft for a couple of days to learn and execute on this empowering idea.
What's next for Sound Sense
We would love a chance to put this on a HoloLens and give it to Yousef to try out; we're sure he would be thrilled! We would also like to add a couple more features to make it even more useful for the visually impaired community, such as taking a picture and sending it to DeepMind to tell the user what is in front of them. The second feature we would like to add is maps integration, so the user always knows which direction to head.