Inspiration

We have created a device that lets blind users walk freely and notice obstacles much as a person with functioning senses would. Blind people commonly use walking (probing) canes to avoid colliding with obstacles, but carrying a cane is a nuisance, occupies a hand, and cannot detect common hazards such as a horizontal pole raised above the ground. Ultimately, we believe this technology gives wearers the confidence to roam more freely. After witnessing blind students' difficulties navigating around campus, our goal is to replace the cane for walking, free a blind person's hands, and inspire a strong sense of pride.

What it does

Now You See Me gives the wearer a relatable tactile grid, with resolution limited only by human physical perception, that makes walking as enjoyable for them as it is for everyone else. Our final concept is a nearly invisible device that does not draw attention to the user the way a cane does.

The most important information is what lies directly in front of us (and our feet!). For this we combine depth sensing (for distance to obstacles and warnings) with a camera for object recognition, so the system can distinguish humans from other objects. Given our combined objectives of conveying key spatial information, notifying the wearer of nearby people, and gauging proximity to objects, this is how our technology differs from the rest. We use an electrode array, driven with stimulation patterns inspired by neuroscience, to create a "fake" sensation on the wearer's skin, effectively a tactile interpretation of the scene. When a person is detected in the direction the wearer is facing, motor vibration feedback conveys that person's emotion as estimated by Google Cloud Platform.
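As a rough sketch of how this mapping could work (our own illustration under assumed names, not the exact device firmware): the depth frame is downsampled into a coarse grid of electrode intensities, with closer obstacles producing stronger stimulation, and a person detection triggers the vibration channel. `drive_electrodes` and `pulse_motor` are hypothetical stand-ins for the hardware interface.

```python
import numpy as np

def depth_to_tactile(depth_frame: np.ndarray, person_detected: bool,
                     grid_shape=(2, 2), warn_distance_m=1.5):
    """Downsample a depth frame (meters) into a coarse grid of electrode intensities."""
    h, w = depth_frame.shape
    gh, gw = grid_shape
    intensities = np.zeros(grid_shape)
    for i in range(gh):
        for j in range(gw):
            cell = depth_frame[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw]
            nearest = cell.min()  # closest obstacle in this region of the view
            # Closer obstacles -> stronger stimulation, clipped to [0, 1].
            intensities[i, j] = np.clip(1.0 - nearest / warn_distance_m, 0.0, 1.0)
    return intensities, person_detected

# drive_electrodes(intensities); if person_detected: pulse_motor()  # hypothetical drivers
```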

How I built it

We built this technology to be fast, rather than relying on motors alone for feedback. The design is small enough to sit on a more convenient part of the body, and it is modular: because research shows that people have varying levels of skin sensitivity, electrodes can be added or removed as needed. Studies that interpret vibrations as a form of navigation information support our approach of vibrating toward a nearby object to signal which direction to avoid. In the end we made a prototype that combines a dual camera (for depth and objects), OpenCV code for object detection, and a shock pattern across 4 electrodes that tells the wearer where the object is. Depth is then relayed through vibrations to give a more immediate reaction stimulus (see the pipeline sketch after this list).

- Works completely offline, so it is fast and real time
- Uses both depth and computer vision
- Connects to the cloud for a continuously updated model that keeps improving, without being reliant on it
- Rapid haptic feedback and patterns to be interpreted
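A minimal sketch of that prototype pipeline, assuming OpenCV stereo block matching for depth and any detector that returns bounding boxes; the detector callable and `fire_electrode` driver are placeholders, not our shipped code.

```python
import cv2
import numpy as np

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def fire_electrode(index):
    """Hypothetical driver call for the 4-electrode array."""
    print(f"electrode {index} fired")

def process_frame(left_gray, right_gray, detector):
    # 1. Depth: disparity from the dual camera; larger disparity = closer object.
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # 2. Objects: detector is any callable returning (x, y, w, h) boxes.
    boxes = detector(left_gray)

    # 3. Map each detection to one of 4 electrodes by horizontal position.
    width = left_gray.shape[1]
    for (x, y, w, h) in boxes:
        electrode = min(3, int(4 * (x + w / 2) / width))  # 0..3, left to right
        fire_electrode(electrode)

    # 4. Vibration strength derived from the nearest valid disparity.
    valid = disparity[disparity > 0]
    return float(np.percentile(valid, 95)) if valid.size else 0.0
```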

Challenges I ran into

There is a tug of war between improving the wearer's situation and the complexity of describing the entire visual field.

- Deciding the best way to represent the visual field without being too complex while still signaling key information: whether to use a grid or a graded system (contrasted in the sketch after this list)
- Finding hardware that would let us combine proximity sensing and computer vision to simulate an eye
- Building the depth-perception AI program
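To make the grid-versus-graded trade-off concrete (an illustrative example of ours, not the final design): a grid encoding keeps one intensity per cell and preserves direction, while a graded encoding collapses everything to a single value that is easier to interpret but loses where the obstacle is.

```python
import numpy as np

def grid_encoding(depth, rows=3, cols=3):
    """One intensity per cell: spatially informative, but more for the wearer to learn."""
    h, w = depth.shape
    return np.array([[depth[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols].min()
                      for c in range(cols)] for r in range(rows)])

def graded_encoding(depth):
    """A single graded value (nearest obstacle): simple, but loses direction."""
    return depth.min()
```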

Accomplishments that I'm proud of

Because the device is modular, we can tailor it to the user's degree of visual impairment. It is also relevant to everyone: its depth perception provides a way to navigate in the dark, while camera feedback in daytime adds object recognition for navigation. We are proud of the depth of our research, which took into account case-by-case scenarios (the option to switch functions on and off, sketched below), learning time, differences in sensitivity, and the best ways and places to relay camera information tactilely. The device also works with the cloud to keep getting better.
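One way the on/off switching could look in software (the mode names here are assumptions for illustration, not the final interface):

```python
from dataclasses import dataclass

@dataclass
class DeviceModes:
    depth_feedback: bool = True       # works in the dark
    object_recognition: bool = True   # camera-based, most useful in daylight
    person_alerts: bool = True        # can be switched off in crowded spaces

# e.g. a night walk: rely on depth only
modes = DeviceModes(object_recognition=False, person_alerts=False)
```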

What I learned

We did a lot of research to understand the mechanisms used previously and the different aspects of relaying information in a tactile manner, while avoiding as many case-by-case drawbacks as possible. We also learned to balance cool technology with practicality and to avoid the trap of losing sight of the user's best interest.

What's next for Now You See Me

Three motors could cover the left, middle, and right sides, vibrating when something is proximal (see the sketch below). In this way our project is innovative because it offers two channels of information: depth through motors and objects through shocks. We could use it to detect sidewalks, with depth perception informing where to step, and to detect people and hands for applications like handshakes or handing something over. We can build even more ambitious systems, such as putting deaf and blind people in communication using Leap Motion, which tracks all ten fingers and has begun to be used to translate sign language.

A big question is how to relay the information to the person. Both tactile and auditory feedback are preferred channels, but the big issue with audio is that it takes away information the person could be getting by listening to their surroundings (like a car coming down the road). All this information could also become confusing, and constantly shifting attention between machine output and people speaking to you could be difficult. That is why we have devised a method to solve this issue: a sensor-based system that lets the user selectively enable recognition tools, choosing which objects are relayed to them tactilely. Helping someone navigate the world is part of how we are social creatures, so beyond audio relay, being told when people are around you is important. We would not want this to be overwhelming either, so it could be controlled by a switch, since constant person alerts would not be useful in a crowded space.
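A hedged sketch of the three-motor idea: split the depth frame into left, middle, and right thirds and vibrate the motor facing the nearest obstacle. `set_motor` stands in for whatever PWM driver the real hardware would use.

```python
import numpy as np

def set_motor(index, strength):
    """Hypothetical PWM driver: 0 = left, 1 = middle, 2 = right."""
    print(f"motor {index}: {strength:.2f}")

def three_motor_feedback(depth_frame, warn_distance_m=1.5):
    thirds = np.array_split(depth_frame, 3, axis=1)  # left, middle, right columns
    for motor, region in enumerate(thirds):
        nearest = region.min()                        # closest obstacle on that side
        strength = max(0.0, 1.0 - nearest / warn_distance_m)
        set_motor(motor, strength)
```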
