We take things for granted every day - the homes we live in, the technology at our disposal, and more. We decided to tackle a problem at a deeper level, creating Vysion, a text and object recognition camera that can help inform people who are blind or visually impaired about their surroundings. While solutions such as braille exist today, we believe they fall short in practice. Not only does the vast majority of everyday situations lack braille signage, but those who are visually impaired often have difficulty locating braille where it does exist, rendering the 'solution' useless. Vysion can help revolutionize the way the blind and visually impaired interact with the world.
What it does
Vysion uses a mobile phone as an IP webcam, grabs a frame from the stream, runs it through the Clarifai API and Google Cloud Vision for object and text recognition, sorts the results by confidence, and reads the top result aloud. The current prototype uses a phone attached to an open GearVR, but the goal for the final product is a seamless, sleek pair of spectacles with a built-in camera.
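The core of that loop - take the labels a vision API returns, sort by confidence, and pick the best guess to speak - can be sketched like this (the label data and function names here are illustrative, not the actual Vysion code):

```javascript
// Hypothetical label results, shaped loosely like a Clarifai/Google
// Vision response: each recognized concept carries a confidence score.
const labels = [
  { name: 'stop sign', confidence: 0.97 },
  { name: 'pole', confidence: 0.61 },
  { name: 'street', confidence: 0.88 },
];

// Sort by confidence (descending) and return the top result.
function bestGuess(results) {
  return [...results].sort((a, b) => b.confidence - a.confidence)[0];
}

const top = bestGuess(labels);
// This string would then be handed to a text-to-speech engine.
console.log(`I see a ${top.name}`); // prints "I see a stop sign"
```

In the real app the `labels` array comes back from the API call, but the sort-and-pick step is the same.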
How I built it
Vysion is a combination of node.js, HTML/CSS, OpenCV, Google Cloud Vision API and Clarifai API.
Challenges I ran into
- We didn't know what to make
- The camera wouldn't work
- Callbacks stopped working
- We had 428102 tabs open, collectively
- The Clarifai API doesn't allow for local image addresses
Accomplishments that I'm proud of
- We did something and then it worked
- We didn't sleep. Like at all.
What I learned
- We got better at node.js
- We learned how to use cool APIs
What's next for Vysion
- We hope to integrate Vysion as an extension for the Snapchat Spectacles, so it can be more accessible and help more people.