Inspiration

We want to provide a solution that gives independence and confidence to the visually impaired. We want this solution to be affordable to people all around the world, and we want it to have as small a footprint as possible. We want to tackle navigation and spatial awareness, a difficult everyday problem for the visually impaired, with the everyday technologies we all carry in our pockets today.

What it does

STIC began with a humble selfie stick. We added an Arduino, ultrasonic sensors, and vibration motors to the selfie stick to give the user real-time haptic feedback about their surroundings. The sensor readings drive the motors, increasing vibration output the closer the user gets to an obstacle. We then mounted a cellphone on the selfie stick for its camera and mobile capabilities, and used IBM Watson's visual recognition service to analyze pictures taken from the stick and identify what is in them. Finally, we added text-to-speech software to tell the user what is in the photo.
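
To give a sense of how the haptic loop works, here is a minimal Arduino-style sketch. It assumes an HC-SR04-style ultrasonic sensor and a single vibration motor on specific pins; our actual wiring, sensor count, and distance thresholds may differ.

```
// Minimal sketch of the proximity-to-vibration loop.
// Assumes an HC-SR04-style ultrasonic sensor (TRIG on pin 9, ECHO on pin 10)
// and a vibration motor driven through a transistor on PWM pin 6.
const int TRIG_PIN  = 9;
const int ECHO_PIN  = 10;
const int MOTOR_PIN = 6;
const long MAX_DISTANCE_CM = 200;  // ignore obstacles farther than this

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  // Trigger a 10-microsecond ultrasonic pulse.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  // Echo time (microseconds) -> distance in cm (~58 us per cm, round trip).
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // 30 ms timeout
  long distance = (duration == 0) ? MAX_DISTANCE_CM : duration / 58;

  // Closer obstacle -> stronger vibration (0 = off, 255 = full PWM duty cycle).
  int strength = 0;
  if (distance < MAX_DISTANCE_CM) {
    strength = map(distance, 0, MAX_DISTANCE_CM, 255, 0);
  }
  analogWrite(MOTOR_PIN, strength);

  delay(50);  // roughly 20 readings per second
}
```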

How we built it

STIC uses inexpensive ultrasonic sensors and vibration motors to provide real-time haptic feedback to visually impaired users. What separates our adaptive device from other innovative solutions is that it is also a computer vision system that leverages the computing power of IBM Watson. An everyday cellphone (an iPhone) is mounted on the selfie stick and used to snap a photo at the click of a button. This photo is stored dynamically, not on the user's device, but on a free cloud service provided by Cloudinary. The URL of this photo is then handed over to Watson, which interprets the photo and returns the results to the user via a Bluetooth headset and IBM's Text to Speech service. The results include a dictionary of labels describing what IBM Watson thinks is in the picture, each with a confidence score.
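
To illustrate the Watson step, here is a rough, self-contained sketch of the classify-by-URL request, written with libcurl for clarity rather than the native iPhone code we actually used. The endpoint, version date, and api_key query parameter reflect the Visual Recognition v3 REST API and are placeholders, not our exact configuration.

```
// Rough sketch of the classify-by-URL request (libcurl used here for
// illustration only; the real app makes the equivalent request from native
// iPhone code). Endpoint, version date, and api_key-style auth are
// assumptions based on the Visual Recognition v3 REST API.
#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl write callback: append the response body to a std::string.
static size_t appendBody(char* data, size_t size, size_t nmemb, void* userdata) {
  static_cast<std::string*>(userdata)->append(data, size * nmemb);
  return size * nmemb;
}

int main() {
  // Placeholder credentials and image location -- substitute your own.
  const std::string apiKey   = "YOUR_WATSON_API_KEY";
  const std::string imageUrl = "https://res.cloudinary.com/YOUR_CLOUD/image/upload/photo.jpg";

  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL* curl = curl_easy_init();
  if (!curl) return 1;

  // Watson classifies by URL (not raw image bytes) for this call, which is
  // why the photo is first pushed to Cloudinary.
  char* escapedUrl = curl_easy_escape(curl, imageUrl.c_str(), 0);
  std::string requestUrl =
      "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify"
      "?version=2016-05-20&api_key=" + apiKey + "&url=" + escapedUrl;
  curl_free(escapedUrl);

  std::string response;
  curl_easy_setopt(curl, CURLOPT_URL, requestUrl.c_str());
  curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, appendBody);
  curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

  if (curl_easy_perform(curl) == CURLE_OK) {
    // The JSON body lists the detected "classes", each with a confidence
    // score, e.g. {"class": "person", "score": 0.87}; this is what gets
    // spoken back to the user.
    std::cout << response << std::endl;
  }

  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return 0;
}
```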

Challenges we ran into

- Integrating IBM Watson's visual recognition service with native iPhone development code.
- Integrating Cloudinary's image hosting service with native iPhone code and IBM Watson, since Watson's visual recognition only works from an image URL rather than local image data (see the upload sketch below).
- Configuring the iPhone's hardware beyond its default settings to serve our application while keeping setup and use friendly for the user.
- Integrating and programming the sensor and vibration hardware with the Arduino.
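
For context, the Cloudinary step boils down to an unsigned multipart upload that returns a public URL for the photo. The sketch below shows the shape of that request with libcurl (again, the real app does this from native iPhone code); the cloud name and upload preset are placeholders.

```
// Rough sketch of an unsigned Cloudinary upload (libcurl for illustration;
// the real app uploads from native iPhone code). Cloud name and upload
// preset are placeholders.
#include <curl/curl.h>
#include <iostream>
#include <string>

static size_t appendBody(char* data, size_t size, size_t nmemb, void* userdata) {
  static_cast<std::string*>(userdata)->append(data, size * nmemb);
  return size * nmemb;
}

int main() {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL* curl = curl_easy_init();
  if (!curl) return 1;

  // Unsigned upload endpoint: POST a multipart form with the image file and
  // an upload preset configured in the Cloudinary dashboard.
  curl_easy_setopt(curl, CURLOPT_URL,
                   "https://api.cloudinary.com/v1_1/YOUR_CLOUD_NAME/image/upload");

  curl_mime* form = curl_mime_init(curl);

  curl_mimepart* filePart = curl_mime_addpart(form);
  curl_mime_name(filePart, "file");
  curl_mime_filedata(filePart, "photo.jpg");  // the photo snapped by the phone

  curl_mimepart* presetPart = curl_mime_addpart(form);
  curl_mime_name(presetPart, "upload_preset");
  curl_mime_data(presetPart, "YOUR_UNSIGNED_PRESET", CURL_ZERO_TERMINATED);

  std::string response;
  curl_easy_setopt(curl, CURLOPT_MIMEPOST, form);
  curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, appendBody);
  curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

  if (curl_easy_perform(curl) == CURLE_OK) {
    // The JSON response includes a public "secure_url" for the image, which
    // is the URL then handed to Watson for classification.
    std::cout << response << std::endl;
  }

  curl_mime_free(form);
  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return 0;
}
```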

Accomplishments that we're proud of

We created a piece of technology that could improve the quality of life for many people. At this hackathon, we really stretched our abilities and grew as programmers. Our team has come away from this experience with new skill sets.

What we learned

We learned how to use IBM Watson and integrate its APIs with native iPhone development code. We also improved our proficiency with Arduino circuitry and programming. We discovered how machine learning can be integrated with embedded systems, and how that can help people with disabilities interact more easily with their community. We especially learned how to build efficient technology with the resources available in a limited time span.

What's next for Sensor Technology Integrated Cane (STIC)

We would like to polish the frame to increase usability. We hope that STIC can be brought to people in need and help them live their best lives. We foresee further usability improvements coming from testing with real users.

Built With

Arduino, Cloudinary, IBM Watson (Visual Recognition and Text to Speech), iPhone, Bluetooth, ultrasonic sensors, and vibration motors.