We wished to build a meaningful project with novel technology that would improve people's lives.
What it does
It uses a Microsoft Kinect to capture depth in a scene, allowing the carrier of the tablet to feel a friction-modulated representation of the region in front of them.
How we built it
We used a Microsoft Kinect. Because the newer models were unavailable, we borrowed an older one from a friend on campus and found an older API for it. We used that API with OpenCV in a Flask server on an x86 system, and streamed the video to a Nexus tablet with a haptic-feedback screen.
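The core of the pipeline is turning each depth frame into something the haptic screen can render: nearby regions become bright grayscale values, which are then thinned into sparse points. Below is a minimal sketch of that step, assuming the depth frame arrives as a NumPy array of distances in millimetres; the function name, the near/far thresholds, and the grid step are our own illustration, not the team's actual code.

```python
import numpy as np

def depth_to_sparse_points(depth_frame, near_mm=500, far_mm=2000, grid_step=8):
    """Convert a raw depth frame into a grayscale image plus sparse (x, y) points.

    Nearer objects map to brighter pixels; sampling on a coarse grid and
    thresholding thins the dense image into the sparse point list a
    friction-modulating screen needs.
    """
    # Normalize depth to 0-255 grayscale: nearer objects become brighter.
    clipped = np.clip(depth_frame, near_mm, far_mm).astype(np.float32)
    gray = (255 * (far_mm - clipped) / (far_mm - near_mm)).astype(np.uint8)

    # Sample on a coarse grid and keep only sufficiently close pixels.
    points = [
        (x, y)
        for y in range(0, gray.shape[0], grid_step)
        for x in range(0, gray.shape[1], grid_step)
        if gray[y, x] > 128
    ]
    return gray, points

# Example: a fake 480x640 depth frame with a "near" object in the center.
frame = np.full((480, 640), 1900, dtype=np.uint16)   # background ~1.9 m away
frame[200:280, 280:360] = 600                        # object ~0.6 m away
gray, points = depth_to_sparse_points(frame)
```

With these thresholds, only the simulated near object survives the cut, so every returned point falls inside its bounding box.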
Challenges we ran into
- the scarcity of Kinects and the long unavailability of the Tanvas
- an outdated Kinect library
- the new, still sparsely documented programming and operation of the Tanvas
- connecting the video stream from the server to the tablet
- converting the grayscale image to sparse points for the Tanvas
Accomplishments that we're proud of
- learned new technology
- used a variety of languages
What we learned
- Android and Kinect development
- new technology
What's next for Touch Vision
- hopefully, to inspire others to use new technology to help people around the world!