My team wanted to help the visually impaired. We realized this has been done before through object recognition and avoidance, so we decided to focus on a specific idea that had not: using the Kinect V2's object recognition and depth data for small objects and tracking. The user can ask through an earpiece, for example, "What is in front of me?" A voice then replies through the speaker: "Objects A, B, C, and D," making the user aware of their surroundings. The user can then say "Find Object A," and whether that is a cup, prescribed medicine, or any other item nearby, the program begins navigating the user's hand toward the object. To make this as intuitive as possible, we created a vibration sleeve with 8 motors and paired it with a Myo. By combining the two, we can direct the user through vibration regardless of arm orientation. Instead of previous methods that convey information through sound, we use vibration so the user is not distracted and can carry on conversations as normal.
What it does
For our demo, we decided to use a webcam and object recognition without depth data because we ran into problems with the Kinect V2. However, we got a prototype working with a 2D camera that demonstrates our functionality for future development. Using object detection and hand tracking, we change the vibration in the sleeve to correspond to the direction the hand should move, which makes guiding the hand very intuitive.
How I built it
Our team used many components to accomplish this task. We used the Voce library for speech-to-text and text-to-speech, and the webcam with OpenCV for filtering and object recognition. We implemented the object recognition with Haar cascade classifiers trained on the objects to be recognized. After recognizing an object of choice and the hand, we compute the distance between them along the x and y axes. Unfortunately, we do not have depth data, but the hardware still accounts for that dimension so it can be used in future development. Using serial data transmission, with the Myo also connected to the computer, we pass data to an Arduino that controls the vibration sleeve. We also added provisions such as a Kinect V2 helmet mount to show where future development is going, and we are working on including Firebase: we plan to host a server of .xml image-recognition files.
Challenges I ran into
Our team ran into challenges configuring the Kinect V2 to work with OpenCV and accessing the depth stream, and we had compatibility issues with OpenNI. We worked through these issues to develop a prototype that still demonstrates our concept without the Kinect or depth data.
Accomplishments that I'm proud of
I am proud of devising a very intuitive way to help the blind. I like that we are creating something that helps the visually impaired perceive the environment around them in a new way: a user can instantly recognize the objects around them and easily maneuver toward them.
What I learned
Through this project I learned a lot about vision processing and C++ in Visual Studio. We did a lot of work integrating the software with the hardware, and I learned a great deal from it. I created the Arduino vibration sleeve, and I learned a lot from my teammates on the programming side. Our team also learned the difficulties involved in vision processing and object recognition. I think we all gained valuable skills that we can use to improve the project later.
What's next for Innovation InSight
We plan to improve everything we did during MHacks to continue helping the visually impaired. We plan to get the system working with the Kinect V2, using depth data and mounting the Kinect on a helmet for a first-person view. We also plan to host a database of uploaded .xml files for object recognition, and to bring this technology to smaller applications as Kinect-class sensors shrink in size, which will let us use less power and keep the device lightweight.