Our inspiration came from real-life problems surrounding the inability of people who are deaf or who struggle with speech, such as some people with autism, to communicate. We were also inspired by watching the special gloves that can control a mechanical hand. Combining these two ideas, we came up with our concept: motion signaling, similar to the glove, but signing letters of American Sign Language and having the computer recognize the different signs using Python, OpenCV, and Keras. This would help solve the problem of being unable to communicate.

What our program does is take input from the camera through OpenCV and, through our code, translate each sign into a letter (a minimal sketch of this pipeline appears at the end of this write-up). This allows people on FaceTime or other video calls to communicate despite the language barrier.

We faced many challenges along the way, starting with OpenCV: we had to learn its basics before we could apply it to our project. We also struggled with lighting and background, as both strongly affected the output of our code. Furthermore, neither of us had much programming expertise, so we struggled throughout the day, but as a result we both learned as much as humanly possible in that time.

We are both really proud that we were finally able to overcome these obstacles and learn so much in less than 12 hours, including a great deal about OpenCV.

What's next for Technology and Sign Language is to store and display all our output as text, similar to how voice-to-text works in messaging today, but for sign language.
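Below is a minimal sketch of the capture-and-classify loop described above: grab a frame with OpenCV, run it through a Keras classifier, and overlay the predicted letter on the live feed. The model file name (`asl_model.h5`), the 26-letter label order, the input size, and the fixed hand region are all assumptions for illustration, not the project's exact code.

```python
# Minimal sketch: webcam frames -> Keras classifier -> predicted ASL letter.
# Assumes a model trained on 64x64 RGB images of ASL letters, one class per
# letter A-Z (static signs; J and Z involve motion and are a simplification).
import string

import cv2
import numpy as np
from tensorflow.keras.models import load_model

LETTERS = list(string.ascii_uppercase)  # assumed label order of the classifier
IMG_SIZE = 64                           # assumed input resolution of the model

model = load_model("asl_model.h5")      # hypothetical trained Keras model

cap = cv2.VideoCapture(0)               # read frames from the default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Crop a fixed region where the signing hand is expected (assumption),
    # then resize and normalize to match the model's training data.
    roi = frame[100:400, 100:400]
    img = cv2.resize(roi, (IMG_SIZE, IMG_SIZE)).astype("float32") / 255.0
    probs = model.predict(img[np.newaxis], verbose=0)[0]
    letter = LETTERS[int(np.argmax(probs))]

    # Draw the hand region and the predicted letter on the live feed.
    cv2.rectangle(frame, (100, 100), (400, 400), (0, 255, 0), 2)
    cv2.putText(frame, letter, (100, 90), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 0), 2)
    cv2.imshow("ASL recognizer", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Cropping a fixed region and normalizing pixel values keeps the input consistent, which matters here because, as noted above, lighting and background strongly affect the prediction quality.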

Built With

python, opencv, keras
