Inspiration

Translation technology is ubiquitous. Understanding English, Chinese, French, and many other languages is simply a Google Search away. However, one language is left out: sign language. No mainstream technology exists to convert sign language to a spoken language. As 3D cameras such as the Kinect and Intel RealSense become more popular, computer vision becomes increasingly accessible, and it is very likely that sign language translation will become mainstream in the near future.

What it does

We wanted to envision the translation technology that will be readily available in the future. So we built "Connect", a Windows application that converts American Sign Language to English in real time!

How we built it

We used the Microsoft Kinect to capture high-quality infrared data from the video stream. A third-party convex hull API draws contours around the hands, and our own computer vision algorithm tracks the hand movement and distinguishes between signs. We then fed the data to Microsoft Azure for machine learning with a multi-class neural network.
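For illustration, here is a minimal sketch of the contour-and-hull step in Python with OpenCV. This is not our actual code (the app itself is a Windows/Kinect application), so the library, the frame source, and the threshold value are all assumptions:

import cv2

def extract_hand_hull(ir_frame):
    """Return the convex hull around the largest bright blob in an
    8-bit infrared frame. The threshold of 128 is an assumed value;
    the real cutoff depends on the sensor and the signing distance."""
    # Separate the (bright, near) hand from the darker background.
    _, mask = cv2.threshold(ir_frame, 128, 255, cv2.THRESH_BINARY)
    # Trace the outlines of every foreground blob.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Assume the largest blob is the hand and wrap it in a convex hull.
    hand = max(contours, key=cv2.contourArea)
    return cv2.convexHull(hand)

A hull like this gives one compact hand outline per frame, which is the kind of shape a tracking step can then follow from frame to frame.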

Challenges we ran into

It was incredibly difficult to install any software due to the poor WiFi :( but fortunately we were able to get it done in time. Apart from that, it was difficult to differentiate between signs and to delimit where one sign ends and the next begins.
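One simple way to frame the delimiting problem (a hedged sketch, not the approach we shipped): treat stretches where the hand barely moves as pauses, and take each run of motion between pauses as one candidate sign. The speed threshold below is an illustrative guess:

import numpy as np

def split_signs(hand_positions, pause_speed=5.0):
    """Split a track of per-frame (x, y) hand positions into candidate
    sign segments. Frames whose frame-to-frame speed falls below
    pause_speed (pixels/frame) count as pauses; each maximal run of
    moving frames becomes one segment of (start, end) frame indices."""
    positions = np.asarray(hand_positions, dtype=float)
    speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    moving = speeds >= pause_speed
    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i                    # motion begins: open a segment
        elif not m and start is not None:
            segments.append((start, i))  # motion stops: close the segment
            start = None
    if start is not None:
        segments.append((start, len(moving)))
    return segments

In practice the speed signal is noisy, so smoothing it first (e.g. with a moving average) before thresholding would likely help.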

Accomplishments that we're proud of

We did all of this as a two-person team without knowing anything about computer vision or the Kinect. We felt like we took on a really difficult task and learned a lot.

What we learned

Computer vision and graphics processing.

What's next for Connect

Better algorithm. Faster translation.

Built With

Microsoft Kinect, Microsoft Azure Machine Learning, Windows