Learning sign language can be a difficult experience. Without a dedicated tutor, learning complex 3D gestures from a book is frustrating and often leads to errors in the subtle nuances of ASL.

Using the Kinect's gesture recognition and IR-based 3D modeling, we can interpret sign language gestures (as well as others, such as military hand signals) and teach them easily and accurately.
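As a rough illustration of the idea, the sketch below compares a learner's tracked upper-body joints against a reference pose recorded from a tutor demonstration. The joint names, coordinates, and helper functions are assumptions for illustration only, not the Kinect SDK's actual API; a real implementation would pull these joints from the skeletal tracking stream.

```python
import numpy as np

# Hypothetical joint names; a real Kinect skeleton frame exposes similar joints,
# but this dict-based pose format is only a stand-in for illustration.
JOINTS = ["shoulder_left", "elbow_left", "wrist_left",
          "shoulder_right", "elbow_right", "wrist_right"]

def normalize(pose):
    """Center the pose on the shoulder midpoint and scale by shoulder width,
    so the comparison is independent of where the signer stands."""
    pts = np.array([pose[j] for j in JOINTS], dtype=float)
    center = (pose["shoulder_left"] + pose["shoulder_right"]) / 2.0
    scale = np.linalg.norm(pose["shoulder_left"] - pose["shoulder_right"]) or 1.0
    return (pts - center) / scale

def pose_error(live_pose, reference_pose):
    """Mean distance between corresponding normalized joints; a small value
    means the learner's arm positions closely match the reference sign."""
    return float(np.mean(np.linalg.norm(
        normalize(live_pose) - normalize(reference_pose), axis=1)))

# Example usage with made-up camera-space coordinates (meters):
live = {j: np.array(v) for j, v in {
    "shoulder_left": (-0.2, 0.4, 2.0), "elbow_left": (-0.3, 0.1, 2.0),
    "wrist_left": (-0.3, -0.1, 1.8), "shoulder_right": (0.2, 0.4, 2.0),
    "elbow_right": (0.3, 0.1, 2.0), "wrist_right": (0.3, -0.1, 1.8)}.items()}
reference = live  # in practice, loaded from a recorded tutor demonstration
print(pose_error(live, reference))  # 0.0 means a perfect match; the feedback threshold is tunable
```

A matching threshold on this error could drive the teaching feedback loop: show the reference sign, track the learner, and highlight which joints are out of place.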

While the Kinect currently does not support individual finger tracking, we hope to eventually write an API that uses the raw depth data to track individual fingers, turning the project into a fully featured sign language interpreter and tutor.
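A first step toward that API might be segmenting the hand directly from the raw depth frame before attempting fingertip detection. The sketch below is a minimal, assumed approach (the function name, the idea that the signing hand is the closest surface to the sensor, and the millimetre depth format are all simplifying assumptions), not a finished finger tracker; contour and convexity analysis on the resulting mask would follow.

```python
import numpy as np

def segment_hand(depth_mm, band_mm=80):
    """Very rough hand segmentation from a raw depth frame (2D array of
    millimetre values, 0 = no reading). Assumes the signing hand is the
    closest surface to the sensor and keeps pixels within band_mm of it."""
    valid = depth_mm[depth_mm > 0]
    if valid.size == 0:
        return np.zeros_like(depth_mm, dtype=bool)
    nearest = valid.min()
    return (depth_mm > 0) & (depth_mm < nearest + band_mm)

# Example with a synthetic 480x640 frame: background at 2 m, a "hand" patch at 0.8 m.
frame = np.full((480, 640), 2000, dtype=np.uint16)
frame[200:260, 300:360] = 800
mask = segment_hand(frame)
print(mask.sum(), "hand pixels")  # 3600 pixels from the synthetic patch
```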
