Sign 2 Speech detects American Sign Language and translates it into spoken English.

We use a Leap Motion controller, a basic HTML page, and a Raspberry Pi 3 mounted to the user with 3D-printed clips. The Leap Motion's video signal is sent to a desktop computer, where machine learning-based image recognition identifies the user's signs. The prototype is in a "wearable" form factor.

We leveraged LeapTrainer, an open-source project for motion recognition on the Leap. We hope to further train the system to understand more ASL words and phrases.
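At its core, LeapTrainer-style recognition is template matching: recorded gestures are normalized for position and scale, then compared against stored training examples. The sketch below illustrates that idea in miniature; the function names and structure are our own for illustration, not the actual LeapTrainer API (which also resamples gestures to a fixed length before matching).

```javascript
// Normalize a recorded gesture (an array of [x, y, z] palm positions) so
// matching is invariant to where in space, and at what scale, the sign
// was performed.
function normalize(points) {
  const n = points.length;
  const mean = [0, 0, 0];
  for (const p of points) for (let i = 0; i < 3; i++) mean[i] += p[i] / n;
  const centered = points.map(p => p.map((v, i) => v - mean[i]));
  const scale = Math.max(...centered.map(p => Math.hypot(...p))) || 1;
  return centered.map(p => p.map(v => v / scale));
}

// Mean point-to-point distance between two equal-length gestures.
function distance(a, b) {
  let d = 0;
  for (let i = 0; i < a.length; i++) {
    d += Math.hypot(...a[i].map((v, j) => v - b[i][j]));
  }
  return d / a.length;
}

// Classify a gesture against stored training templates; matches worse
// than the threshold are rejected so unknown motions are not mislabeled.
function recognize(gesture, templates, threshold = 0.5) {
  const g = normalize(gesture);
  let best = { name: null, dist: Infinity };
  for (const [name, t] of Object.entries(templates)) {
    const d = distance(g, normalize(t));
    if (d < best.dist) best = { name, dist: d };
  }
  return best.dist <= threshold ? best.name : null;
}
```

Training a new sign then amounts to recording a few example motions per word and storing their normalized forms as templates.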
