Inspiration

We were inspired by the live translation of spoken languages on Google's Pixel Buds and wanted to bring a similar feature to signed languages.

What it does

It takes a feed from a webcam and sends it to a server, which extracts the hand gesture, classifies it, and returns the result as text. We currently support the ASL letter gestures.
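
As a rough illustration of the pipeline, here is a minimal client-side sketch: it grabs webcam frames with OpenCV and posts them to a recognition server. The server URL, the `/predict` endpoint, and the JSON response format are assumptions for illustration, not the actual deployment.

```python
# Client-side sketch of the signCV pipeline (endpoint and response are hypothetical).
import cv2
import requests

SERVER_URL = "http://localhost:5000/predict"  # assumed endpoint for illustration

cap = cv2.VideoCapture(0)  # open the default webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Encode the frame as JPEG and post it to the recognition server
        _, buf = cv2.imencode(".jpg", frame)
        resp = requests.post(SERVER_URL, files={"frame": buf.tobytes()})
        letter = resp.json().get("letter", "")  # assumed JSON key
        # Overlay the predicted ASL letter on the live feed
        cv2.putText(frame, letter, (10, 40), cv2.FONT_HERSHEY_SIMPLEX,
                    1.0, (0, 255, 0), 2)
        cv2.imshow("signCV", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```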

How we built it

We used the TensorFlow and Keras APIs for image recognition and OpenCV to extract the hand gesture from the video feed. We also tried to use Google Cloud Platform to speed up model training with GPU acceleration.
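
For a sense of the model side, below is a minimal Keras sketch of the kind of small CNN that can classify ASL letters; the layer sizes, 64x64 grayscale input, and 26-class output are illustrative assumptions rather than our exact architecture.

```python
# Sketch of a small Keras CNN for ASL letter classification (sizes are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(img_size=64, num_classes=26):
    model = models.Sequential([
        layers.Input(shape=(img_size, img_size, 1)),      # grayscale hand crop
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                               # regularization against overfitting
        layers.Dense(num_classes, activation="softmax"),   # one class per ASL letter
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training would look something like:
# model = build_model()
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```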

Challenges we ran into

The dataset we had was extremely large but only contained a few people's hands, so the model often began overfitting to those few hand shapes. Furthermore, we were unable to fully merge the frontend and backend, which prevented the translated text from being pushed back to the website.

Accomplishments that we're proud of

We implemented a live video processor and extractor that pulls hand frames out of the webcam feed in real time.
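
A simplified sketch of that extractor is below: it crops a fixed region of interest from each webcam frame and thresholds it so the hand stands out from the background. The ROI coordinates and thresholding choices are illustrative assumptions, not the exact values we used.

```python
# Sketch of live hand-region extraction with OpenCV (ROI and thresholds are assumptions).
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:300, 100:300]                      # fixed box where the hand goes
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu thresholding separates the hand from the background
    _, mask = cv2.threshold(blur, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    cv2.rectangle(frame, (100, 100), (300, 300), (255, 0, 0), 2)
    cv2.imshow("feed", frame)
    cv2.imshow("hand", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```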

What's next for signCV

We plan to train the model using a larger, more varied dataset.

Built With

keras · opencv · tensorflow
