To keep people connected, using resources that everyone already has

What it does

A sign language translator that uses a webcam to recognize a set of signs, making communication more accessible to everyone

How we built it

Using Python and Python libraries (such as OpenCV, TensorFlow, and Image), together with Azure technologies.
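A minimal sketch of the kind of per-frame preprocessing this pipeline needs before a frame can be fed to a model. Assumptions: numpy is available, frames arrive as H×W×3 uint8 arrays (as from `cv2.VideoCapture`), and the crop/downsample here is a naive stand-in for `cv2.resize`; sizes and names are illustrative, not the project's actual code.

```python
import numpy as np

def preprocess(frame, size=64):
    """Center-crop a webcam frame to a square and downsample it to
    (size, size), scaled to [0, 1] floats ready for a model input."""
    h, w, _ = frame.shape
    s = min(h, w)
    y0, x0 = (h - s) // 2, (w - s) // 2
    crop = frame[y0:y0 + s, x0:x0 + s]
    step = max(1, s // size)  # naive strided downsample (stand-in for cv2.resize)
    small = crop[::step, ::step][:size, :size]
    return small.astype(np.float32) / 255.0

# Example: a 480x640 frame like one read from a webcam
frame = np.zeros((480, 640, 3), dtype=np.uint8)
x = preprocess(frame)
print(x.shape)  # (64, 64, 3)
```

In the real project, OpenCV would grab each frame and a resize call would replace the strided slicing; the normalization to [0, 1] floats is the part most models expect.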

Challenges we ran into

Identifying individual frames of the video stream, tracking the sign across frames, and approximating the final prediction.
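One common way to "approximate" a prediction across frames is a majority vote over a sliding window of per-frame classifications, which filters out single-frame misclassifications. This is a hedged sketch of that idea, not the project's actual tracking code; the class name and window size are hypothetical.

```python
from collections import Counter, deque

class SignSmoother:
    """Keep the last `window` per-frame predictions and return the
    majority vote, so one noisy frame cannot flip the output."""
    def __init__(self, window=15):
        self.recent = deque(maxlen=window)

    def update(self, label):
        self.recent.append(label)
        return Counter(self.recent).most_common(1)[0][0]

smoother = SignSmoother(window=5)
result = None
for pred in ["A", "A", "B", "A", "B"]:  # noisy per-frame predictions
    result = smoother.update(pred)
print(result)  # "A" wins the vote 3-2
```

The window size trades latency for stability: a larger window rejects more noise but reacts more slowly when the signer switches to a new sign.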

Accomplishments that we're proud of

It runs without internet access, entirely on the local machine.

What we learned

How to deploy TensorFlow at the edge (running offline and locally) and how to use TensorFlow with a live video stream
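The edge pattern above boils down to: load model weights from local disk once, then run inference inside the frame loop with no network calls. To keep this sketch self-contained it uses a tiny numpy linear classifier as a stand-in for the locally loaded TensorFlow model; the labels, shapes, and functions are all hypothetical.

```python
import numpy as np

# Stand-in for a locally loaded TensorFlow model: a fixed linear
# classifier over flattened pixels (hypothetical; the real project
# would load trained weights from disk, still with no network needed).
LABELS = ["hello", "thanks", "yes"]
rng = np.random.default_rng(0)
W = rng.standard_normal((len(LABELS), 64 * 64 * 3)).astype(np.float32)

def predict(x):
    scores = W @ x.ravel()
    return LABELS[int(np.argmax(scores))]

def classify_stream(frames):
    """Run inference frame by frame, as a live webcam loop would."""
    return [predict(f.astype(np.float32) / 255.0) for f in frames]

frames = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(3)]
labels = classify_stream(frames)
print(len(labels))  # one prediction per frame
```

With a real model the `predict` call would be a saved-model or TensorFlow Lite invocation, which is the usual route for running TensorFlow offline on edge devices.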

What's next for Hands-On!

Upgrade the Azure model to be both more precise and to recognize more signs.

Built With

Python, OpenCV, TensorFlow, Azure
