Inspiration

Millions of individuals around the world communicate using American Sign Language, or ASL. Despite this, awareness and understanding of ASL remain low among the general public, putting individuals and entire communities at a serious disadvantage.

Introducing Eyesign.

What it does

Eyesign uses two TensorFlow-trained models to recognize ASL and translate it into text in real time, using only your phone camera.
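
As a rough illustration of the recognition loop, here is a minimal inference sketch. It only shows the letter-classification step (not how the work is split between the two models), and the model file name, input size, and label set are assumptions; the real app runs on-device through Flutter rather than in a Python loop.

```python
# Minimal sketch of the recognition loop (file name, input size, labels are assumptions).
import string

import cv2
import numpy as np
import tensorflow as tf

# Hypothetical trained classifier for the ASL alphabet (A-Z).
model = tf.keras.models.load_model("asl_letters.h5")
LABELS = list(string.ascii_uppercase)
IMG_SIZE = 64  # assumed training resolution

cap = cv2.VideoCapture(0)  # camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess the frame to match the model's expected input.
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE)).astype("float32") / 255.0
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    letter = LABELS[int(np.argmax(probs))]
    print(letter, flush=True)
```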

How we built it

We used Python with TensorFlow and Keras on the backend, and Dart with Flutter on the frontend.
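
For a sense of what training a letter classifier on this stack could look like, here is a short Keras sketch. The dataset path, image size, and network architecture are assumptions, not the exact setup we used.

```python
# Rough training sketch (dataset path and architecture are assumptions).
import tensorflow as tf

IMG_SIZE = 64

# Hypothetical folder of labeled ASL alphabet images, one subfolder per letter.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/asl_alphabet",
    image_size=(IMG_SIZE, IMG_SIZE),
    batch_size=32,
)
num_classes = len(train_ds.class_names)

# Small CNN: two conv blocks followed by a dense classifier head.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(IMG_SIZE, IMG_SIZE, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
model.save("asl_letters.h5")
```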

Challenges we ran into

None of us had much prior experience building models with TensorFlow from scratch, so figuring out how to implement it properly was a real challenge.

Accomplishments that we're proud of

Training our own models from scratch!

What we learned

We learned a ton about ASL and how important it is for signing communities to be understood. On the software side, understanding ML at a low level and writing an app in Dart were new skills we gained from this project.

What's next for Eyesign

Adding more signs! Currently Eyesign can recognize someone signing the alphabet, but nothing beyond that yet.
