Inspiration

As translating apps get more and more accurate, covering a wide variety of languages, we realized that sign language has always been neglected by those services. We believe that sign language exists to help a community that struggles with traditional communication connect with others. However, a major challenge remains - most people don't understand or use ASL, which defeats the whole purpose. We believe that ASL should be easy to learn and widely accessible. As a result, we decided to take action using our expertise. Introducing ASL-Bridge - where we "bridge the gap between sign language and spoken communication."

What it does

ASL-Bridge translates sentences from American Sign Language (ASL) into spoken English using computer vision and machine learning.

How we built it

ASL-Bridge was built in Python, using the OpenCV library for computer vision. We built a machine learning model to detect various ASL words. Other libraries, APIs, and AI models were used to highlight the arms, hands, and head, generate voices, and decode sign language grammar into spoken English.
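None of our actual model code is shown here, but as a minimal sketch of the word-detection idea, assuming hand landmarks have already been extracted per frame by a hand-tracking library (the function names, template vocabulary, and nearest-neighbour approach below are illustrative assumptions, not our real pipeline), a pose classifier could look like:

```python
import numpy as np

def normalize(landmarks: np.ndarray) -> np.ndarray:
    """Make a hand pose position- and size-invariant:
    translate to the wrist (point 0) and scale by the farthest point."""
    pts = landmarks - landmarks[0]
    scale = np.linalg.norm(pts, axis=1).max() or 1.0
    return pts / scale

def classify_sign(landmarks: np.ndarray, templates: dict) -> str:
    """Return the template word whose normalized pose is closest
    (Euclidean distance) to the observed landmarks."""
    pose = normalize(landmarks)
    return min(templates,
               key=lambda word: np.linalg.norm(normalize(templates[word]) - pose))
```

In a live loop, this would run on every camera frame, and taking the most frequent word over a short sliding window would smooth out per-frame noise before the grammar-decoding and voice-generation steps.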

Challenges we ran into

It was our first time building such a machine learning model, and making it accurate was really challenging.

Accomplishments that we're proud of

One of our accomplishments was learning a bit of a new language while working on this project; it let us explore a language we were not familiar with. Another was finishing a project using a technology we had never used before.

What we learned

During this project, we learned a lot about machine learning. It is a really important topic nowadays, and we are happy that this event gave us the time to dive into it. We also got to learn a few words in ASL!

What's next for ASL-Bridge

ASL-Bridge is currently a prototype; the goal is to provide a real-time translator, like a Google Translate for sign language.
