Inspiration

Our inspiration came from discovering the lack of teaching resources for hearing-impaired students, and we hope our project can help get a foot in the door of developing more ML-related educational tools for them.

What it does

Our program uses computer vision and machine learning to translate American Sign Language (ASL) into text on a computer. The goal is to bridge the gap between people who use sign language and those who cannot understand it.

How we built it

Our program is written in Python, using OpenCV for computer vision and TensorFlow for machine learning. Hand detection is based on the MediaPipe hand-tracking library.
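As a sketch of how such a pipeline can fit together: MediaPipe Hands reports 21 normalized (x, y) landmarks per detected hand, and a common preprocessing step before feeding a classifier is to make those landmarks translation- and scale-invariant. The helper below is a hypothetical illustration of that step (the function name and feature layout are our own assumptions, not the project's actual code):

```python
def landmarks_to_features(landmarks):
    """Convert 21 (x, y) hand landmarks (normalized 0-1 image
    coordinates, as produced by MediaPipe Hands) into a translation-
    and scale-invariant feature vector for a classifier.

    `landmarks` is a list of 21 (x, y) tuples; index 0 is the wrist.
    Returns a flat list of 42 floats in [-1, 1].
    """
    wrist_x, wrist_y = landmarks[0]
    # Translate so the wrist sits at the origin.
    rel = [(x - wrist_x, y - wrist_y) for x, y in landmarks]
    # Scale by the largest absolute coordinate so values lie in [-1, 1]
    # (the `or 1.0` guards against division by zero when all points coincide).
    scale = max(max(abs(x), abs(y)) for x, y in rel) or 1.0
    return [coord / scale for point in rel for coord in point]
```

Normalizing like this lets the classifier focus on hand shape rather than where the hand is in the frame or how far it is from the camera.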

Challenges we ran into

Although our model and hand recognition each worked well in isolation, integrating the two made it difficult to distinguish between some letters. This was because our training dataset consisted of images with a plain white background, while the live camera feed contained a lot of noise. Additionally, our training dataset did not contain enough examples of the letter P, so we had to curate more data for that letter.

Accomplishments that we're proud of

We are most proud of our model's hand detection: it can accurately detect a hand more than ten feet away and draw a bounding box around it.
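Since MediaPipe reports landmarks in normalized image coordinates, a bounding box like this can be recovered by taking the min/max over the landmarks and converting to pixels. The helper below is an illustrative reconstruction under that assumption, not the project's actual code:

```python
def hand_bounding_box(landmarks, frame_width, frame_height, pad=20):
    """Compute a padded pixel bounding box (x1, y1, x2, y2) around a hand.

    `landmarks` is a list of (x, y) pairs in normalized [0, 1] image
    coordinates (the format MediaPipe Hands produces). The box is padded
    by `pad` pixels and clamped to the frame bounds, so the corners can
    be passed straight to a drawing call such as cv2.rectangle.
    """
    xs = [x * frame_width for x, _ in landmarks]
    ys = [y * frame_height for _, y in landmarks]
    x1 = max(int(min(xs)) - pad, 0)
    y1 = max(int(min(ys)) - pad, 0)
    x2 = min(int(max(xs)) + pad, frame_width - 1)
    y2 = min(int(max(ys)) + pad, frame_height - 1)
    return x1, y1, x2, y2
```

Because the coordinates are normalized, the same box logic works regardless of camera resolution or how far away the hand is.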

What we learned

We expanded our knowledge of computer vision and TensorFlow, learning more about object tracking, neural networks, and machine learning.

What's next for ASLi

We hope to apply ASLi in numerous settings in the future, such as mobile applications, accessibility accommodations for standardized exams, and live translation for sign language users.

Built With

Python, OpenCV, TensorFlow, MediaPipe
