Inspiration

We wanted to find something that would be challenging, useful, and fun! We're both bilingual, so we figured we should do something with language. After a lot of brainstorming, we came up with this.

What it does

It trains an AI model from scratch on a dataset of ASL signs, assigning 21 hand landmarks to each image. Those landmarks are then compared against landmarks extracted from the laptop camera feed, and the model guesses which sign the user is presenting.
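The comparison step can be sketched as a nearest-neighbor match over flattened landmark vectors. This is a minimal illustration with synthetic data, not our actual model; the function name, the (x, y)-only layout, and the reference dictionary are all hypothetical.

```python
import numpy as np

def classify_sign(live_landmarks, reference_signs):
    """Return the label of the reference sign whose 21 landmarks are
    closest (Euclidean distance) to the live camera landmarks."""
    live = np.asarray(live_landmarks).ravel()  # flatten 21 x (x, y) -> 42 values
    best_label, best_dist = None, float("inf")
    for label, ref in reference_signs.items():
        dist = np.linalg.norm(live - np.asarray(ref).ravel())
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Synthetic example: two "signs" with made-up landmark coordinates.
refs = {
    "A": np.zeros((21, 2)),
    "B": np.ones((21, 2)),
}
print(classify_sign(np.full((21, 2), 0.1), refs))  # closer to "A"
```

In practice a trained classifier generalizes better than raw distance, but the idea of reducing each hand image to a fixed-length landmark vector is the same.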

How we built it

We decided on Python, since it has the most robust libraries and would let us focus on implementation rather than wrestling with syntax.

Challenges we ran into

We really struggled with both implementing the learning model and extracting the dataset. Simply finding a good dataset took a long time, downloading it and extracting the landmarks took even longer, and wrestling with Git and creating a CSV in a format my teammate could use took even longer than that! Implementing the model was a massive struggle: we had already been coding for 4 hours straight and spent the next 5 hours coding the model. Neither of us has any experience building AI, so it was a learning journey!
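The CSV hand-off between us can be sketched like this: one row per image, a label column followed by x/y coordinates for all 21 landmarks. The column names and exact layout here are illustrative, not our real format.

```python
import csv
import io

def landmarks_to_row(label, landmarks):
    """Flatten a list of 21 (x, y) landmark tuples into one CSV row."""
    row = [label]
    for x, y in landmarks:
        row.extend([x, y])
    return row

# Header: "label", then x0, y0, x1, y1, ..., x20, y20 (43 columns total).
header = ["label"] + [f"{axis}{i}" for i in range(21) for axis in ("x", "y")]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(header)
writer.writerow(landmarks_to_row("A", [(0.1, 0.2)] * 21))
```

Agreeing on a flat, fixed-width layout like this early would have saved us a lot of the back-and-forth.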

Accomplishments that we're proud of

We got a working model! It isn't exceptional, but we assembled every piece of the project we wanted.

What we learned

We learned tons of Python tools, such as TensorFlow/Keras, MediaPipe, Kaggle, OpenCV, NumPy, and Pandas. We also learned how to build, train, and test AI models.

What's next for LANGbot

  • Implementing an AI to detect when the user has finished a word, then using text-to-speech to say each completed word out loud
  • Handling dynamic signs for letters such as J and Z that require motion
  • Two hand support
  • Mobile or web-based deployment for broader accessibility
  • Expanded dataset for higher accuracy and robustness
  • Improved detection
