Inspiration

We wanted to use artificial intelligence to make communication accessible between people who use American Sign Language (ASL) and those who may not know the language.

What it does

Using machine learning and artificial intelligence, our simple webcam-based system interprets sign language and translates it into text.

How we built it

We built our project by utilizing a Raspberry Pi, a USB webcam, and various artificial intelligence and machine learning libraries in Python such as (but not limited to) Google's TensorFlow.
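The pipeline described above can be sketched in miniature. MediaPipe's hand tracker reports 21 landmarks per hand, each with (x, y, z) coordinates, which flatten into a 63-value feature vector that a classifier maps to a sign. The sketch below is an assumption about how such a pipeline fits together, not the team's actual code: a nearest-centroid classifier stands in for the trained Keras model, and the landmark values are illustrative.

```python
import math

NUM_LANDMARKS = 21  # MediaPipe Hands reports 21 landmarks per hand


def flatten(landmarks):
    """Turn [(x, y, z), ...] landmark tuples into one flat feature vector."""
    return [coord for point in landmarks for coord in point]


def classify(features, centroids):
    """Return the sign label whose centroid is nearest to the features.

    Stand-in for the trained Keras model; a real system would call
    model.predict() on the feature vector instead.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    return min(centroids, key=lambda label: dist(features, centroids[label]))


# Toy centroids for two signs (illustrative values only).
centroids = {
    "A": [0.1] * (NUM_LANDMARKS * 3),
    "B": [0.9] * (NUM_LANDMARKS * 3),
}

# One frame's worth of hypothetical landmark coordinates.
frame_landmarks = [(0.12, 0.11, 0.09)] * NUM_LANDMARKS
print(classify(flatten(frame_landmarks), centroids))  # → A
```

In the full system, each webcam frame would pass through MediaPipe to produce the landmark list, and the predicted label would be appended to the on-screen text output.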

Challenges we ran into

We initially wished to implement this project on the Raspberry Pi using QNX OS. However, QNX in its current state lacks support for most USB webcams, so we switched to Raspbian OS instead.

Accomplishments that we're proud of

We are proud of the progress we made regarding all the machine learning and artificial intelligence training and the front-end design.

What we learned

We learned that there are many libraries in Python that make machine learning tasks much more beginner friendly.

What's next for Fluentify

The next step for Fluentify would be to continue training our model to improve recognition accuracy.

Built With

  • keras
  • mediapipe
  • python
  • tensorflow