Inspiration

The manga A Silent Voice made me realize the extreme problems that a lack of communication can cause.

What it does

Currently, it translates sign language hand signs into text characters (letters).

How I built it

Using Python, NumPy, TensorFlow, SciPy, and OpenCV.
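As a rough sketch of how the pieces fit together, assuming a fixed region of interest for the hand, a saved Keras model file, and one label per letter (all of these are placeholders, not my exact setup):

```python
# Hedged sketch of the capture-and-classify loop; file name, ROI box, and labels are placeholders.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("sign_model.h5")          # hypothetical model file
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]      # assumed one label per letter

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:300, 100:300]                             # placeholder hand region
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (28, 28))                        # match the 28x28 training images
    x = small.astype("float32")[None, :, :, None] / 255.0     # shape (1, 28, 28, 1)
    probs = model.predict(x, verbose=0)[0]
    letter = LABELS[int(np.argmax(probs))]
    cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
    cv2.putText(frame, letter, (100, 90), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
    cv2.imshow("Sign language detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The key detail is resizing the cropped hand region down to 28x28 so it matches the resolution of the training images.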

Challenges I ran into

I had never worked with machine learning before, so building my own model was difficult. I initially built a k-NN model that gave fairly good accuracy of roughly 70-80% (I could never tell exactly, because running all 2000+ test examples from the data set took too long, so I randomly selected 50 tests and ran the algorithm a couple of times). Unfortunately, k-NN is very costly at prediction time, and when I connected it to the OpenCV program it caused severe lag, so I had to switch to a learned model, a convolutional neural network (CNN).

Using TensorFlow and a lot of help from other resources, I built a CNN with 89% accuracy on test sets drawn from the same data set. Even at 89%, the program still had trouble selecting the correct sign, largely because the training images were of very poor resolution: the data set I found was only 28px x 28px, which hurt the program's accuracy on live input.
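For reference, a minimal CNN along these lines can be put together with the Keras API in TensorFlow; the layer sizes, dropout rate, and class count below are illustrative assumptions, not my exact architecture:

```python
# Minimal sketch of a CNN for 28x28 grayscale sign images (layer sizes are illustrative).
import tensorflow as tf

NUM_CLASSES = 26  # assumed one class per letter; some fingerspelling data sets use fewer

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train: (N, 28, 28, 1) floats in [0, 1]; y_train: integer class labels.
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)
```

Unlike k-NN, once the network is trained each prediction costs a single fixed forward pass, which is why it can keep up with the live OpenCV feed.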

Accomplishments that I'm proud of

I was able to build my first machine learning model, though I am not yet very familiar with its underlying details.

What I learned

I learned that k-NN is very costly at prediction time, and that it is important to select a data set with good image resolution.
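To make the cost point concrete: a brute-force k-NN has to compute a distance from the query image to every stored training example on every single prediction, which is what made the live video loop lag. A minimal NumPy sketch of a 1-nearest-neighbor lookup (array names are made up for illustration):

```python
# Brute-force 1-NN lookup: each query compares against ALL stored training vectors.
import numpy as np

def predict_nn(query, train_x, train_y):
    """query: (784,) flattened image; train_x: (N, 784); train_y: (N,) labels."""
    dists = np.linalg.norm(train_x - query, axis=1)  # N distance computations per prediction
    return train_y[np.argmin(dists)]
```

With 2000+ stored images of 784 pixels each, that is a lot of arithmetic to repeat on every frame.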

What's next for Sign language detection

In the future I want to combine the recognized characters into words, which will then form sentences, and have the program 'say' those sentences aloud. I also want to improve the model's accuracy by building it from a better, higher-resolution dataset.
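As a rough sketch of that next step, predicted letters could be buffered into words and handed to a text-to-speech engine; the pyttsx3 library below is only an illustrative choice, since I have not settled on one:

```python
# Sketch: join predicted letters into a sentence, then speak it (pyttsx3 is an illustrative choice).
import pyttsx3

def speak_sentence(letters):
    """letters: list of predicted characters, with ' ' marking word breaks."""
    sentence = "".join(letters).strip()
    engine = pyttsx3.init()
    engine.say(sentence)
    engine.runAndWait()

# speak_sentence(list("HELLO WORLD"))
```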

Built With

python, numpy, tensorflow, scipy, opencv
