Inspiration

Some of our friends volunteered at an elderly care centre, but they were unable to communicate with some of the elderly residents who are hard of hearing. Hence, we wanted to develop an app that could quickly recognize sign language gestures.

What it does

You take a picture of your hand with the built-in webcam app. Our AI then runs on the image and identifies the sign, with a varying degree of accuracy.
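For context, a minimal sketch of the capture-and-classify flow might look like the code below. The model filename ("asl_model.h5") and the label list are hypothetical stand-ins, not our exact artifacts.

```python
# A minimal sketch of capturing a webcam frame and classifying it,
# assuming a saved Keras model and an A-Z label list (both hypothetical).
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("asl_model.h5")  # hypothetical saved model file
class_names = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed labels

cap = cv2.VideoCapture(0)  # built-in webcam
ok, frame = cap.read()
cap.release()

if ok:
    # Inception-style models expect fixed-size RGB input scaled to [0, 1].
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (299, 299)).astype("float32") / 255.0
    probs = model.predict(img[np.newaxis, ...])[0]
    print(f"Predicted sign: {class_names[np.argmax(probs)]} "
          f"(confidence {probs.max():.2f})")
```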

How we built it

We built our own machine learning model on top of Google Inception, an image classifier that assigns objects in an image to many classes (categories). By using a pre-existing image classifier (Google Inception) as the base of our model, we achieved a higher level of accuracy than our classifier alone could reach. Since every machine learning model requires training data, we used the American Sign Language MNIST dataset and the ASL Fingerspelling dataset. Our model is a convolutional neural network (CNN), an architecture built specifically for image-based tasks, which we modified to suit the needs of our AI.
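For illustration, a minimal sketch of this kind of setup in Keras might look like the following. The head layers and the 24-class output (static ASL letters, excluding the motion-based J and Z) are our assumptions for the sketch, not the exact architecture we shipped.

```python
# A sketch of transfer learning with InceptionV3 as a frozen base
# and a small custom classification head stacked on top.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))
base.trainable = False  # keep the pre-trained image features intact

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(24, activation="softmax"),  # one unit per static ASL letter
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```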

Challenges we ran into

At first the task seemed impossible: we went through 8 different AIs in the first couple of hours, all of which failed miserably. After about 6 hours we finally found a way to create the model, but its accuracy was quite low, which became the next problem. After researching for another hour or so, we found a method used by industry professionals called transfer learning. A transfer learning model uses a pre-existing AI as its base, and you then build your own model on top of it. This works because the base AI first determines the location and shape of the hand and tags it; our model then uses that data to determine the gesture, and hence the sign language symbol. This was quite a complex task, but we finally managed it, and it tremendously improved our accuracy. Another problem was that many ASL symbols look very similar, which made it quite hard for our AI to differentiate between them.
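As a rough illustration of the frozen-base training step, the code below reuses the `model` from the sketch above; the dataset path and hyperparameters are hypothetical.

```python
# A sketch of training only the custom head while the Inception base stays
# frozen, assuming a hypothetical "data/train" directory with one sub-folder
# of images per letter.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",            # hypothetical path
    image_size=(299, 299),   # match the Inception input size
    batch_size=32,
)
# Scale pixels to [0, 1] to match the inference-time preprocessing.
train_ds = train_ds.map(lambda x, y: (x / 255.0, y))

# Only the head's weights update here; the frozen pre-trained base is
# what made the accuracy jump compared to training from scratch.
model.fit(train_ds, epochs=10)
```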

Accomplishments that we're proud of

We were able to solve most of the problems with our AI and accomplished something that had seemed really complicated in our minds.

What we learned

We learnt more about TensorFlow and Keras, and we learnt to never give up.

What's next for Inauritus

Currently, because so many ASL symbols are so similar, the AI has a few recognition problems. In the future, given more time, we plan to train it on a larger dataset for longer.

Side Note

All files are on GitHub; they were too big to upload to Devpost.
