Inspiration

I know a few people who are deaf and hard of hearing, and I have seen how hard it is for them to communicate with people who don't know ASL. It would truly benefit them to have an app that translates between their signs and live text, but unfortunately, no such app is widely available.

What it does

The app uses a Jupyter notebook and OpenCV to display a webcam feed that tracks the user's signs and shows the English translation at the bottom of the screen, based on the pre-processed images the model was trained on. This way, the user can communicate their message easily using just their device. Currently, the app supports the 100 most common English words.
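
A minimal sketch of that display loop, assuming a hypothetical predict_sign helper that wraps the trained TensorFlow model:

```python
import cv2

# Hypothetical helper wrapping the trained TensorFlow model; it takes a
# frame and returns the predicted English word (not part of this write-up).
from model import predict_sign

cap = cv2.VideoCapture(0)  # open the default webcam
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    word = predict_sign(frame)  # e.g. "hello"

    # Draw the translated word at the bottom of the frame.
    cv2.putText(frame, word, (10, frame.shape[0] - 20),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
    cv2.imshow('SignTalk', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```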

How I built it

I started by writing an image-collector script in a Jupyter notebook that opened a camera, took pictures of the signs I was doing, and saved them into designated folders on my computer. Then, I ran the images through a Python package called LabelImg (link) to indicate which part of each image the program should look at. Finally, I wrote a Jupyter notebook script that displayed the camera feed and used the labeled images to determine, with TensorFlow, whether the user is showing one of the hand signs. A simplified sketch of the first step, the image collector, follows below.
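
The label list and folder layout here are illustrative, not my exact setup:

```python
import os
import time
import uuid
import cv2

LABELS = ['hello', 'thanks', 'yes', 'no']  # sample of the collected words
IMAGES_PER_LABEL = 15
SAVE_DIR = os.path.join('data', 'images')

cap = cv2.VideoCapture(0)
for label in LABELS:
    os.makedirs(os.path.join(SAVE_DIR, label), exist_ok=True)
    print(f'Collecting images for {label}')
    time.sleep(3)  # time to get the sign ready
    for _ in range(IMAGES_PER_LABEL):
        ret, frame = cap.read()
        if not ret:
            continue
        # Save each frame into the label's designated folder.
        path = os.path.join(SAVE_DIR, label, f'{uuid.uuid4()}.jpg')
        cv2.imwrite(path, frame)
        cv2.imshow('collector', frame)
        time.sleep(1)  # pause between captures
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()
```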

Challenges I ran into

At first, I ran into many challenges installing and running the required programs and dependencies. Once I had everything set up, I faced a steep learning curve, as I didn't know where to start with image recognition. I found a boilerplate for object recognition that helped me get started with Jupyter notebooks, and a quick guide on CNNs that explained how they work and pointed me toward TensorFlow. Once I got going, I hit quite a few errors, including getting the pictures from the OpenCV frame to save correctly and getting the application to be accurate.
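
On the saving issue in particular: cv2.imwrite reports failure by returning False rather than raising, for example when the target folder doesn't exist, so it can fail silently. A small guard like this (save_frame is a hypothetical helper, not from my actual notebook) avoids that:

```python
import os
import cv2

def save_frame(frame, path):
    # cv2.imwrite returns False instead of raising when, e.g., the
    # target directory is missing, so create it first and check the result.
    os.makedirs(os.path.dirname(path), exist_ok=True)
    if not cv2.imwrite(path, frame):
        raise IOError(f'Failed to write {path}')
```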

Accomplishments that I'm proud of

I'm really proud of myself for seeing this project through to completion, especially considering it was the first time I had made a project with machine learning. There were many points along the way where I thought I should tone down my project or start over with an easier one, but I persevered and was able to successfully meet my goals.

What I learned

This project was my first venture into image recognition and the first time I had worked with machine learning beyond MNIST digits. I discovered the many potential uses image recognition has to transform our world, from communication to healthcare and transportation. More specifically, this project introduced me to convolutional neural networks (CNNs), useful Python libraries such as OpenCV and LabelImg, dealing with large amounts of data, and building models with TensorFlow.
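
For flavor, here is a minimal CNN classifier in TensorFlow/Keras of the kind those guides introduced; the layer sizes and input shape are illustrative, not my final architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 100  # the 100 most common English words

# Stacked conv/pool blocks learn visual features from the frames;
# dense layers at the end map those features to word labels.
model = models.Sequential([
    layers.Conv2D(16, 3, activation='relu', input_shape=(224, 224, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(NUM_CLASSES, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```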

What's next for SignTalk

Due to time constraints, I wasn't able to deploy the app through Xcode and run it on a phone, so I hope to do that in the future to make the app more accessible. Additionally, I hope to add closed captioning so that the person the user is communicating with can better follow their message. Finally, I want to expand the number of words the program can understand, perhaps using a web-scraping algorithm to collect training images.

Note

Because the project currently runs on localhost on my computer, I am unable to share a link to a demo of the project.

Built With

  • anaconda
  • jupyter
  • labelimg
  • opencv
  • tensorflow