Inspiration

98% of deaf people do not receive an education in sign language, and 72% of families do not sign with their deaf children, which is heartbreaking. These facts, among many others, inspired us to create a better way for non-verbal communicators to express themselves in personal and professional contexts.

What it does

iSign captures gestures/signs from the webcam feed and labels them in real time. It then shows the sign-to-text translation above a bounding box drawn around the hand.
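
To make the flow concrete, below is a minimal sketch of such a real-time loop, assuming a Python stack with OpenCV for the webcam and a Keras classifier. The model file, label list, and fixed hand region are placeholders for illustration, not our actual implementation.

```python
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("isign_model.h5")  # hypothetical model file
LABELS = ["hello", "thanks", "yes", "no"]             # hypothetical sign labels

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Placeholder hand region; the real app would localize the hand first.
    x, y, w, h = 100, 100, 224, 224
    crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
    crop = cv2.resize(crop, (224, 224))
    probs = model.predict(crop[np.newaxis] / 255.0, verbose=0)[0]
    label = LABELS[int(np.argmax(probs))]

    # Draw the bounding box and put the translated text above it.
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, label, (x, y - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("iSign", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```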

How we built it

  • Sample data creation
    • We took 15 images for each sign, using 13 for training and 2 for testing
  • Data preparation
    • We then labeled the data and cropped each image down to the hand's bounding box
  • Transfer learning
    • We fine-tuned a pre-trained model to recognize our signs (see the sketch after this list)
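
As a rough illustration of the transfer-learning step, here is a minimal sketch that fine-tunes a pre-trained MobileNetV2 in Keras on folders of cropped sign images. The base model, directory layout, and hyperparameters are all assumptions for illustration; our actual setup may differ.

```python
import tensorflow as tf

NUM_SIGNS = 10  # hypothetical number of signs

# Load ImageNet weights and drop the original classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained features

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),  # new sign head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical layout: 13 cropped images per sign under data/train/<sign>/,
# and the remaining 2 per sign under data/test/<sign>/.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=8)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "data/test", image_size=(224, 224), batch_size=8)

model.fit(train_ds, validation_data=test_ds, epochs=20)
```

Freezing the pre-trained base is what lets a tiny dataset (13 training images per sign) work at all: only the small new classification head has to be learned from scratch.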

Challenges we ran into

This was our first time working with ML/DL techniques, all of which were new to us. We also had trouble creating the dataset needed to train the model accurately, and the whole process was time-consuming and rather slow.

Accomplishments that we're proud of

We were able to reach a good confidence level with the model and also managed to deploy the React app. Building the project took quite some time, but in the end we achieved the results and accuracy we wanted.

What's next for iSign

  • Adding more signs, and maybe even a full dictionary
  • Adding text-to-speech so that the generated text can be spoken aloud (a sketch follows this list)
  • Building a more accessible UI
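
For the text-to-speech idea, here is a minimal sketch using pyttsx3 as one possible offline engine; the library choice is an assumption, since this feature is not built yet.

```python
import pyttsx3

def speak(translated_text: str) -> None:
    """Read the sign-to-text translation aloud."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)  # a bit slower than the default voice
    engine.say(translated_text)
    engine.runAndWait()

speak("hello")  # e.g. the label currently shown above the bounding box
```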

Slideshow

link

Domain.com

isign.space
