Inspiration

We were all interested in image classification and building a convolutional neural network, and decided that the ASL dataset offered both a good learning opportunity and a meaningful community challenge.

What it does

  • This model classifies individual images into letters of the American Sign Language alphabet. The training data is a large set of labeled sign language images.

How we built it

  • We built a convolutional neural network in Python with PyTorch, developed in a Jupyter notebook.
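For concreteness, here is a minimal sketch of the kind of CNN we trained. The layer sizes, 28x28 grayscale input, and 26-class output are illustrative assumptions, not the final architecture.

```python
import torch
import torch.nn as nn

class ASLClassifier(nn.Module):
    """Small CNN sketch: two conv blocks, then a linear classifier.
    Assumes 1-channel 28x28 inputs and 26 ASL letter classes."""

    def __init__(self, num_classes=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))  # (batch, num_classes) logits

model = ASLClassifier()
logits = model(torch.randn(8, 1, 28, 28))  # a batch of 8 dummy images
```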

Challenges we ran into

  • Selecting one project out of all the various topics in data science.
  • Learning new things on a tight timeline.
  • Successfully implementing a loss function.
  • Translating the image data into a properly formatted input for the training model.
  • Debugging the code.
  • Understanding how to use PyTorch's CNN modules.
  • Handling overfitting and underfitting.
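Two of the hurdles above can be sketched briefly: reshaping raw pixel data into the (batch, channel, height, width) layout PyTorch expects, and wiring up a classification loss. The 28x28 grayscale shape and 26 classes are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Formatting: flattened pixel rows -> the 4-D tensor a CNN expects.
raw = torch.rand(8, 784)                  # 8 flattened 28x28 images
images = raw.view(-1, 1, 28, 28)          # -> (batch, channel, H, W)
labels = torch.randint(0, 26, (8,))       # one letter index per image

# Loss: CrossEntropyLoss takes raw logits (no softmax) and class indices.
criterion = nn.CrossEntropyLoss()
logits = torch.randn(8, 26, requires_grad=True)
loss = criterion(logits, labels)
loss.backward()                           # gradients flow back for training
```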

Accomplishments that we're proud of

  • It works.
  • _mostly_

What We Learned

  • We had little practical knowledge coming in. We learned almost everything we used today, from data cleaning to overfitting and ReLU.

What's next for Sign Language Image Classification (SLIC)

  • Complete validation and testing.
  • We had neither the time nor the resources to extend SLIC to video input; this is a future goal.
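The validation step still on our list boils down to measuring accuracy on a held-out set. A minimal sketch, with stand-in tensors in place of real data:

```python
import torch

# Toy logits and labels standing in for a held-out validation batch.
logits = torch.tensor([[2.0, 0.1], [0.2, 1.5], [3.0, 0.5]])
labels = torch.tensor([0, 1, 0])

preds = logits.argmax(dim=1)                        # predicted class per image
accuracy = (preds == labels).float().mean().item()  # fraction correct
```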

Built With

  • python
  • pytorch
  • jupyter-notebook