We know that it gets difficult to communicate with students who are hard of hearing, especially if you don't know ASL. Likewise, if someone who has communicated in ASL all their life wants to teach, all of their students would have to know ASL.

We want to get rid of this barrier and let everyone communicate.

What it does

Takes in a video of a person signing in ASL, passes the frames to a fine-tuned VGG16 model to generate features, and then passes those features through an SVM to generate the captions.
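The feature-extraction-plus-SVM step can be sketched roughly as below. This is a hypothetical illustration, not our exact code: `weights=None` keeps it self-contained (the real pipeline would load the fine-tuned weights), and the random frames and two-letter labels stand in for real labelled ASL video frames.

```python
# Sketch: VGG16 (headless) extracts per-frame features, an SVM maps each
# feature vector to a letter, and the letters are joined into a caption.
import numpy as np
from tensorflow.keras.applications import VGG16
from sklearn.svm import SVC

# Feature extractor: VGG16 without its classification head; global average
# pooling turns each frame into a 512-dimensional feature vector.
extractor = VGG16(weights=None, include_top=False, pooling="avg",
                  input_shape=(224, 224, 3))

def extract_features(frames):
    """frames: (n, 224, 224, 3) float array -> (n, 512) feature matrix."""
    return extractor.predict(frames, verbose=0)

# Toy data standing in for labelled ASL alphabet frames.
rng = np.random.default_rng(0)
frames = rng.random((4, 224, 224, 3), dtype=np.float32)
labels = ["A", "B", "A", "B"]

features = extract_features(frames)        # shape (4, 512)
clf = SVC(kernel="linear").fit(features, labels)
caption = "".join(clf.predict(frames_to_letters := features))  # one letter per frame
print(features.shape, caption)
```

In the real system the SVM would be trained on features from the fine-tuned model, so the extracted features actually separate the 26 letter classes.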

How we built it

Trained a deep neural network on the ASL alphabet. Created a web server that passes the video to the deep neural network and then returns the captions.
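A minimal sketch of what fine-tuning VGG16 on the ASL alphabet could look like, assuming a 26-class letter head; `weights=None` and the random toy batch are placeholders so the sketch runs anywhere (a real run would start from ImageNet weights and labelled ASL images).

```python
# Sketch: freeze VGG16's convolutional base and train a new dense head
# that classifies hand-gesture frames into the 26 ASL letters.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights=None, include_top=False, pooling="avg",
             input_shape=(224, 224, 3))
base.trainable = False  # keep pretrained convolutional features fixed

model = models.Sequential([
    base,
    layers.Dense(128, activation="relu"),
    layers.Dense(26, activation="softmax"),  # one class per ASL letter
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy batch standing in for labelled ASL alphabet images.
rng = np.random.default_rng(0)
x = rng.random((8, 224, 224, 3), dtype=np.float32)
y = rng.integers(0, 26, size=8)
history = model.fit(x, y, epochs=1, verbose=0)
```

Freezing the base means only the small head is trained, which is the usual way to fine-tune on a small hackathon-sized dataset.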

Challenges we ran into

Connecting the database to Google Cloud services. Installing OpenCV on the server. Picking up TensorFlow, as no one on the team knew how to code in TensorFlow.

Accomplishments that we're proud of

Successfully installed OpenCV on the server. Created a Django web server which takes a request from the client (in our case, a photo/video) and outputs the features of the hand gesture in the image/video. The features are obtained by passing the image into a fine-tuned VGG16 model.

What we learned

Team spirit. Some ASL alphabets. TensorFlow.

What's next for ASL for all

Build the complete system.
