Inspiration
Our community inspired us to build this project. Although relatively few people communicate in sign language, being able to talk to others without pen and paper is very beneficial for sign language users. Everyone wants to connect with others, and the best way to do that is through communication.
What it does
Our website can take a picture of a hand sign and convert it into an English letter. Users can upload a file from their computer or take a photo directly with the website on their phone, and the corresponding letter is shown on the screen.
How we built it
First, we imported the libraries our CNN model needs, including scikit-learn and Keras. Next, we wrote a function that loads the dataset and labels each image with a class from 0 to 28, covering the 26 letters of the alphabet plus the "del", "nothing", and "space" signs (29 classes in total). We then split our data into training and testing sets at an 8:2 ratio and created our CNN model using Sequential(). To make training quicker, we reshaped the images to a smaller size. Our model contains 11 convolutional layers and a final 29-unit output layer with softmax activation. After compiling and training the model, we calculated its accuracy score to get a feel for how well it performs. To use the model on our website, we linked it to our HTML; since retraining the model every time is too time-consuming, we saved a trained model with over 93% accuracy and load that in the website.
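The pipeline above can be sketched roughly as follows. The 64×64 image size, the random stand-in data, and the layer counts are illustrative (the real model used 11 convolutional layers and the actual ASL dataset):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Hypothetical stand-in for the real ASL dataset: 64x64 grayscale images
# with labels 0-28 (26 letters plus the "del", "nothing", and "space" signs).
X = np.random.rand(100, 64, 64, 1).astype("float32")
y = np.random.randint(0, 29, size=100)

# 8:2 train/test split, as described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# A shortened sketch of the CNN (the actual model had 11 conv layers).
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(29, activation="softmax"),  # one output unit per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))
```

The commented-out `fit` call is where training and the accuracy calculation would happen on the real dataset.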
Challenges we ran into
We struggled with the AI part of the project. Labeling the dataset correctly was challenging, and we had difficulties reshaping our images in the CNN model because of changes introduced in a newer TensorFlow release. We also ran into several problems installing libraries. Finally, training took far too long, so we decided to train the model once, save it, and load it whenever we need it.
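The train-once, load-later workaround can be sketched like this. A tiny dense model and the filename `asl_model.h5` stand in for the real trained CNN:

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import load_model

# Tiny stand-in model (the real one is the trained 11-layer CNN).
model = Sequential([Dense(29, activation="softmax", input_shape=(64,))])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Save once after training ("asl_model.h5" is a hypothetical filename)...
model.save("asl_model.h5")

# ...then the website loads the saved model instead of retraining.
loaded = load_model("asl_model.h5")

# The restored model produces the same predictions as the original.
x = np.random.rand(1, 64).astype("float32")
assert np.allclose(model.predict(x), loaded.predict(x))
```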
Accomplishments that we're proud of
We are proud of our design and our model. For the design, we made several prototypes and sketches, tried many different styles and color schemes, and picked the best one for the final version of the website. For the model, we set it up to run before the website opens, so the site itself loads very quickly; this also gave us a better shot at high accuracy. We spent a lot of time making the model as good as we could, and we're proud of how it turned out.
What we learned
During the workshops, we learned snippets of different branches of programming, from website building to data management, and how they apply to different kinds of situations. We also learned a lot while creating the project itself. We made plenty of mistakes, such as re-computing the AI every time our website loaded, or accidentally covering the button with text and wondering why we couldn't click it. From those mistakes we persevered and learned more specific technical skills, such as how to display the letter we got from our code on the website.
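As an illustration of that last lesson, a minimal (hypothetical) Flask route could map the model's predicted class index to the letter shown on the page; the route name and label list are assumptions, not the project's actual code:

```python
from flask import Flask, jsonify

# The 29 class labels: A-Z plus the "del", "nothing", and "space" signs.
LABELS = [chr(ord("A") + i) for i in range(26)] + ["del", "nothing", "space"]

app = Flask(__name__)

@app.route("/predict/<int:class_index>")
def predict(class_index):
    # In the real site, class_index would come from running the CNN
    # on the uploaded image; here we just translate index -> letter.
    return jsonify(letter=LABELS[class_index])
```

The front-end page would then fetch this endpoint and write the returned letter into the HTML.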
What's next for ASL to English Converter
We want our program to have more functionality, such as real-time object detection, which could record and analyze video simultaneously and make translation much easier and faster. We also want the website to be more mobile-friendly, so future users who prefer using it on a phone have a better experience. To go further toward our goal, we want to publish training videos and articles to help people who are interested in learning, making the website educational as well as functional. Lastly, we want the site to be easier to navigate and to include more of an introduction to ASL, as opposed to jumping right into the functions of our website.