Inspiration

Our inspiration for this project came from a discussion one of our group members started about disabilities. As we talked, we focused on people who are mute, and since communication is at the heart of Twilio, we decided to build an application to help them. When we thought through the use case, we realized that the main problem mute people face is real-time communication, especially with people who don't know sign language, so we set out to create an app that addresses that problem.

What it does

Our application's main purpose is to help people who use ASL (American Sign Language). It uses the camera to capture the user's hand signs and translates them into text, typing and reading the result aloud in real time. The screen has an area for the text and a camera view that can be hidden or shown. For convenience, there is a button that clears all the text, an indicator at the top that shows the letter currently being recognized, and the ability to send the text as an email, an SMS, or a phone call.
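The recognition loop above can be sketched as a small state machine: each camera frame yields per-class probabilities, the top letter is shown as "currently recognizing," and a letter is only typed once it has been held steadily for a few frames. This is a minimal illustrative sketch, assuming predictions shaped like Teachable Machine's image-library output (`[{ className, probability }, ...]`); the function names and thresholds here are our own, not the app's actual code.

```javascript
const CONFIDENCE = 0.9;  // ignore frames the model is unsure about (assumed threshold)
const HOLD_FRAMES = 5;   // a letter must persist this many frames before it is typed

function createTranscriber() {
  let text = '';
  let candidate = null;  // letter currently being recognized (shown at the top)
  let streak = 0;

  return {
    // Feed one frame's predictions; returns the letter currently being recognized.
    update(predictions) {
      // Pick the most probable class for this frame.
      const top = predictions.reduce((a, b) => (b.probability > a.probability ? b : a));
      if (top.probability < CONFIDENCE) return candidate;
      if (top.className === candidate) {
        streak += 1;
        if (streak === HOLD_FRAMES) text += top.className; // commit the letter once
      } else {
        candidate = top.className;
        streak = 1;
      }
      return candidate;
    },
    getText() { return text; },
    clear() { text = ''; candidate = null; streak = 0; }, // the "clear text" button
  };
}
```

Committing only at exactly `HOLD_FRAMES` means holding a sign longer does not type the letter repeatedly; the user changes signs to start a new streak.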

How we built it

Our first step was to design the application's layout, which we did in Google Drawings. We then moved on to programming, using React Native for the front end. We used Google's Teachable Machine to train a machine learning model that converts ASL hand signs to letters. Finally, we used Twilio to let the user send their message via email, text, or phone call.
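The Twilio step can be sketched as below, assuming the official `twilio` Node helper library. The credentials and phone numbers are placeholders read from the environment, and `buildSmsPayload` is our own illustrative helper, not the app's actual code.

```javascript
// Build the parameters Twilio's Messages API expects: body, to, from.
function buildSmsPayload(transcript, to, from) {
  return { body: transcript.trim(), to, from };
}

// Send the transcribed text as an SMS (sketch; requires the `twilio` package
// and real credentials in TWILIO_SID / TWILIO_TOKEN / TWILIO_FROM).
async function sendTranscript(transcript, to) {
  const twilio = require('twilio'); // lazy require so the sketch loads without the package
  const client = twilio(process.env.TWILIO_SID, process.env.TWILIO_TOKEN);
  return client.messages.create(buildSmsPayload(transcript, to, process.env.TWILIO_FROM));
}
```

Email and phone-call delivery would follow the same pattern against Twilio's other APIs, with the transcript as the message body.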

Challenges we ran into

We ran into many challenges during this project, the biggest being how much we had to learn in a limited time. For example, our app was built with React Native, which we had never used; we had to learn Google's Teachable Machine to train a machine learning model that recognizes ASL; and we had to learn Twilio, which in turn meant learning Heroku, also new to us. Beyond that, our main problems were the bugs we found as we programmed the application. Overall, this project was a tremendous learning experience for us.

Accomplishments that we're proud of

We are proud of building a working application even though most of the tools we used were new to us. We are also proud that we worked together cooperatively and finished the project on time.

What we learned

From this project, we learned how to use React Native, Google's Teachable Machine, and Twilio, and improved our programming skills overall. We also found that React Native is genuinely useful and that Teachable Machine makes it easy to get started with simple machine learning.

What's next for Aslify

Next, we plan to add a text-to-speech feature that lets the user translate ASL to text first and then have the app speak it aloud, which will help in situations where real-time translation isn't practical. We also want to support sign languages other than ASL, as well as signs for whole words instead of just letters. Another feature we want to add is a practice mode, so the app can be used to learn and train sign languages. Beyond that, we want to improve the accuracy and speed of our app's sign language recognition.
