Inspiration

We wanted to build an application that would help hard-of-hearing people communicate without the need of an interpreter by showing ASL video translations in real time. We felt that this is an important problem: removing the interpreter from the conversation allows people who are hard of hearing to have a more personal connection with the people they are talking to.

What it does

The app is composed of three parts - a dictionary, speech translation, and live video chat. The dictionary takes in any word and shows the user a sign language video of that word; this is useful for deaf people who want to learn new English words, as well as for anyone learning ASL. The speech translation mode takes speech input from the user and converts it into a video of the ASL translation, which is primarily useful for someone who wants to communicate with a hard-of-hearing person. Finally, we implemented a live video chat, which dynamically sends the second user ASL translation videos of what the first person is saying.
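
To make the speech-translation flow concrete, here is a rough client-side sketch. It assumes Chrome's Web Speech API for speech recognition and a hypothetical /sign/:word endpoint (sketched under "How we built it" below) that returns the URL of a scraped ASL video; the names are illustrative, not our exact code.

```javascript
// Rough sketch of the speech-translation flow (assumes Chrome's Web Speech API
// and a hypothetical /sign/:word endpoint that returns an ASL video URL).
const recognition = new webkitSpeechRecognition();
recognition.continuous = true;       // keep listening during a conversation
recognition.interimResults = false;  // only react to finalized phrases

recognition.onresult = async (event) => {
  const phrase = event.results[event.results.length - 1][0].transcript;

  // Fetch and show the ASL video for each recognized word
  // (a real app would queue the clips and play them back to back).
  for (const word of phrase.trim().toLowerCase().split(/\s+/)) {
    const res = await fetch(`/sign/${encodeURIComponent(word)}`);
    if (!res.ok) continue; // skip words without a scraped video
    const { videoUrl } = await res.json();
    const player = document.querySelector('#asl-player');
    player.src = videoUrl;
    player.play();
  }
};

recognition.start();
```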

How we built it

We built our server using Node.js and Express, and we webscraped sign language videos using Python's Beautiful Soup. We also used EasyRTC to enable the video chat.
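
As a rough sketch of how the server pieces fit together (route, script, and port names here are illustrative assumptions, and the EasyRTC wiring is omitted), the Express app exposes a dictionary endpoint that shells out to the Python scraper:

```javascript
// server.js - a minimal sketch of our setup, not the exact project code.
const express = require('express');
const { spawn } = require('child_process');

const app = express();

// Run the Beautiful Soup scraper as a subprocess and resolve with its stdout
// (the URL of the ASL video for the requested word).
function lookupSign(word) {
  return new Promise((resolve, reject) => {
    const py = spawn('python', ['scrape_sign.py', word]);
    let output = '';
    py.stdout.on('data', (chunk) => { output += chunk; });
    py.on('error', reject);
    py.on('close', (code) => {
      if (code === 0) resolve(output.trim());
      else reject(new Error(`scraper exited with code ${code}`));
    });
  });
}

// Dictionary endpoint: look up the ASL video for a word.
app.get('/sign/:word', async (req, res) => {
  try {
    const videoUrl = await lookupSign(req.params.word.toLowerCase());
    res.json({ videoUrl });
  } catch (err) {
    res.status(404).json({ error: 'no sign found' });
  }
});

app.listen(3000, () => console.log('AllCommunication server listening on 3000'));
```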

Challenges we ran into

We had trouble integrating EasyRTC into our application - it took us time to get the video chat working. We also had trouble integrating Node.js and Python, because the Python subprocess's return value was not available in the global scope where we tried to read it. Eventually we found a solution and were able to integrate the webscraping into our project.
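
For illustration, a simplified reproduction of the subprocess issue: the scraper's output arrives in an asynchronous callback, so a variable assigned inside that callback is still undefined if it is read right after spawning. The usual fix is to consume the output inside the callback or wrap the spawn in a Promise, as in the server sketch above.

```javascript
const { spawn } = require('child_process');

// What we tried first (simplified): stash the scraper's output in an outer
// variable and read it immediately afterwards.
let translation;

const py = spawn('python', ['scrape_sign.py', 'hello']); // script name is illustrative
py.stdout.on('data', (chunk) => {
  translation = chunk.toString(); // fires later, once the event loop delivers the data
});

console.log(translation); // undefined - the 'data' callback has not run yet
```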

Accomplishments that we're proud of

Integrating EasyRTC was one of the most difficult parts of the project, so we're definitely proud of that. We're also glad that we got the Python and bash subprocesses working with our Node server.

What we learned

We learned a lot about JavaScript, web development, and backend development.

What's next for AllCommunication

In the future, we want to focus on using motion-detection software to detect a hard-of-hearing person's signing movements and convert them into text.
