Due to the COVID-19 pandemic, nearly everyone, especially working adults, has been forced into virtual settings through conferencing platforms such as Zoom, Skype, and Webex. Unfortunately, people who are hearing impaired face a much larger barrier when communicating on these platforms.
Statistics show that a hearing impairment in an adult lowers household income by an average of about $12,000. Hearing aids can help, but in 2016 alone 3.65 million hearing aids were thrown out, as many people simply do not want to wear them.
Based on the information we gathered, we decided to create a web application that gives people who have difficulty hearing a trouble-free way to communicate.
What it does
bridge is a video calling web application that converts ASL to speech and speech to text, allowing for trouble-free communication between people who are hearing impaired and others.
bridge enables smooth communication through the use of modern technology. bridge reads ASL hand gestures through the camera; our web application then recognizes these gestures using machine learning and a neural network to process the camera feed. Finally, the application converts the recognized ASL gestures to text and then to speech.
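At a high level, the gesture-to-speech flow described above can be sketched as follows. This is a minimal illustration only: `classify_frame` and `speak` are hypothetical stand-ins for the real recognition model and text-to-speech step, not actual project code.

```python
# Minimal sketch of the bridge pipeline: camera frames -> ASL letters -> text -> speech.
# classify_frame and speak are hypothetical stand-ins, not the actual project code.
def gestures_to_speech(frames, classify_frame, speak):
    """Turn a stream of camera frames into spoken text."""
    letters = []
    for frame in frames:
        letter = classify_frame(frame)  # model predicts one ASL letter, or None
        if letter is not None:
            letters.append(letter)
    text = "".join(letters)
    speak(text)  # hand the recognized text to a text-to-speech engine
    return text
```

In the real application, the classification step is a trained neural network and the speech step is a text-to-speech library, but the overall data flow is the same.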
How we built it
For the speech-to-ASL and ASL-to-speech features, we started from open-source Python code (Signum) that already included a basic ASL recognition scanner. We modified the code by removing its second- and third-guess feature, and extended it with multi-user video calling and closed captions. We used the gTTS (Google Text-to-Speech) library to convert text into spoken words, and Keras, a machine-learning library, to run a neural network trained on a database of ASL gesture images to recognize the ASL alphabet.
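As an illustration of the classifier side, a Keras model for 26-letter ASL alphabet recognition might look roughly like the following. This is a hypothetical small CNN, not Signum's actual architecture; in practice a pretrained network is loaded rather than built and trained from scratch.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical small CNN: 64x64 grayscale hand images -> one of 26 ASL letters.
# Layer sizes are illustrative; the real pretrained model may differ.
model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(26, activation="softmax"),  # one probability per letter A-Z
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

After training (or loading saved weights), calling `model.predict` on a preprocessed camera frame yields a probability for each letter, and the argmax gives the recognized character; gTTS can then turn the accumulated text into audio, e.g. `gTTS(text).save("out.mp3")`.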
The website was coded in React Native, and we used Figma to create a prototype and UI design of how we envisioned bridge looking.
Challenges we ran into
One of our main challenges was beta testing. Because the target market of our app is primarily people with hearing impairments, we hoped to showcase and demo bridge to them and gather feedback on how we could improve our product and make it more accessible. However, because of the project's short time span and COVID-19 concerns, we were unable to have people with hearing impairments test our idea in person and provide feedback.
Another big obstacle our team faced was the code itself. We found source code on GitHub that recognized ASL and deciphered individual letters, but when we ran it we hit numerous errors that kept it from identifying ASL letters accurately. Our team had to spend a lot of time first understanding the code, then identifying the bugs, and then fixing them. We relied heavily on online resources like Stack Overflow to diagnose and fix the errors, but this process took a large amount of time.
Accomplishments that we're proud of
One of our major accomplishments for this project is coding a fully functional website to complement our idea. We built it with React Native even though most of our team members had no prior experience with the framework. Our team learned a lot about using React Native, we’re proud of the website we made with it, and we’re looking forward to using it in future projects.
Another accomplishment was the prototype that we made on Figma. Before hACCESS, no one on our team had used Figma to build website UI designs, so we were all quite new to Figma and all of its various features. Along the way, we learned a lot about how to use Figma and create UI designs that are both informative and aesthetically pleasing, and this experience exposed us to a new prototyping platform that we will surely use in the future!
We’re also proud that we were able to conduct a survey and receive 50 responses, most of which validated the problem we are trying to solve.
What we learned
Throughout this process, our team learned how many people face widely different problems relating to accessibility, and how often these problems are ignored by large companies and corporations. The experience opened our eyes to the fact that there is still a lot of work to be done to help people with disabilities and impairments access specific information and resources, and our extensive research on this topic has inspired our team to continue working on projects that improve accessibility.
This project also helped our team learn a lot about using Figma and other prototyping platforms, as well as coding websites with React Native. Our team was not very experienced with these tools, but throughout this project we learned a great deal about making use of the various features of both Figma and React Native.
What's next for bridge
In the next couple of weeks, we want to continue developing bridge so that it recognizes and interprets ASL accurately and efficiently. Currently, the code only recognizes letters, but we want to extend it to recognize ASL signs for entire words, as this feature will likely be much more useful. We also want to transition from a website to a browser extension so that bridge is easy to use.
We also want to conduct a more thorough market analysis and hope to speak to at least 50 people with hearing impairments so that we can accurately gauge which features we should include in bridge. Once quarantine restrictions begin to ease, we would like to formally conduct beta testing to analyze how we can make bridge more accessible and easy to use, especially for those with hearing impairments.