We wanted to translate English speech into ASL by combining machine learning and speech recognition, helping hearing people better communicate with individuals who are deaf or hard of hearing, or who use ASL for any other reason. Since we were at LingHacks, we wanted to incorporate linguistics into our project, which led us to this idea.
What it does
SpeechToASL uses speech recognition to transcribe spoken English, then maps the transcript (or typed text) to a sequence of ASL images.
How we built it
We built the backend in Python, using the speech_recognition, PyAudio, and cv2 (OpenCV) libraries. The ASL translations are images taken from Lifeprint's American Sign Language guide and Boston University's dataset of signed words.
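The pipeline described above can be sketched roughly as follows. This is an illustration, not our exact hackathon code: the asl_images/ folder layout, the file-naming scheme, and the function names are assumptions made for the example.

```python
# Sketch of the SpeechToASL pipeline: record speech, transcribe it,
# then flash the ASL image for each recognized word.
# Assumes a hypothetical "asl_images/" folder with one image per word,
# e.g. "asl_images/hello.png".
import os


def words_to_image_paths(text: str, image_dir: str = "asl_images"):
    """Map a transcript to the image file expected for each word."""
    return [os.path.join(image_dir, f"{w}.png") for w in text.lower().split()]


def speak_to_asl(image_dir: str = "asl_images") -> None:
    """Record one phrase from the microphone, transcribe it, and display
    each word's ASL image in turn."""
    # Heavy dependencies are imported here so the helper above can be
    # used and tested without audio or vision packages installed.
    import cv2
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:  # PyAudio supplies the mic backend
        audio = recognizer.listen(source)
    # recognize_google calls the free Google Web Speech API that the
    # speech_recognition library wraps -- no local model to train.
    text = recognizer.recognize_google(audio)

    for path in words_to_image_paths(text, image_dir):
        if os.path.exists(path):  # skip words missing from our dictionary
            cv2.imshow(os.path.basename(path), cv2.imread(path))
            cv2.waitKey(1000)     # show each sign for about one second
    cv2.destroyAllWindows()
```

A real version would also strip punctuation before the dictionary lookup and fall back to fingerspelling images for out-of-vocabulary words.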
Challenges we ran into
Our team consists of relatively or completely inexperienced programmers, so we had to do a lot of research to write even the simplest code or make things work. Two of our four team members had never attended a hackathon before this one. We also struggled to get speech recognition working, even though we used an already-trained speech recognition service. Building our website took a long time and a lot of unnecessary effort; much of the code we wrote never made it into the final site. It was even difficult to get a domain, because we initially had no way to pay for one.
Accomplishments that we're proud of
We got as far as we did with the knowledge we had, building something with our own hands that we never thought possible.
What we learned
As a team, we improved our basic understanding of Python, furthered our knowledge of machine learning, and learned how to implement speech recognition. We also learned how to use HTML and CSS to build a working website. Of course, we learned the value of teamwork too, as no project on this scale could easily be completed by a single person.
What's next for SpeechToASL
We hope to gather more data to improve the speech recognition and to expand our ASL dictionary. We also want to translate in the other direction, from ASL to speech, by using computer vision to recognize hand positions and movements.