We were inspired by several pieces of media showcasing the communication barriers that individuals with hearing impairment face. We sought a way to reduce these disparities and arrived at a translation app, a concept that had not been implemented smoothly before.
What it does
The app translates sign language into text and speech, making communication faster. Its most useful feature is live video translation: the user simply points the camera at a person signing in American Sign Language, and the signs are translated into speech and text in real time. The other person can respond vocally, and a person with impaired hearing can use the app to receive a text and sign language translation of the reply. This process also helps users learn sign language, as the app lets people pick up common signed phrases and words.
How we built it
This app was built in Xcode and written in Swift. Many of its key features rely on libraries installed through CocoaPods.
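As a rough illustration of the CocoaPods setup described above, a minimal Podfile might look like the following. The target name and iOS version here are assumptions, and the pod shown is Algolia's VoiceOverlay library (published as `InstantSearchVoiceOverlay`), which is mentioned later in this write-up:

```ruby
# Podfile — a minimal sketch, assuming an app target named 'silentVoice'
platform :ios, '13.0'
use_frameworks!

target 'silentVoice' do
  # Speech-to-text overlay UI from Algolia, used for the voice side of translation
  pod 'InstantSearchVoiceOverlay'
end
```

Running `pod install` in the project directory then generates an `.xcworkspace` that bundles the app with these dependencies.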
Challenges we ran into
As Xcode isn't built for collaboration, parts of the app had to be developed on separate laptops. When we tried to merge our files, the project broke, so we had to reprogram everything overnight. We also ran into many bugs and errors within Xcode, but we are proud to have dealt with them to the best of our abilities.
Accomplishments that we're proud of
We are proud of the app's live translation capability and the overall design of the interface, which is calming and welcoming to the user. We also learned a lot about Xcode and hope to carry this knowledge into the future.
What we learned
We learned how to use libraries when implementing the speech translator, and how to deal with errors throughout the process. We now understand more about shell scripting in the terminal, connecting segues, and coding in Swift than we did before this hackathon. We also learned how to use several new libraries, such as VoiceOverlay (installed via CocoaPods) and the camera library, which led to many issues we had to solve and mistakes we had to learn from.
What's next for silentVoice
We want to add more languages, both signed and spoken, to make our app more accessible. We would also like to implement ASL lessons, so more people are able to communicate with others.