Inspiration
There are over 480 million deaf or hard-of-hearing people as of 2021, and the WHO (World Health Organisation) reports that roughly 1 in 3 people over the age of 65 suffers some degree of hearing loss. Yet, in a globalized world, we have not seen an effective solution for fluent communication between deaf and hearing people. Just as translators have united people from all over the globe, we saw the need to build a bridge between the deaf community and the rest of the world for more convenient and efficient communication.
What it does
HGR is an application that lets its user sign in front of a camera interface; the app detects the sign language gestures and translates them into a word or sentence.
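One way to turn per-frame letter predictions into a word is to commit a letter only after it has been detected over several consecutive frames, filtering out transient misdetections. This is a minimal sketch of that idea, not the app's actual implementation (the function name and threshold are our own):

```python
def letters_to_word(frame_predictions, min_hold=3):
    """Collapse a stream of per-frame letter predictions into a word.

    A letter is appended only when it has been predicted for
    `min_hold` consecutive frames, so one-frame glitches are ignored.
    Note: consecutive identical runs merge, so doubled letters
    ("LL") need a pause between signs.
    """
    word = []
    current, count = None, 0
    for letter in frame_predictions:
        count = count + 1 if letter == current else 1
        current = letter
        if count == min_hold:  # commit exactly once per sustained run
            word.append(letter)
    return "".join(word)
```

For example, a noisy stream like `["H", "H", "H", "X", "E", "E", "E"]` yields `"HE"`: the single-frame `"X"` never reaches the hold threshold.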
How we built it
We started off by identifying the lack of a medium for translating sign language into English. Our goal is to help the deaf population communicate, but also to help those learning sign language for the first time, whatever their reasons may be. To aid this process, we built the HGR application. We used the Google Cloud AutoML API to train TensorFlow models to detect the ASL alphabet letters, and used Python with the Kivy framework to build the mobile app interface.
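AutoML Vision is trained from a CSV manifest that maps each image's Cloud Storage URI to its label. A small sketch of building such a manifest (the bucket name and file layout here are hypothetical, not our actual dataset):

```python
import csv
import io

# Hypothetical bucket holding the labeled hand-gesture images.
BUCKET = "gs://hgr-training-data"

def make_automl_manifest(samples):
    """Build the CSV AutoML Vision expects: one `gcs_uri,label` row
    per training image, where the label is the ASL letter shown."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for filename, label in samples:
        writer.writerow([f"{BUCKET}/{filename}", label])
    return buf.getvalue()
```

The resulting CSV is uploaded to Cloud Storage and referenced when creating the AutoML dataset.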
Challenges we ran into
As we had no prior experience with mobile app design or the Google Cloud API, learning the different aspects of app development and putting them to work was one of our big challenges. Without an IDE like Xcode suited to our Python-based stack, it was very difficult to bring our interface ideas into reality. Time constraints also limited how much training data we could upload to Google Cloud Storage and how long we could train our TensorFlow model; furthermore, because lexical gestures are dynamic, we started with the static alphabet gestures to get our first results.
Accomplishments that we're proud of
On the programming side, we are proud to have trained the AI to recognize fingerspelled (orthographic) sign language, and to have developed a mobile application from scratch for the first time.
What we learned
We learned a lot about mobile app development and deep-learning networks, specifically TensorFlow.
What's next for Tell me!
We believe we've laid solid groundwork for future innovation. We would love to create models that can detect lexical sign language from different regions and operate at a success rate similar to Google Translate's for spoken languages. To detect lexical sign languages, we would need to extend our classification model so it can be trained on short (1-2 second) clips of gestures and accept a video feed instead of still images.
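Moving from still images to clips means the model would consume a fixed number of frames per gesture. A sketch of evenly sampling frame indices from a 1-2 second clip (a hypothetical helper, not part of the current app; frame count and rate are assumptions):

```python
def sample_frame_indices(duration_s, fps, n_frames=16):
    """Pick `n_frames` evenly spaced frame indices from a clip of
    `duration_s` seconds recorded at `fps` frames per second, so
    every gesture clip feeds the model the same-sized input."""
    total = int(duration_s * fps)
    if total <= n_frames:
        # Clip is shorter than the budget: use every frame we have.
        return list(range(total))
    step = total / n_frames
    return [int(i * step) for i in range(n_frames)]
```

For a 2-second clip at 30 fps with `n_frames=8`, this selects indices spread across all 60 frames rather than just the first 8.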
Built With
- automl
- google-cloud
- kivy
- python
