Inspiration

American Sign Language (ASL) is a language used by millions worldwide. Today, there are more than 70 million Deaf individuals around the world, yet more than 70% of families do not sign with their deaf children.

This lifetime of language deprivation can severely impact the lives of Deaf and Hard-of-Hearing individuals everywhere, contributing to cognitive delays as well as significant literacy deficits in both ASL and written English.

At its root, this disparity in literacy stems from a lack of access to ASL education and supplemental tools. Our team seeks to remedy this widespread issue by leveraging the power of Artificial Intelligence (AI) through an effective, inclusive, and accessible platform.

What it does

HandHandRevolution serves as a platform to promote diversity and inclusion in AI tools that interpret ASL signs. While Natural Language Processing (NLP) is a tremendously popular discipline, very few resources exist for NLP on non-written languages such as ASL. Among the few that do exist, severely limited databases can easily produce biased AI ecosystems in which a single person (and therefore a single skin tone, signing style, lighting condition, etc.) provides all of the data.

HandHandRevolution is a first-of-its-kind open-source opportunity that allows contributors, or “citizen scientists,” from around the world to participate in our initiative to crowdsource diverse ASL sign data. This data is channeled from the frontend into the backend, which runs on IBM Watson Studio, where we leverage IBM Watson’s Visual Recognition tools to classify images.
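As a concrete illustration, here is a minimal Node.js sketch of that classification step using the `ibm-watson` SDK. The API key, custom classifier ID, and file path are placeholders for this writeup, not our exact configuration:

```javascript
// Minimal sketch: classify a crowdsourced ASL sign image with
// IBM Watson Visual Recognition (`ibm-watson` npm package).
// The API key, classifier ID, and image path are placeholders.
const fs = require('fs');
const VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3');
const { IamAuthenticator } = require('ibm-watson/auth');

const visualRecognition = new VisualRecognitionV3({
  version: '2018-03-19',
  authenticator: new IamAuthenticator({ apikey: '<YOUR_API_KEY>' }),
});

async function classifySign(imagePath) {
  const { result } = await visualRecognition.classify({
    imagesFile: fs.createReadStream(imagePath), // user-submitted image
    classifierIds: ['asl_signs_classifier'],    // custom classifier ID (placeholder)
    threshold: 0.6,                             // drop low-confidence classes
  });
  // Each image returns a ranked list of { class, score } pairs.
  return result.images[0].classifiers[0].classes;
}

classifySign('./uploads/sign.jpg')
  .then((classes) => console.log(classes))
  .catch((err) => console.error('classification failed:', err));
```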

Further implementation includes gamifying the user experience and partnering with universities that offer specialized ASL degree/certification programs to improve this work, providing a more viable path for scaling into a larger project in the future.

How we built it

- Leveraged IBM Watson Studio to implement the Machine Learning (ML) algorithms needed to conduct image classification for different ASL signs.
- Created a web-based user interface that captures images and processes them in the IBM Watson Studio backend (a capture sketch follows this list).
- Researched extensively to understand the existing market, the viability of further features we would like to implement, financial projections, and scope of growth over time.
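For context, here is a simplified sketch of the browser-side capture flow; the `/classify` endpoint name is illustrative, not our exact route:

```javascript
// Sketch of the browser-side capture: grab a webcam frame and send it
// to the backend, which forwards it to Watson Studio for classification.
// The `/classify` endpoint is a placeholder for this writeup.
async function captureAndClassify(videoElement) {
  // Draw the current video frame onto an offscreen canvas.
  const canvas = document.createElement('canvas');
  canvas.width = videoElement.videoWidth;
  canvas.height = videoElement.videoHeight;
  canvas.getContext('2d').drawImage(videoElement, 0, 0);

  // Encode the frame as a JPEG blob and upload it as form data.
  const blob = await new Promise((resolve) =>
    canvas.toBlob(resolve, 'image/jpeg')
  );
  const form = new FormData();
  form.append('image', blob, 'sign.jpg');

  const response = await fetch('/classify', { method: 'POST', body: form });
  return response.json(); // e.g. [{ class: 'A', score: 0.91 }, ...]
}

// Usage: attach the webcam stream, then classify on demand.
navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  const video = document.querySelector('video');
  video.srcObject = stream;
  video.play();
});
```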

Challenges we ran into

IBM Watson Studio presented significant reliability challenges and created many logistical barriers throughout the weekend.

However, we collaborated with both IBM and non-IBM mentors to troubleshoot these issues and arrive at working solutions.

Accomplishments that we are proud of

- Gaining a profound understanding of the complexity, capabilities, and implementation of IBM Watson Studio.
- Creating tech for social good, in an accessible format that allows ASL users from around the world to support open data and engage in “citizen science.”

We are incredibly proud of our team and our passion for creating AI that has the potential to change the lives of ASL users everywhere. We are just as thankful for the sponsors, mentors, and volunteers who made this opportunity possible.

What we learned

- ML processes
- Frontend-backend interaction
- Node.js
- Image recognition/classification
- Teamwork makes the dream work!

What's next for HandHandRevolution

- Automating the pipeline from user media input to IBM Watson Studio processing
- A larger dataset: incorporating the entire alphabet, numbers, and phrases
- Video input
- More diverse data subjects
- Incorporating different sign languages

Built With

IBM Watson Studio (Visual Recognition), Node.js