Inspiration

430 million: that is the number of people worldwide who are currently deaf or living with some form of hearing impairment. Deaf and hard of hearing (DHH) individuals face significant challenges in accessing education and communication, leading to disparities in educational outcomes and limited opportunities. Moreover, the current landscape of educational resources caters largely to hearing individuals, erecting barriers that keep DHH students from a quality education. DHH individuals often rely on sign language as their primary mode of communication, and in educational settings they frequently depend on sign language interpreters, whose availability is not guaranteed in every context. This leaves many DHH students at a disadvantage and underscores the urgent need for innovative solutions that enhance accessibility and inclusivity in education for this community.

What it does

To address the pressing problem of improving the educational experience of the DHH community, our project fuses artificial intelligence and computer vision to transform how sign languages are taught. Our platform not only comprehends students' distinct language but also educates, facilitates practice, and assesses their progress, all while providing immediate feedback. This makes the learning process more interactive and dynamic. The platform is built on a simple principle: LEARN, PRACTICE, AND TEST.

Through this project, we present an exciting prospect for the future of education: giving deaf and hard of hearing students the resources they need to thrive academically and beyond.

How we built it

The whole project is divided into three sections:

  1. Learn
  2. Practice
  3. Test

The 'Learn' and 'Test' sections were incorporated directly into the web app using HTML, CSS, and JavaScript. In the 'Learn' section, students learn from uploaded resources. In the 'Test' section, a quiz built with the same stack lets students test their skills, with scores stored in the database. For the 'Practice' section, an AI model was developed in the following way:

  • A substantial dataset of sign language gestures was required. We collected this dataset by capturing video sequences of various sign language gestures using a camera.

  • The collected video data was preprocessed to extract relevant information. This involved using the MediaPipe library to detect and track hand landmarks in each frame of the video.
  • We used the Keras library to build a deep learning model for sign language detection. This model consisted of LSTM layers, which are well-suited for sequence data like sign language gestures, followed by Dense layers for classification.
    • The Adam optimization algorithm was used to train the model.
    • The model was trained using the collected and augmented dataset. It went through multiple epochs to improve its ability to classify sign language gestures correctly.
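The pipeline described above can be sketched as follows. This is a minimal illustration, not the project's exact code: the sequence length, layer sizes, and gesture labels are placeholder assumptions, and `landmarks_to_vector` assumes the result object returned by MediaPipe Hands.

```python
# Sketch of the "Practice" pipeline: MediaPipe hand landmarks per frame are
# flattened into a fixed-length feature vector, and a small Keras network of
# LSTM and Dense layers (trained with Adam) classifies a sequence of frames.
# All sizes and labels below are illustrative assumptions.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQUENCE_LENGTH = 30         # frames per gesture clip (assumed)
FEATURES = 2 * 21 * 3        # two hands x 21 landmarks x (x, y, z) = 126
GESTURES = ["hello", "thanks", "yes"]  # placeholder label set

def landmarks_to_vector(results):
    """Flatten one frame's MediaPipe Hands output into a feature vector.

    `results` is the object returned by mp.solutions.hands.Hands.process();
    missing hands are zero-padded so every frame has the same length.
    """
    vec = np.zeros(FEATURES, dtype=np.float32)
    if results is not None and results.multi_hand_landmarks:
        for h, hand in enumerate(results.multi_hand_landmarks[:2]):
            coords = [[lm.x, lm.y, lm.z] for lm in hand.landmark]
            vec[h * 63:(h + 1) * 63] = np.array(coords).flatten()
    return vec

def build_model():
    """LSTM + Dense classifier over landmark sequences, compiled with Adam."""
    model = Sequential([
        LSTM(64, return_sequences=True, activation="relu",
             input_shape=(SEQUENCE_LENGTH, FEATURES)),
        LSTM(128, return_sequences=False, activation="relu"),
        Dense(64, activation="relu"),
        Dense(len(GESTURES), activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
# Training would then run for multiple epochs over the collected clips, e.g.:
# model.fit(X_train, y_train, epochs=200)
```

In practice, each gesture clip is stacked into an array of shape `(SEQUENCE_LENGTH, FEATURES)` before being fed to the model, and the softmax output gives one probability per known gesture.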

Then, all three sections were brought together with HTML, CSS, and JavaScript into a user-friendly interface on a single platform.

Challenges we ran into

During the development of the AI system for sign language detection, we encountered several challenges. Data collection proved demanding: ensuring the quality and diversity of the dataset required considerable effort. Making the system user-friendly for both students and educators added further complexity. To overcome these challenges, we adopted a multidisciplinary approach to create a comprehensive and effective solution.

Accomplishments that we're proud of

We are proud of the successful development and implementation of 'Signify - ASL learning platform'. This system has the potential to significantly enhance the learning experience for deaf and hard of hearing students. It not only accurately recognizes sign language gestures but also provides real-time feedback, making it a valuable tool for both students and educators. Above all, we are proud to contribute to inclusivity and accessibility in education through this innovative technology.

What we learned

We realized the importance of high-quality and diverse data for training our AI system. Additionally, we gained expertise in deep learning techniques like LSTM networks for processing sequential data. User-friendly design was also a key lesson, as we aimed to create an accessible interface for students and educators.

What's next for Signify?

  1. As we move forward, we aim to expand the system's capacity to accommodate additional signs, dialects, and even different sign languages.

  2. We also aim to integrate real-time feedback mechanisms, enhancing the learning experience and promoting continuous improvement.

  3. Moreover, cross-cultural adaptation and collaboration with educators will be a key focus, allowing us to fine-tune our system for various educational settings and enabling teachers to harness the full potential of sign language instruction.

  4. We also aim to bring Signify to a mobile application.
