SDG Goals: 4 (Quality Education) & 10 (Reduced Inequalities)

Inspiration

In this world, learning is like a beautiful garden where every kind of flower contributes to its beauty – some are colorful, some are fragrant, and others are delicate and graceful, together making the whole garden beautiful. Just as flowers add to the garden's beauty, every person's learning journey adds something special to our world, whether it is a new idea or a new way of seeing things. However, for people who cannot see, hear, or speak well, this garden is not as welcoming. Many people with disabilities never attend school or drop out early, and they lack access to quality education. The educational landscape does not provide the support and inclusivity needed to reduce these inequalities. This lack of accommodation limits their ability to thrive in educational settings, prompting unsettling questions about the fairness of such treatment.

What it does

This solution addresses SDG Goals 4 (Quality Education) and 10 (Reduced Inequalities). It aims to meet diverse learning needs through a dynamic online learning platform offering quality educational courses, test sets, quizzes, and job opportunities tailored to individual requirements. It integrates sign language comprehension to enhance communication skills for hearing-impaired learners, ensuring inclusivity. For visually impaired students, accessibility is improved through seamless PDF narration, which provides audio versions of essential documents. Communication is further supported by speech-to-text and text-to-speech conversion, empowering visually impaired individuals to engage in textual interactions and take notes effortlessly. The platform prioritizes accessibility and inclusivity, fostering an environment where all learners can thrive and succeed, ultimately reducing inequalities in education.
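The PDF-narration feature described above can be sketched roughly as follows. This is an illustrative sketch, not the project's actual code: it assumes PyPDF2 for text extraction, uses pyttsx3 as an offline stand-in for the Azure speech service the project relies on, and the `chunk_sentences` helper is our own addition to make the narration pause naturally between sentences.

```python
import re

def chunk_sentences(text, max_chars=200):
    """Split text into short chunks so the TTS engine pauses naturally.
    (Hypothetical helper, not part of the original project.)"""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > max_chars:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks

def narrate_pdf(pdf_path):
    """Read every page of pdf_path aloud.
    Requires PyPDF2 and pyttsx3 (stand-in for the Azure TTS service)."""
    from PyPDF2 import PdfReader
    import pyttsx3
    engine = pyttsx3.init()
    for page in PdfReader(pdf_path).pages:
        # extract_text() may return None for image-only pages
        for chunk in chunk_sentences(page.extract_text() or ""):
            engine.say(chunk)
    engine.runAndWait()

# Demonstrate the chunking step on a short sample.
chunks = chunk_sentences("First sentence. Second one! A third?", max_chars=20)
```

Chunking before synthesis also keeps each request to a cloud TTS service small, which matters if the hosted Azure voice is used instead of the offline engine.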

How we built it

The frontend was built using HTML, CSS, and JavaScript, and is integrated with an SQL database through PHP. For sign language recognition, we developed an AI model, beginning with a large dataset of sign language gestures captured as video sequences with a camera. We then used the MediaPipe library to preprocess the video data, extracting hand landmarks from each frame. Using Keras, we built a deep learning model with LSTM and Dense layers, optimized it with the Adam algorithm, and trained it on the collected and augmented dataset over multiple epochs to improve its classification accuracy for sign language gestures. The PDF-narration and speech features were built with PyPDF2, Azure speech services, and other Python libraries.
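A minimal sketch of the gesture classifier described above, under stated assumptions: 30-frame sequences of MediaPipe hand landmarks (21 points × 3 coordinates × 2 hands = 126 features per frame) and 5 example gesture classes. The layer sizes, frame count, and class count here are illustrative choices of ours, not the project's actual hyperparameters.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

# Assumed dimensions: 30 frames/sequence, 126 landmark features/frame,
# 5 gesture classes -- all placeholders for the real dataset's values.
NUM_FRAMES, NUM_FEATURES, NUM_CLASSES = 30, 126, 5

model = Sequential([
    Input(shape=(NUM_FRAMES, NUM_FEATURES)),
    LSTM(64, return_sequences=True),           # per-frame temporal features
    LSTM(128),                                 # summary of the whole sequence
    Dense(64, activation="relu"),
    Dense(NUM_CLASSES, activation="softmax"),  # one probability per sign
])

# Adam optimizer, as in the write-up; categorical cross-entropy for
# one-hot gesture labels.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# A dummy landmark sequence just to confirm the shapes line up;
# training would call model.fit on the preprocessed dataset instead.
probs = model.predict(np.zeros((1, NUM_FRAMES, NUM_FEATURES)), verbose=0)
```

Feeding landmark coordinates rather than raw pixels keeps the input small and lets the LSTM focus on hand motion over time, which is why the MediaPipe preprocessing step matters.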

Challenges we ran into

Ensuring the quality and diversity of the dataset required significant effort, and the need to make the system user-friendly for both students and educators added complexity to the development process.

Accomplishments that we're proud of

We are proud of the successful development and implementation of this platform. This system has the potential to significantly enhance the learning experience for specially abled students.

What we learned

We realized the importance of high-quality and diverse data for training our sign and speech recognition. Additionally, we gained expertise in deep learning techniques like LSTM networks for processing sequential data. User-friendly design was also a key lesson.

What's next for Signify

1. As we move forward, we aim to expand the system's capacity to accommodate additional signs, dialects, and even different sign languages.

2. We also aim to integrate real-time feedback mechanisms, enhancing the learning experience and promoting continuous improvement.

3. Moreover, cross-cultural adaptation and collaboration with educators will be a key focus, allowing us to fine-tune our system for various educational settings and enabling teachers to harness the full potential of this platform.

Updates

1. How would we roll this project out to the world and measure its success?

As a first step, we aim to collaborate with schools, teachers, NGOs, and others in the field of special education. This means working closely with educational institutions to deploy the project in classroom settings, allowing students to integrate it into their learning experiences. For instance, deaf and mute students could use the platform's educational resources to learn sign alphabets and assess their proficiency through sign language recognition, while visually impaired students could transcribe important notes using the speech-to-text functionality. The platform also lets students enjoy books by converting PDFs into audio. This comprehensive approach ensures that specially abled students have access to diverse learning opportunities without limitations. Furthermore, we will provide training sessions for teachers so they can use the platform effectively in their teaching practice. Through these partnerships with educational institutions, we aim to reach a wide range of students who could benefit from the platform's features.

By engaging in these collaborative efforts, we also aim to gain valuable insights into the effectiveness of our platform and identify areas for improvement based on user feedback. Ultimately, our goal is to promote inclusivity and fairness in education, regardless of students' abilities.
