Track: ML + Accessibility

Inspiration

In this world, learning is like a beautiful garden where each type of flower contributes to its beauty: some are colorful, some are fragrant, and others are delicate and graceful, all making the whole garden beautiful. Just as flowers add to the garden's beauty, every person's learning journey adds something special to our world, whether a new idea or a new way of seeing things. For people who cannot see, hear, or speak well, however, this garden is not as welcoming. Many people with disabilities never attend school or have dropped out, because the educational landscape does not provide the support and inclusivity these students need. This lack of accommodation limits their ability to thrive in educational settings and raises unsettling questions about the fairness of such treatment.

What it does

Abhyuday addresses diverse learning needs by offering quality educational courses, test sets, quizzes, and job opportunities tailored to individual requirements. It integrates sign language comprehension to enhance communication for hearing-impaired learners, and it improves accessibility for visually impaired students through seamless PDF narration, providing audio versions of essential documents. Communication is further supported by speech-to-text and text-to-speech conversion, empowering visually impaired users to engage in textual interactions and take notes effortlessly. The platform prioritizes accessibility and inclusivity, fostering an environment where all learners can thrive and succeed.

How we built it

The frontend was built with HTML, CSS, and JavaScript, and integrated with a SQL database using PHP. For sign language recognition, we developed an AI detection model. We began by gathering a large dataset of sign language gestures captured as video sequences with a camera, then used the MediaPipe library to preprocess the video, extracting hand landmarks from each frame. Using Keras, we built a deep learning model with LSTM and Dense layers, optimized it with the Adam algorithm, and trained it on the collected and augmented dataset over multiple epochs to improve its classification accuracy for sign language gestures. Similarly, PDF narration and the speech features were built using PyPDF2, Azure services, and other Python libraries.
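The landmark-extraction step above can be sketched as follows. This is a minimal illustration rather than the project's actual code: MediaPipe's hand landmarks each expose `x`, `y`, and `z` attributes, so a lightweight namedtuple stands in for them here, and the 21-landmark count follows MediaPipe's hand model.

```python
from collections import namedtuple
import numpy as np

# Stand-in for a MediaPipe hand landmark, which exposes x, y, z attributes.
Landmark = namedtuple("Landmark", ["x", "y", "z"])

def landmarks_to_features(hand_landmarks, n_landmarks=21):
    """Flatten one frame's hand landmarks into a fixed-length vector
    (21 landmarks x 3 coordinates = 63 features); zeros if no hand
    was detected in the frame."""
    if hand_landmarks is None:
        return np.zeros(n_landmarks * 3)
    return np.array([coord for lm in hand_landmarks
                     for coord in (lm.x, lm.y, lm.z)])

def frames_to_sequence(per_frame_landmarks):
    """Stack per-frame feature vectors into one (n_frames, 63) array,
    the sequence shape the LSTM classifier would consume."""
    return np.stack([landmarks_to_features(f) for f in per_frame_landmarks])
```

In the real pipeline, `hand_landmarks` would come from MediaPipe's hand-tracking solution applied to each video frame; padding empty frames with zeros keeps every sequence the same width.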
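The Keras model described above (stacked LSTM and Dense layers, compiled with the Adam optimizer) might look roughly like this. The sequence length, feature size, layer widths, and class count are illustrative assumptions, not the project's actual hyperparameters:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Assumed shapes: 30 frames per clip, 63 landmark features per frame,
# and a hypothetical 10 gesture classes.
SEQ_LEN, N_FEATURES, N_CLASSES = 30, 63, 10

model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(SEQ_LEN, N_FEATURES)),
    LSTM(128),
    Dense(64, activation="relu"),
    Dense(N_CLASSES, activation="softmax"),  # one probability per gesture
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training would then be a standard fit on stacked landmark sequences
# X of shape (n_clips, SEQ_LEN, N_FEATURES) and one-hot labels y:
# model.fit(X, y, epochs=200, validation_split=0.1)
```

Returning sequences from the first LSTM lets the second LSTM see the full temporal context before the Dense layers collapse it into a gesture classification.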

Challenges we ran into

Ensuring the quality and diversity of the dataset required significant effort, and the need to make the system user-friendly for both students and educators added complexity to the development process.

Accomplishments that we're proud of

We are proud of the successful development and implementation of this platform. This system has the potential to significantly enhance the learning experience for specially abled students.

What we learned

We realized the importance of high-quality and diverse data for training our sign and speech recognition. Additionally, we gained expertise in deep learning techniques like LSTM networks for processing sequential data. User-friendly design was also a key lesson.

What's next for Abhyuday

  1. As we move forward, we aim to expand the system's capacity to accommodate additional signs, dialects, and even different sign languages.

  2. We also aim to integrate real-time feedback mechanisms, enhancing the learning experience and promoting continuous improvement.

  3. Moreover, cross-cultural adaptation and collaboration with educators will be a key focus, allowing us to fine-tune our system for various educational settings and enabling teachers to harness the full potential of this platform.
