Inspiration
Education is a basic human right, yet millions of learners with visual and hearing impairments face daily challenges in accessing knowledge.
Most existing platforms are not designed with accessibility in mind, creating barriers instead of opportunities.
Equilearn was inspired by the belief that learning should be equal for everyone, regardless of ability.
We wanted to create a platform where technology empowers inclusion—leveraging AI, assistive tools, and accessible design to break down barriers and make education truly universal.
What it does
Equilearn is an inclusive learning platform built to support students with visual and hearing impairments.
It provides a set of assistive tools that make online education accessible, engaging, and barrier-free.
Key Features
- Text-to-Speech (TTS): Converts written content into natural-sounding speech for visually impaired learners.
- Speech-to-Text (STT): Generates real-time captions and transcripts for hearing-impaired learners.
- Accessible Video Lessons: Supports subtitles, transcripts, and planned sign language integration.
- Customizable Interface: High-contrast mode, font scaling, and keyboard navigation for ease of use.
- STEM Accessibility: Supports LaTeX rendering so that mathematical content like \( E = mc^2 \) can be made readable and accessible through screen readers.
- Cross-Platform Access: Works on the web, with mobile support planned, to ensure inclusivity everywhere.
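As a rough illustration of the STEM-accessibility idea, a small preprocessor could map a subset of LaTeX to spoken text before handing it to the TTS engine. This is a hypothetical sketch, not Equilearn's actual code; `latex_to_speech` and its symbol table are invented for illustration:

```python
import re

# Hypothetical symbol table: a small subset of LaTeX mapped to spoken words.
SPOKEN = {
    "=": " equals ",
    "+": " plus ",
    r"\infty": " infinity ",
    r"\int": " integral ",
    r"\pi": " pi ",
    r"\sqrt": " square root of ",
}

def latex_to_speech(expr: str) -> str:
    """Convert simple LaTeX such as 'E = mc^2' into screen-reader-friendly text."""
    text = expr
    for tex, spoken in SPOKEN.items():
        text = text.replace(tex, spoken)
    text = re.sub(r"\^\{?2\}?", " squared", text)            # '^2' -> ' squared'
    text = re.sub(r"\^\{?(\w+)\}?", r" to the power \1", text)
    text = text.replace("{", " ").replace("}", " ")          # drop grouping braces
    return re.sub(r"\s+", " ", text).strip()

print(latex_to_speech("E = mc^2"))  # -> E equals mc squared
```

A production version would lean on a full parser or a library's accessibility output rather than string substitution, but the pipeline shape (parse, verbalize, then speak) is the same.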
## How we built it

Frontend: Developed using React.js and Bootstrap, focusing on accessibility-first UI design with high-contrast themes, ARIA roles, and keyboard navigation.
Backend: Powered by Python (Flask), integrated with AI models (TensorFlow & OpenCV) for speech recognition, text-to-speech, and assistive vision tools.
Assistive Features:
- Text-to-Speech (TTS) for visually impaired learners
- Speech-to-Text (STT) for hearing-impaired learners
- Video lessons with captions and planned sign-language translation
- Customizable interfaces for personalized accessibility needs
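The backend wiring can be pictured as a thin Flask layer in front of the models. The sketch below is a minimal illustration with a stubbed recognizer; the `/captions` route and `fake_stt` helper are hypothetical, not the project's actual API:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def fake_stt(audio_id: str) -> str:
    # Stand-in for a real speech-to-text model (TensorFlow in the described stack).
    return f"transcript for {audio_id}"

@app.route("/captions", methods=["POST"])
def captions():
    # Accept an audio reference and return caption text for the player to display.
    payload = request.get_json(force=True)
    return jsonify({"captions": fake_stt(payload["audio_id"])})

if __name__ == "__main__":
    app.run(debug=True)
```

Keeping the model behind a single endpoint like this lets the React frontend stay model-agnostic: swapping in a better recognizer changes nothing on the client side.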
Challenges we ran into
- Ensuring real-time performance of the speech recognition and text-to-speech features
- Designing an interface that is equally accessible to visually and hearing-impaired users
- Handling multilingual accessibility, since assistive tools must support different languages
- Integrating AI models without making the system too heavy for everyday devices
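For the real-time caption challenge, one common tactic is to buffer the streaming transcript into short, fixed-width lines so the display never falls behind the audio. A minimal sketch, where `chunk_captions` is a hypothetical helper rather than Equilearn's implementation:

```python
def chunk_captions(transcript: str, max_chars: int = 32) -> list[str]:
    """Split a transcript into caption lines without breaking words."""
    lines: list[str] = []
    current = ""
    for word in transcript.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= max_chars:
            current = candidate          # word still fits on the current line
        else:
            if current:
                lines.append(current)    # flush the full line to the display
            current = word               # start a new line with the long word
    if current:
        lines.append(current)
    return lines

for line in chunk_captions("education is a basic human right for every learner", 20):
    print(line)
```

Short lines also help hearing-impaired viewers read at a steady pace instead of scanning a long scrolling block.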
Accomplishments that we're proud of
- ✅ Built a working prototype of Equilearn, an inclusive platform designed for both visually and hearing-impaired learners.
- ✅ Successfully integrated Text-to-Speech (TTS) and Speech-to-Text (STT) features, enabling real-time accessibility.
- ✅ Designed an accessible UI/UX that follows WCAG and ARIA standards, ensuring better usability for all learners.
- ✅ Implemented LaTeX rendering for STEM subjects, making complex math like \( \int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2} \) accessible through screen readers.
- ✅ Learned how to combine AI, assistive technologies, and education into one platform that prioritizes inclusion.
- ✅ Overcame challenges in optimizing performance so the platform works smoothly across different devices.
## What we learned

- The importance of universal design principles in software development
- How to integrate AI-powered assistive technologies into real-world applications
- Best practices for accessible UI/UX, such as the WCAG guidelines and ARIA standards
- Collaboration between education and technology to promote inclusivity
What's next for Equilearn
We aim to:
- Expand to mobile platforms for broader accessibility
- Add AI-driven sign language recognition and generation
- Partner with schools, NGOs, and accessibility advocates to reach more learners
- Develop adaptive learning modules that personalize content for different impairments