The AudLearn team was inspired by the fact that hard-of-hearing people are being left behind in employment because of the growing demand for digital skills.
We realized that existing online learning platforms were not making this any easier, especially when it came to video. Most LMSs offer just the video and a text transcript, without considering something many hard-of-hearing people can actually relate to: sign language interpretation.
Further research proved our hypothesis right. This birthed AudLearn.
What it does
AudLearn is an online learning platform built specifically for hard-of-hearing individuals to upskill and prepare for the unending opportunities in this fast-changing world. We want to bridge the employment gap between the hearing and the hard-of-hearing, since digital skills have become one of the major criteria for landing and sustaining jobs.
With AudLearn, hard-of-hearing learners can study digital and tech skills comfortably through whichever means of communication suits them, whether sign language, transcription, or lip reading. We made all of these features available to accommodate different hard-of-hearing individuals, since their modes of communication vary.
How we built it
Currently, we have a prototype that shows how AudLearn works. No code has been written yet; the prototype was designed in Figma.
The designers researched how to design a learning management system and discovered that designs would be needed for three kinds of users: learners, instructors, and administrators. They created wireframes for each of these sections, agreed on a design style, and finished with the high-fidelity designs.
How it works
Instructors will register on AudLearn and upload their courses. However, these courses won't be published immediately. The AudLearn team will take each video and have a sign language interpreter interpret the course. Then the two videos will be uploaded together so students can view them side by side.
There will be an API that highlights the transcribed text as the speaker speaks, so students can follow along.
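Since no code exists yet, here is a minimal sketch of the highlighting logic such an API would need: given timed transcript cues (e.g. parsed from a WebVTT caption file) and the video's current playback time, decide which cue to highlight. The `Cue` shape and function name are our assumptions for illustration, not a real AudLearn API.

```typescript
// A single timed transcript segment, as would come from a WebVTT file.
interface Cue {
  start: number; // cue start time, in seconds
  end: number;   // cue end time, in seconds
  text: string;  // transcript text for this segment
}

// Returns the index of the cue covering playback time `t`,
// or -1 if no cue is active at that moment.
function activeCueIndex(cues: Cue[], t: number): number {
  return cues.findIndex((cue) => t >= cue.start && t < cue.end);
}
```

In a web player this would be driven by the `timeupdate` event, e.g. `video.addEventListener("timeupdate", () => highlight(activeCueIndex(cues, video.currentTime)))`, where `highlight` is whatever UI code styles the active transcript line.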
Challenges we ran into
We ran into the challenge of finding a sign language interpreter to interpret a course video for our prototype. We also struggled to sync the sign language video with the main course video, because sign language interpretation is naturally slower than the original speech, so the interpretation runs longer than the main video.
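One way the sync problem above could be handled in a future build is to slow the main video down so both videos finish together. This is a sketch of that idea only; the function name and the slow-down-the-main-video approach are our assumptions, not a decided design.

```typescript
// Given the two durations (in seconds), compute the playback rate for
// the main course video so it ends at the same time as the longer
// sign-language interpretation video (which plays at normal speed).
function mainVideoRate(mainDuration: number, signDuration: number): number {
  if (mainDuration <= 0 || signDuration <= 0) {
    throw new Error("durations must be positive");
  }
  // e.g. a 10-minute lecture with a 12-minute interpretation plays at
  // 600 / 720 ≈ 0.83x. Never speed the main video up past normal.
  return Math.min(1, mainDuration / signDuration);
}
```

In a browser player, the result would be assigned to the main element's `playbackRate` property, while the interpretation video keeps a rate of 1.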
There was also the big challenge of including hearing individuals. We initially wanted to target both the hearing and the hard-of-hearing, but at some point we understood that trying to serve both would negatively impact the user experience of one group. Hence, our MVP focuses on the hard-of-hearing.
However, we hope to add hearing individuals to our target audience over time. We expect to gain many insights during the testing period of our product.
Lastly, we had a problem playing the course video during prototyping. Videos can't play inside Figma prototypes, so we had no option but to record the course video playing separately. This also affected how the text transcription appears in that recording, though the transcription does appear in the prototype.
Accomplishments that we're proud of
This hackathon project seemed daunting at first, and we are proud to have submitted it even though we joined 15 days before the deadline and started 13 days before it, after losing a few teammates who weren't committed.
The hackathon challenge made us understand how much the hard-of-hearing struggle in a world where products are rarely built with them in mind. That realization is something to be proud of.
Lastly, we are proud of the cooperation among teammates. People often expect disagreement when a group of ladies comes together, but we are grateful to and proud of one another for diligently contributing our skills to make this work.
What we learned
We learned how the hard-of-hearing struggle to make ends meet simply because life hasn't been fair to them. We learned that, as a team, we can do what seems impossible. One last thing we learned is empathy: if we didn't understand what it feels like to be shut out of the job market because of a disability no one asked for, we wouldn't have thought of this solution.
What's next for AudLearn
First, we adopted a lean approach to understand how users feel while using AudLearn; this is one reason we haven't coded it yet. We will test the prototype with different hard-of-hearing individuals and check for any modifications needed before we build and launch it. We want to save time and resources.
There are plans to add mentors to AudLearn to guide learners in their careers and learning process. We will also add a marketplace feature that brings hiring managers to AudLearn, giving these disadvantaged individuals the opportunity to be hired. We plan to prepare a roadmap during the testing period.
Lastly, we plan to widen our target audience by including hearing individuals. User research during the testing period will help us identify remaining gaps in online learning. This will give us insights into what to include in AudLearn to make it worthwhile for hearing individuals to come on board.
The testing period will be an opportunity to find out which features to add to AudLearn to solve this single problem: bridging the employment gap between hearing and hard-of-hearing individuals.
To launch AudLearn, we will hold an online seminar to show the hard-of-hearing why they should continuously upskill, then introduce AudLearn as the solution to their struggle to learn comfortably on other platforms.
AudLearn is scalable. With constant customer and user research even after launch, we plan to introduce features that would solve underlying problems and improve the user experience.