I am new to the USA as a student, and in a few of my classes I observed students who were always accompanied by American Sign Language (ASL) interpreters, who did an excellent job conveying the lectures to them. I wish I could interact with these students without both of us constantly needing interpreters. The challenge got even worse during the COVID-19 pandemic: many interpreters were confined to their homes, and with classes held online, it became difficult for these students to get their classes interpreted in real time.
This project is my very beginner-level attempt at ASL-to-text conversion. Note that I'm not trying to take away interpreters' jobs - just trying to help people who can't afford one!
What it does
The app uses the laptop webcam to detect ASL hand gestures and convert them to letters.
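The gesture-to-letter step can be sketched as a small image classifier. This is a hypothetical, minimal PyTorch model for illustration only - the class name, architecture, and 64x64 grayscale input size are my assumptions, not the project's actual network:

```python
import torch
import torch.nn as nn

class LetterClassifier(nn.Module):
    """Hypothetical CNN mapping a hand crop to one of 26 letters."""

    def __init__(self, num_classes=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # After two 2x pools, a 64x64 input becomes 32 channels of 16x16.
        self.head = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = LetterClassifier()
logits = model(torch.randn(1, 1, 64, 64))  # one 64x64 grayscale frame
letter = chr(ord("A") + logits.argmax(1).item())
```

In the real app the input tensor would come from a preprocessed webcam frame rather than random noise, and the model would of course need to be trained on labeled ASL letter images first.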
How I built it
I used OpenCV to build the image capture, detection, and classification features of this web app.
Challenges I ran into
I had absolutely no knowledge of PyTorch; this hackathon was my first experience with it. I have worked with TensorFlow before and have a fair understanding of deep learning.
I also came across this challenge very close to the deadline, and could not find time to work on this project until 2 days before it (due to my new semester, interviews, and other deadlines). But I was determined to at least participate in this hackathon, even if I didn't make it to the finish line (or the winning parade :)).
Accomplishments that I'm proud of
I built this project in just 2 days - and that window also included coursework, interview preparation, and PG&E power outages. This is the shortest time I have taken to build a working project at this level (albeit without the best UI and lacking many features). The run felt like actually attending a 48-hour in-person hackathon.
What I learned
PyTorch and OpenCV are my biggest takeaways from this project - I finally have more to explore in that area. I also learned that I could pull something off in such a short time - which is why I named it QuickASL. Hopefully this project will also look good on my resume! :)
What's next for QuickASL
I wish to work further on:
- user authentication for the web app
- proper ASL-to-text conversion with intents, and vice versa
- possible ASL-to-speech conversion, and vice versa
- porting the project to a mobile app, to reach a larger user base
- packaging the project as a browser extension (for videos) and for meeting apps like Zoom, Chime, etc.
I believe all these features in a single app would make it a suitable companion for a large number of people.