Inspiration
Today, my grandfather struggles with hearing loss caused by age and the loud working environments of his younger years. Creating a project that makes communication far easier would help me keep the strong connection I have built with him over the last 17 years.
What it does
In our two-feature application, we use augmented reality face detection and voice recognition to display text above a hearing person's face. The text projected above the person's face also reflects their detected emotion: red for angry, yellow for happy, blue for sad, purple for fearful, green for disgusted, orange for surprised, and white for neutral. Our second feature displays hand signs instead of text. Many deaf people speak, think, and process in sign language, so having the option to communicate through sign language makes the app feel more natural to them.
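The emotion-to-color mapping above can be sketched as a small JavaScript lookup. This is a minimal illustration; the constant and function names here are our own, not taken from the project's source:

```javascript
// Map a detected emotion label to the overlay text color,
// following the color scheme described above.
const EMOTION_COLORS = {
  angry: "red",
  happy: "yellow",
  sad: "blue",
  fearful: "purple",
  disgusted: "green",
  surprised: "orange",
  neutral: "white",
};

function colorForEmotion(emotion) {
  // Fall back to white (neutral) for any unknown label.
  return EMOTION_COLORS[emotion] ?? "white";
}
```

The fallback to white keeps the overlay readable even if the expression model returns a label the map does not cover.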
How we built it
TinyFaceDetector was used to detect faces in the video feed. FaceLandmark68Net detects key facial landmarks on each detected face. FaceRecognitionNet was used mainly for the face bounding box. FaceExpressionNet analyzes facial expressions and identifies emotions (happy, sad, angry, surprised, etc.). We also used a sign language dataset from Kaggle for the images in our second feature, and the browser's built-in microphone access to capture the spoken text for transcription.
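FaceExpressionNet-style models return a score per emotion rather than a single label, so a common pattern is to pick the highest-scoring entry before choosing a color. A minimal sketch, assuming an `expressions` object shaped like face-api.js output (the detection call itself and the sample scores are illustrative):

```javascript
// Pick the highest-scoring emotion from an expressions object,
// e.g. { happy: 0.92, sad: 0.01, angry: 0.02, ... } as returned
// by face-api.js-style expression models.
function dominantExpression(expressions) {
  return Object.entries(expressions).reduce(
    (best, [label, score]) => (score > best.score ? { label, score } : best),
    { label: "neutral", score: -Infinity }
  );
}

// Illustrative scores, not real model output:
const sample = { happy: 0.92, sad: 0.01, angry: 0.02, neutral: 0.05 };
const result = dominantExpression(sample);
// result.label === "happy"
```

The dominant label can then drive both the overlay text and its emotion color.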
Challenges we ran into
We were using an outdated module, which prevented us from running the app on our phones. We spent a lot of time debugging before realizing this was the problem.
Accomplishments that we're proud of
We are proud of adding multiple features to this project instead of keeping the very basic version of the app we had early in the design process. Rather than stopping there and calling the project done, we kept improving the program.
What we learned
We learned how to implement augmented reality in this program, something we had never done in our previous JavaScript projects.
What's next for Vocalize
We would love to bring this project to real augmented reality headsets like an Oculus. Our current version requires you to hold up your phone and point it at a person, which may make self-conscious users uncomfortable.