AI-powered products being developed today should be usable by everyone equally. Our biggest motivation for this project was to give something back to society by applying the available resources and current data science techniques as effectively as possible.
What it does
This project converts a sign language video to text, which is read aloud through the computer's speaker. That query is then sent to Alexa/Google Mini for a response, and the response is converted back into a sign language video.
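The end-to-end flow above could be sketched as a simple pipeline. Note this is only an illustrative sketch: every function name and body here is a hypothetical placeholder, not the team's actual recognition, text-to-speech, or assistant-integration code.

```python
def video_to_text(frame_labels):
    # Placeholder for the sign-language recognition model:
    # a real system would classify each video frame and join the
    # predicted signs into a sentence.
    return " ".join(frame_labels)

def speak(text):
    # Placeholder TTS step; real code might use a library such as pyttsx3
    # to play the text through the computer's speaker.
    return f"[spoken] {text}"

def query_assistant(text):
    # Placeholder for forwarding the spoken query to Alexa/Google Mini
    # and capturing its reply.
    return f"assistant reply to: {text}"

def text_to_sign_video(text):
    # Placeholder: map each word of the reply to a sign-language clip.
    return [f"{word}.mp4" for word in text.split()]

# Hypothetical run-through of the pipeline:
query = video_to_text(["what", "time", "is", "it"])
speak(query)
answer = query_assistant(query)
clips = text_to_sign_video(answer)
```

Each stage is independent, so the recognition model, the voice assistant, or the sign-rendering step could be swapped out without touching the rest of the pipeline.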
How we built it
We used the following techniques and products:
Challenges we ran into
1) Finding a dataset for sign language
2) Training on a huge number of images
3) Achieving good model accuracy within a short span of time
Accomplishments that we're proud of
Building an end-to-end product for society.
What we learned
1) Working as a team
2) Handling pressure
3) Having fun while working
What's next for Helping Hands
1) Live video streaming as input
2) Training on a larger dataset to improve accuracy