Inspiration
All products being built with AI technologies today should be equally accessible to every one of us. The biggest motivation for this project was to give something back to society by using the available resources and current data science techniques as efficiently as possible.
What it does
This project converts a sign language video to text, which is then echoed through the computer's speaker. The spoken query is picked up by Alexa/Google Mini, and the assistant's response is converted back into a sign language video.
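The round trip described above can be sketched as a few pipeline stages. This is a minimal illustration only: every function body below is a placeholder stub, and the file names and return strings are made up for the example; the real project uses a CNN for the sign recognition step and the Google Cloud Speech Recognition API for the speech/text steps.

```python
# Sketch of the end-to-end flow: sign video -> text -> spoken query ->
# assistant reply -> text -> sign video. All bodies are placeholder stubs.

def sign_video_to_text(video_path: str) -> str:
    # Placeholder: the real system classifies sign frames with a CNN.
    return "what is the weather today"

def speak(text: str) -> None:
    # Placeholder: the real system plays the text through the speaker
    # so that Alexa/Google Mini can hear the query.
    print(f"Speaker: {text}")

def assistant_reply_to_text(audio_path: str) -> str:
    # Placeholder: the real system transcribes the assistant's spoken
    # reply with the Google Cloud Speech Recognition API.
    return "it is sunny"

def text_to_sign_video(text: str) -> str:
    # Placeholder: the real system renders a sign language video.
    return f"<sign video for: {text}>"

if __name__ == "__main__":
    query = sign_video_to_text("query.mp4")
    speak(query)  # the assistant hears the query and answers aloud
    reply = assistant_reply_to_text("reply.wav")
    print(text_to_sign_video(reply))
```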
How we built it
We used the following techniques and products:
1) Google Cloud Speech Recognition API - to convert Alexa/Google Mini speech to text and vice versa. 2) Deep learning models (Convolutional Neural Networks) - to train a model that interprets the sign language video and converts it to text. 3) HTML/JavaScript - front-end development. 4) Python/TensorFlow - back-end development. 5) Flask - connecting the back end and front end. 6) Alexa/Google Mini - querying. 7) GitHub - repository for our code.
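For item 2, a small Keras CNN is one plausible shape for the sign classifier. This is a sketch under assumptions, not the team's actual architecture: the input resolution, the number of classes, and all layer sizes are invented for illustration.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Assumed parameters -- the write-up does not specify the dataset details.
IMG_SIZE = 64      # assumed resolution of each sign image
NUM_CLASSES = 26   # assumed: one class per letter of the alphabet

def build_sign_classifier() -> keras.Model:
    """A small CNN that maps one sign image to a letter class."""
    model = keras.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_sign_classifier()
```

Training would then be a call to `model.fit` on labeled sign images; the text output is the class with the highest softmax probability per frame.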
Challenges we ran into
1) Finding a dataset for sign language 2) Training on a huge number of images 3) Reaching acceptable model accuracy within a short span of time
Accomplishments that we're proud of
Building an end-to-end product for society.
What we learned
1) Working as a team 2) Handling pressure 3) Having fun while working
What's next for Helping Hands
1) Live video streaming as an input 2) Training on a larger dataset to improve accuracy
Built With
- alexa
- deep-learning
- flask
- github
- google-cloud
- google-mini
- html
- javascript
- python