Inspiration

We watched non-technical translators struggle to explain core technical concepts to a deaf person.
What it does

Converts images of sign language gestures to text.
How we built it

We used the Google Cloud AutoML Vision API to train an ML model on the images.
Challenges we ran into

Gathering image data of sign language hand gestures in a classified (labeled) form.
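AutoML Vision ingests labeled images via a CSV that maps each Cloud Storage URI to its label. A minimal sketch of generating that CSV from a labeled local folder layout (the folder structure, bucket name, and function name are our own assumptions, not part of the project):

```python
import csv
from pathlib import Path

def build_import_csv(image_root: str, bucket: str, out_csv: str) -> int:
    """Write an AutoML Vision import CSV: gs://<bucket>/<label>/<file>,<label>.

    Assumes images are organized locally as <image_root>/<label>/<image>.jpg,
    with one folder per sign (e.g. "A", "B", ...). Returns rows written.
    """
    rows = 0
    root = Path(image_root)
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        for label_dir in sorted(p for p in root.iterdir() if p.is_dir()):
            label = label_dir.name  # folder name doubles as the class label
            for img in sorted(label_dir.glob("*.jpg")):
                writer.writerow([f"gs://{bucket}/{label}/{img.name}", label])
                rows += 1
    return rows
```

The images themselves would then be uploaded to the bucket and the CSV passed to the dataset's import step.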
Accomplishments that we're proud of

It will help deaf and hard-of-hearing students explain technical concepts directly to a professor.
What we learned

How to train a model on GCP, and how to gather and label data as images.
What's next for Sign Language alphabets prediction
Live-stream videos of sign language gestures and convert them to text or speech in real time.
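One way to sketch that next step: classify each frame of the stream and smooth consecutive per-frame predictions into text. The `classify` callback below is a stand-in for the trained model (in production it would wrap a call to the deployed AutoML prediction endpoint); the function name and majority-vote smoothing are our own illustrative assumptions:

```python
from collections import deque
from typing import Callable, Iterable

def frames_to_text(frames: Iterable, classify: Callable[[object], str],
                   window: int = 5) -> str:
    """Turn a stream of frames into text via majority vote over a window.

    `classify` maps one frame to a predicted letter. A letter is emitted
    when it dominates `window` consecutive frames, and repeats of the same
    letter are collapsed so a held gesture produces one character.
    """
    recent = deque(maxlen=window)
    text = []
    for frame in frames:
        recent.append(classify(frame))
        if len(recent) == window:
            votes = list(recent)
            majority = max(set(votes), key=votes.count)
            # require a near-unanimous window to tolerate one noisy frame
            if votes.count(majority) >= window - 1:
                if not text or text[-1] != majority:
                    text.append(majority)
    return "".join(text)
```

Feeding the result to a text-to-speech engine would complete the text-or-speech output.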