Inspiration

This project helps deaf people communicate with others using hand signs. It has two primary parts: a machine learning model that translates images of gestures into the corresponding word or sentence, and the deployment of that model onto an Android mobile device using TensorFlow Lite.

What it does

This project takes images of human hand gesture signs, processes them, and outputs the corresponding character. We built an Android application that captures hand gestures with the mobile camera, runs the prediction model directly on the edge device, and displays the recognized character; a rough sketch of this prediction flow is shown below.
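As an illustration of the prediction step, here is a minimal Python sketch using the TensorFlow Lite interpreter. The app itself runs the equivalent Interpreter API in Java/Kotlin on-device; the model filename, the 224x224 RGB input size, and the A-Z label mapping here are assumptions for the sketch, not details from the project.

```python
import numpy as np
import tensorflow as tf

# Load the converted TFLite model (filename is a placeholder).
interpreter = tf.lite.Interpreter(model_path="hand_gesture.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Stand-in for a preprocessed camera frame: 224x224 RGB, scaled to [0, 1].
frame = np.random.rand(1, 224, 224, 3).astype(np.float32)

# Run inference on the frame.
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

# Assume one output class per letter A-Z; pick the highest-scoring one.
scores = interpreter.get_tensor(output_details[0]["index"])[0]
print("Predicted character:", chr(ord("A") + int(np.argmax(scores))))
```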

How I built it

We collected hand gesture sign images for training the model. To build the model we used Google Cloud GPUs, taking advantage of the cloud infrastructure, and used Azure data storage to keep all the images in the cloud. After training the model on the training set, we exported it to TensorFlow Lite, a framework released by Google for running deep learning models on mobile and IoT edge devices; a minimal conversion sketch follows. We load this model in the Android app to predict the hand gesture sign the user shows to the phone.
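For reference, this is roughly how a trained TensorFlow model is exported to TensorFlow Lite with a recent TensorFlow version; the SavedModel directory and output filename are placeholders, not the project's actual paths.

```python
import tensorflow as tf

# Convert the trained model (saved in TensorFlow's SavedModel format;
# the directory name is a placeholder) into a TensorFlow Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model("gesture_model/")
tflite_model = converter.convert()

# Write the .tflite file that gets bundled into the Android app's assets.
with open("hand_gesture.tflite", "wb") as f:
    f.write(tflite_model)
```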

Challenges we ran into

We ran into TensorFlow version compatibility issues, and collecting enough data to train the model was difficult.

What we learned

There are multiple ways to develop and create a product, even when obstacles on certain platforms delay development. We also learned how to use technology to help people in a better way.

Built With

android, azure, google-cloud, tensorflow, tensorflow-lite
