WHAT IS VOICY:
There are roughly 360 million people in this world who are hard of hearing and may need to use sign language to communicate. However, this creates a barrier because most people do not understand sign language. Here is where Voicy comes in!
Voicy is an app that uses a camera to detect the motion of sign language by analyzing the positions of human joints, and translates it into audio.
HOW WE CAME UP WITH THE IDEA:
The wrnchAI tutorial was the foundation we used to come up with a useful project built on the wrnchAI API. We brainstormed around one question: where could the coordinates of body joints be put to use?
Not many people know sign language, so an app that can interpret hand signs into words or speech would solve this problem. Thus, Voicy.
We thought we could use the wrnchAI API to get the joint positions of the hands and identify the hand signs being made. We then wrote Python code and came up with algorithms to classify simple hand-sign images into words like "Hi", "Stop", or "No".
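The idea above can be sketched with a small rule-based classifier. This is an illustrative example, not our actual code: the joint names and the rules are hypothetical stand-ins for whatever a pose-estimation API like wrnchAI returns, and a real sign like "Hi" (a wave) would need motion tracked over several frames.

```python
import math

# Hypothetical hand layout: a dict mapping joint names to (x, y) pixel
# coordinates. The names below are illustrative, not wrnchAI's real schema.
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

def fingers_extended(hand):
    """Count fingertips that sit well beyond their knuckle, relative to the wrist."""
    wrist = hand["wrist"]
    count = 0
    for finger in FINGERS:
        d_tip = math.dist(hand[finger + "_tip"], wrist)
        d_knuckle = math.dist(hand[finger + "_knuckle"], wrist)
        if d_tip > 1.3 * d_knuckle:  # tip clearly past the knuckle -> extended
            count += 1
    return count

def classify_sign(hand):
    """Map joint geometry to a word with rough, illustrative rules."""
    n = fingers_extended(hand)
    if n == 5:
        return "Stop"  # open palm, all fingers extended
    if n == 0:
        return "No"    # closed fist (illustrative rule only)
    return None        # unrecognized static pose
```

The output word could then be fed to any text-to-speech engine to produce the audio side of the pipeline.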
We ran into several problems, such as setting up the backend and frontend development tools and working with the API, but we braved all the challenges to produce a more or less working model. We hope to keep working on it and expand it into something fully functional that helps people with special needs.
Built With
- javascript
- python
- wrnchapi