Team Pi: UNMUTE YOURSELF

ABOUT THE PROJECT

Our project is about empowering those special people who are unable to speak and hear, and who therefore cannot express their thoughts to the people around them. They often feel left out, which leads to loneliness and depression, since no one is there to listen to them. We, Team Pi, present UNMUTE YOURSELF, a hardware + software prototype that gives a voice to the people who need it.

PROBLEM

Around 466 million people worldwide have disabling hearing loss, and 34 million of them are children. People who are born deaf and mute face discrimination and bias in their professional lives. Those who apply for jobs are often unable to express themselves to the recruiter properly, and there are cases where recruiters have difficulty hiring certified interpreters as well.

SOLUTION

We came up with a solution to ease the professional as well as personal lives of these special human beings. We built both a hardware and a software prototype to help them: the hardware prototype gives a voice to the wearer, while the software prototype is trained to map sign-language gestures to specific words.

Here, the wearer is a person who is unable to speak or who has hearing loss.

This can help recruiters get a proper response from these candidates, and the deaf-mute person can easily communicate what is going on in his or her mind through the sign language shown to the device.

How does it work?

  • First, we built a website.
  • Then we deployed a KNN image classifier using TensorFlow.js.
  • K-Nearest Neighbours (k-NN) is a supervised machine-learning algorithm: it learns from a labelled training set, taking in the training data X along with its labels y, and learns to map each input X to its desired output y (see the classifier sketch after this list).
  • To speak the text that is displayed after a sign is detected, we used Google's text-to-speech conversion (a browser sketch for this also follows the list).
  • For the speech output we built a hardware model comprising two speakers connected to the ROYQUEEN X200 board via jumper wires.
  • We salvaged the board from an old, dismantled audio speaker.
  • We also used the Mechanix framework for the overall design of the hardware.
  • We connected our hardware device with an AUX cable for the time being, but we also added a battery for more flexibility.
  • And then, Eureka! We got perfect sign detection along with the words.
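This write-up does not include our source, so the following is only a minimal sketch of how a browser-side k-NN sign classifier can be wired up with TensorFlow.js, using the published @tensorflow-models/mobilenet and @tensorflow-models/knn-classifier packages. The element ID and the 'hello' label are illustrative assumptions, not our actual code.

    import '@tensorflow/tfjs';
    import * as mobilenet from '@tensorflow-models/mobilenet';
    import * as knnClassifier from '@tensorflow-models/knn-classifier';

    // MobileNet turns each webcam frame into a feature vector; the k-NN
    // classifier stores labelled vectors and, at prediction time, votes
    // among the k nearest stored examples.
    const classifier = knnClassifier.create();

    async function run(): Promise<void> {
      const net = await mobilenet.load();
      const webcam = document.getElementById('webcam') as HTMLVideoElement;

      // Training: while the user holds a sign, capture the frame's
      // embedding and store it under the chosen word label.
      const addExample = (label: string): void => {
        const activation = net.infer(webcam, true); // true = embedding
        classifier.addExample(activation, label);
      };

      // Prediction: embed the current frame and ask for the nearest label.
      const predict = async (): Promise<void> => {
        if (classifier.getNumClasses() === 0) return;
        const activation = net.infer(webcam, true);
        const result = await classifier.predictClass(activation);
        console.log(result.label, result.confidences[result.label]);
      };

      // In a real UI these would be wired to "train" and "detect" buttons.
      addExample('hello');
      await predict();
    }

    run();

Because the classifier only stores embeddings rather than retraining a network, new signs can be added live in the browser with just a handful of examples each.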
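For the speaking step, one common browser approach (an assumption here, not necessarily our exact call) is the Web Speech API, whose speechSynthesis voices on Chrome are Google-provided:

    // Hypothetical sketch: speak a detected word through whatever audio
    // output is attached (in our case, the speakers wired over AUX).
    function speak(word: string): void {
      const utterance = new SpeechSynthesisUtterance(word);
      utterance.lang = 'en-US'; // language of the spoken output
      utterance.rate = 1.0;     // normal speaking speed
      window.speechSynthesis.speak(utterance);
    }

    // Example: call this after each detection, e.g. speak(result.label);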

Problems that we faced

Our goal was to make a device that trains the sign language directly from the device's own camera input, which would have been much more economical, but due to a lack of resources and time constraints we went with the website instead, which turned out pretty well. At times there was some lag in the translation as well as during training.

LEARNING RESOURCES (what we referred to)

  1. Google Machine Learning
  2. KNN exercises using Colab
  3. Medium post by Paarth Bir
  4. Research Paper

FUTURE GOALS

Going forward, Team Pi plans to use smaller, more durable speakers with a really comfortable design. We will use posture recognition for much greater accuracy in recognizing gestures, and we will make our hardware more economical. We have also thought of adding a Bluetooth connection to avoid the use of wires and give the wearer more flexibility.

..We tried to make a difference and hope you all like it..

WE THOUGHT, WE LEARNED, and WE DID!
