Inspiration

Sign language is a communication medium for deaf and mute people. It uses hand gestures along with facial expressions and body language to convey the intended message. The main goal of this project is to enable a conversation between blind and mute people.

What it does

CrossCom converts sign language into speech using machine learning and image recognition. It also converts speech into text, so a blind person and a mute person can communicate in both directions.
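A minimal sketch of the speech-to-text half is shown below. The SpeechRecognition library and the Google Web Speech API stand in for whatever engine is actually used, so treat the specifics as illustrative assumptions rather than our exact setup:

```python
# Sketch: speech-to-text for the blind-to-mute direction.
# Assumes the SpeechRecognition package and a working (USB) microphone;
# the Google Web Speech API backend here is purely illustrative.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Listening...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)  # send captured audio to the engine
    print("You said:", text)                   # display text for the mute person to read
except sr.UnknownValueError:
    print("Could not understand the audio.")
except sr.RequestError as e:
    print("Speech service unavailable:", e)
```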

How we built it

We used a Raspberry Pi to handle the seeing and listening; it then performs the required computation and outputs the result in the appropriate form, speech or text.
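As a rough illustration of the sign-to-speech pipeline, here is a minimal sketch: grab frames from the Pi's camera with OpenCV, classify each frame with a TensorFlow Lite gesture model, and speak the predicted word through the espeak CLI. The model file, label list, confidence threshold, and espeak call are assumptions for illustration, not necessarily our exact setup:

```python
# Sketch: sign-language-to-speech pipeline on the Raspberry Pi.
# The model path, input format, and label list below are illustrative assumptions.
import subprocess

import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter  # Pi-friendly TFLite runtime

LABELS = ["hello", "yes", "no", "thank you"]  # hypothetical gesture labels

interpreter = Interpreter(model_path="gesture_model.tflite")  # hypothetical model file
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

cap = cv2.VideoCapture(0)  # Pi camera module or USB webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Resize and normalize the frame to match the model's expected input shape.
    h, w = input_details[0]["shape"][1:3]
    img = cv2.resize(frame, (w, h)).astype(np.float32) / 255.0
    interpreter.set_tensor(input_details[0]["index"], img[np.newaxis, ...])
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]

    if scores.max() > 0.8:  # only speak confident predictions
        word = LABELS[int(scores.argmax())]
        subprocess.run(["espeak", word])  # text-to-speech via the espeak CLI
```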

Challenges we ran into

We faced a lot of challenges: the Raspberry Pi did not support the OpenCV Python library out of the box, it had no sound card to capture audio from the user, and we were unable to install TensorFlow on it.

Accomplishments that we're proud of

We are proud to have created a communication bridge between people. Deaf communities use sign language, which relies on manual communication and body language rather than acoustically conveyed sound patterns, so we're happy to have built a tool that connects it to speech and text.

What we learned

Working with hardware is a challenging task, and in overcoming those challenges we learned a lot, most importantly not to give up even after 8 hours of debugging.

What's next for CrossCom?

We would like to extend this further by training on larger datasets, which should help increase recognition accuracy.

What's our table number?

Table 21, beside "Refuel at Absorb".

