The driving force behind our hack was inclusivity and accessibility. We wanted to make it easier for deaf individuals to communicate remotely with hearing individuals.

Our project lets a deaf individual use their webcam to record themselves signing. The signs are translated to text and spoken over the phone to the hearing individual. The hearing individual then responds by speaking, and their speech is converted to sign and displayed to the deaf individual. This exchange repeats until the conversation is complete.
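To make the round trip concrete, here is a minimal sketch of one conversation round. Every function below is an illustrative stand-in for the real pieces (our CNN, Nexmo, IBM Watson, the React front end), not the actual project code.

```python
# One round of the conversation, with stand-in functions where the real
# services would be called. All names here are illustrative assumptions.

def recognise_sign(frames):
    # Stand-in for the CNN classifier that turns webcam frames into text.
    return "hello how are you"

def speak_on_call(text):
    # Stand-in for Nexmo text-to-speech playing the text into the phone call.
    print(f"[phone] {text}")

def record_reply():
    # Stand-in for capturing the hearing person's spoken reply from the call.
    return b"<audio bytes>"

def speech_to_text(audio):
    # Stand-in for IBM Watson Speech to Text.
    return "i am fine thanks"

def text_to_sign(text):
    # Stand-in for mapping words to sign clips shown in the React front end.
    return [word.upper() for word in text.split()]

def conversation_round(frames):
    speak_on_call(recognise_sign(frames))   # deaf side -> hearing side
    reply = speech_to_text(record_reply())  # hearing side speaks back
    return text_to_sign(reply)              # shown to the deaf user as sign

if __name__ == "__main__":
    print(conversation_round(frames=["frame1", "frame2"]))
```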

The front end is built with React. Sign language recognition is done by a convolutional neural network running on Google Cloud Platform, and all of the voice-to-text and text-to-sign conversion is handled by the Nexmo API and IBM Watson.
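For readers unfamiliar with the model side, the sketch below shows the kind of convolutional network we mean, written with Keras. The layer sizes, the 64x64 grayscale input and the 26-class output are illustrative assumptions, not the exact architecture we trained on Google Cloud Platform.

```python
# A minimal sketch of a sign-classification CNN in Keras.
# Input shape and class count are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sign_classifier(num_classes=26):
    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),                  # one webcam frame, grayscale
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),  # one score per sign
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_sign_classifier().summary()
```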

One of the main challenges we ran into was settling on an idea in the first place. We were rather indecisive on this front and changed our idea a lot, which inevitably ate into valuable development time. We also had problems getting Nexmo and IBM Watson to work together asynchronously.

We feel we have taken a step towards making the cumbersome task of communication between deaf and hearing people easier. We are proud of how our team operated, and of the high precision with which we can translate sign language, which we owe to the large training set we supplied to the CNN.

We learned how to use the Nexmo API to convert voice to text and vice versa. We also learned a lot about sign language, and through planning the project we realised how inaccessible the current solutions in this field are.
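The spoken side of a Nexmo call is driven by NCCO actions returned from an answer webhook. The sketch below shows a minimal webhook that speaks text into the call with a `talk` action; Flask, the route name and the message text are our assumptions, while the NCCO shape follows the Nexmo Voice API.

```python
# A minimal sketch of a Nexmo Voice API answer webhook that speaks text
# into the call via an NCCO "talk" action. Flask and the route are assumed
# for illustration; in our flow the text would come from the CNN output.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/webhooks/answer", methods=["GET"])
def answer_call():
    ncco = [{
        "action": "talk",
        "text": "Hello, this message was signed on the other end of the call.",
    }]
    return jsonify(ncco)

if __name__ == "__main__":
    app.run(port=3000)
```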

In the future we would like to move on to translating other forms of sign language, which may involve recognising full-body movements. We would also like to explore the idea of a mobile app.
