Inspiration

In recent years, one of the disruptive technologies that has hit the market most strongly is personal devices. They are an excellent tool for day-to-day life as well as in professional environments.

It is essential to start thinking about people with disabilities and to make these tools more accessible to them. We believe all of these technologies are going to have a huge impact. Social robotics is becoming part of our present.

During this weekend we decided to focus on one of the most common groups of disabled people: those who are deaf or unable to speak. Can you imagine them trying to interact with Alexa?

Our main idea was to create a small prototype (as we only had 32 hours...) to facilitate this interaction. We researched the English sign-language alphabet, and our software's job is to translate each letter.

What it does

Right now the prototype finds the hand position in the camera image and saves its keypoint coordinates.
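For illustration, here is a minimal sketch (not our exact code) of reading the hand keypoints out of the per-frame JSON files that OpenPose writes; the file name is hypothetical:

```python
import json

# Minimal sketch, assuming OpenPose's default JSON output: each frame file
# holds a "people" list, and each person has "hand_right_keypoints_2d" as a
# flat [x0, y0, c0, x1, y1, c1, ...] list of 21 keypoints, where c is a
# detection confidence score.
def load_right_hand(path):
    with open(path) as f:
        frame = json.load(f)
    people = frame.get("people", [])
    if not people:
        return None  # no person detected in this frame
    flat = people[0]["hand_right_keypoints_2d"]
    # Regroup the flat list into (x, y, confidence) triples.
    return [(flat[i], flat[i + 1], flat[i + 2]) for i in range(0, len(flat), 3)]

keypoints = load_right_hand("frame_000000000000_keypoints.json")  # hypothetical file
if keypoints:
    print(f"Detected {len(keypoints)} hand keypoints")
```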

We also have an algorithm that converts those coordinates into the format we need for further processing.
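Our exact target format isn't described here, but as a hedged illustration, one plausible normalization looks like this: put the wrist keypoint at the origin, scale by the largest wrist-to-fingertip distance, and flatten to a fixed-length vector.

```python
import numpy as np

# Hedged sketch of one plausible normalization (our real format may differ).
# Keypoint 0 is the wrist in OpenPose's hand model.
def to_feature_vector(keypoints):
    xy = np.array([(x, y) for x, y, _ in keypoints], dtype=float)  # (21, 2)
    xy -= xy[0]                        # put the wrist at the origin
    scale = np.linalg.norm(xy, axis=1).max()
    if scale > 0:
        xy /= scale                    # make the vector scale-invariant
    return xy.flatten()                # shape (42,)
```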

We are still working on the k-NN classification algorithm.
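A minimal k-NN sketch with scikit-learn follows; the random data is a placeholder just to make the example run. In our pipeline, each row would be a normalized hand feature vector and each label a letter.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: 100 random 42-value "hand poses" labelled A/B/C.
rng = np.random.default_rng(0)
X_train = rng.random((100, 42))
y_train = rng.choice(list("ABC"), size=100)

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)

sample = rng.random((1, 42))           # one new hand pose
print("Predicted letter:", clf.predict(sample)[0])
```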

How I built it

To do this we used a camera (simulating the camera that most personal devices have nowadays), and we have been trying to train the RNN model that BlackRock provided to us on our dataset (a rough sketch of such a model follows the list below).

For this we have been working with:

  • OpenPose
  • k-NN
  • JSON
  • Python and C#
  • RNN
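We do not know the internals of the RNN that BlackRock provided, so as a generic stand-in, here is a hedged PyTorch sketch of the kind of sequence classifier we are aiming for: each input is a sequence of per-frame hand vectors, and the output is one score per letter.

```python
import torch
import torch.nn as nn

# Generic RNN stand-in (the provided model's internals are unknown to us):
# a GRU reads T frames of 42-value hand vectors and a linear head scores
# the 26 letters of the alphabet.
class SignRNN(nn.Module):
    def __init__(self, input_size=42, hidden_size=64, num_letters=26):
        super().__init__()
        self.rnn = nn.GRU(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_letters)

    def forward(self, x):              # x: (batch, T, 42)
        _, h = self.rnn(x)             # h: (num_layers, batch, hidden_size)
        return self.head(h[-1])        # letter scores: (batch, 26)

model = SignRNN()
dummy = torch.randn(8, 30, 42)         # 8 placeholder 30-frame sequences
print(model(dummy).shape)              # torch.Size([8, 26])
```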

Challenges I ran into

  • Time
  • Knowledge: using tools and technologies we had never worked with before

Accomplishments that I'm proud of

We had a lot of fun building it. Together we made an excellent team, and we learned from each other (as we all have very different technical backgrounds).

What I learned

We learned about image recognition and artificial intelligence. None of us had had the opportunity to work with these technologies before.

We also learned how to work in a very international team and how to get the best out of everyone to develop a good project.

Another important skill we have improved during these past hours is how to work under pressure and have fun at the same time.

What's next for Sign Language Recognition

The main goal for the final product would be to recognize sign-language motions and to interpret and translate them in real time. The main steps to build this would be:

  • getting more data
  • improving our skills with neural networks
  • training the model
  • testing the model
  • connecting the system to a Raspberry Pi with AVS (Alexa Voice Service) and a camera
