We wanted to focus on the theme of connectivity, and we felt that language, or rather a difference in language, posed a large barrier to connectivity. While apps such as Google Translate have enabled individuals to translate various spoken languages, they do not address the needs of those who are unable to speak. Our goal was to design a product that interprets signed characters and outputs them as text, allowing translation between sign language and English.

What it does

The user signs a character of the English alphabet in front of a camera; the letter they signed is then displayed on an OLED module for them to observe.

How I built it

An ESP32-CAM connected to a local network streams live video. A laptop on the same network pulls still images at 5-second intervals using a Python program. The program feeds these images into a convolutional neural network, which reformats and interprets each image as a letter of the English alphabet. This value is then passed over serial communication to an Arduino Uno, which displays the letter on an OLED display module.
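The laptop-side loop above can be sketched roughly as follows. The capture URL, serial port, model input size, and the `preprocess`/`index_to_letter` helpers are illustrative assumptions, not the project's actual values; the trained CNN itself is omitted.

```python
# Sketch of the laptop-side pipeline: pull a still frame from the
# ESP32-CAM, preprocess it for the CNN, and forward the predicted
# letter to the Arduino over serial. URL, port, and input size are
# assumptions for illustration.
import string
import time

import numpy as np

LETTERS = string.ascii_uppercase  # CNN class index -> English letter


def preprocess(frame: np.ndarray, size: int = 28) -> np.ndarray:
    """Grayscale, center-crop to a square, downsample, and normalise
    to [0, 1] so the frame matches an assumed (1, size, size, 1)
    CNN input shape."""
    gray = frame.mean(axis=2)                     # RGB -> grayscale
    side = min(gray.shape)
    top = (gray.shape[0] - side) // 2
    left = (gray.shape[1] - side) // 2
    square = gray[top:top + side, left:left + side]
    step = side // size
    small = square[::step, ::step][:size, :size]  # naive downsample
    return (small / 255.0).astype(np.float32)[None, :, :, None]


def index_to_letter(class_index: int) -> str:
    """Map the CNN's argmax output to the letter sent to the Arduino."""
    return LETTERS[class_index]


def main():
    # Hardware/network dependencies kept out of the pure helpers above.
    import requests   # pip install requests
    import serial     # pip install pyserial
    port = serial.Serial("/dev/ttyACM0", 9600)    # assumed serial port
    while True:
        raw = requests.get("http://192.168.1.42/capture").content
        # decode `raw` as JPEG, run model.predict(preprocess(frame)),
        # then forward the result:
        # port.write(index_to_letter(predicted_index).encode())
        time.sleep(5)                             # 5-second interval


if __name__ == "__main__":
    pass  # main() requires the camera and Arduino to be attached
```

The pure helpers (`preprocess`, `index_to_letter`) are separated from the hardware loop so the image handling can be tested without the camera or Arduino connected.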

Challenges I ran into

The majority of the challenges lay on the hardware end. We ran into many communication issues: interfacing with certain modules proved far more difficult and buggy than we originally thought.

Accomplishments that I'm proud of

We were able to configure hardware components with which we had very little prior experience. We also troubleshot many issues, such as capturing and saving the image from the ESP32-CAM, the inaccuracy of the image analyzer, and receiving the letter on the OLED display. We were also able to quickly pivot to a different approach when we realized a current solution was infeasible. This ability to adapt quickly to incoming problems kept us always working toward functional components within our product.

What I learned

Throughout the project we learned how to publish to and pull from an HTTP server on a local network. We also successfully learned how to integrate multiple electronic modules and microprocessors to perform our desired tasks.
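The publish/pull pattern we learned can be illustrated with a minimal stand-in: one side serves bytes over HTTP (the ESP32-CAM's role) and the other pulls them (the laptop's role). The handler path, payload, and names here are hypothetical, using only the Python standard library.

```python
# Minimal illustration of publishing and pulling over HTTP on a local
# network. The server stands in for the ESP32-CAM; pull_frame stands
# in for the laptop-side Python program.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

FRAME = b"\xff\xd8fake-jpeg-bytes\xff\xd9"  # placeholder JPEG payload


class FrameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Publish the latest "frame" to any client that requests it.
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.end_headers()
        self.wfile.write(FRAME)

    def log_message(self, *args):
        pass  # silence per-request logging


def pull_frame(url: str) -> bytes:
    """Pull one still image from the camera's HTTP endpoint."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()


# Bind to any free port on localhost and serve in the background.
server = HTTPServer(("127.0.0.1", 0), FrameHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

data = pull_frame(f"http://127.0.0.1:{server.server_port}/capture")
server.shutdown()
```

In the real project the server runs on the ESP32-CAM rather than in Python, but the request/response shape the laptop sees is the same.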

What's next for Signify

The next stage for Signify is to move from serial communication between the laptop and the Arduino to a Bluetooth connection, or to use a different microprocessor such as the ESP32 dev board and communicate over Wi-Fi. Additionally, to further achieve our original goal, we would implement a speaker module that can play audio of the signed words.
