We often see people post live streams on social media, use video calls for a more personal touch in a conversation, and engage in activities that require expressing emotion through language. Yet with over 430 million people around the world living with hearing disabilities, and fewer than a quarter of people fluent in sign language, our team set out to bridge this significant yet, ironically, unattended gap.

What it does

Connexion is a web application that converts video input of sign language into three of the most popular languages: English, Spanish, and French. The application also has built-in speech-to-text and can translate various languages into English text. In the future, ConnexionAI could be deployed in video chats and live streams to caption sign language for people with speaking disabilities, serving as a live ASL-to-text converter.

How I built it

The machine learning model was trained on Google Cloud's AutoML platform, the web framework is Flask, the website was built with HTML5, CSS, and JavaScript, and the project integration was done in Python.

Challenges I ran into

Our biggest challenge was finding a reliable dataset and choosing the hyperparameters for training the model under such a time constraint. The other challenge was integrating video capture in the web browser: passing frames to the backend and pushing the model's outputs back to the browser.
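One common pattern for that browser round-trip, sketched here under assumptions rather than taken from the project itself (the `/caption` route and `frame` field are hypothetical): the page snapshots a video frame to a canvas, posts it as base64-encoded JSON, and the backend decodes it and replies with a caption the page can display.

```python
import base64

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/caption", methods=["POST"])
def caption():
    # The browser sends {"frame": "<base64 JPEG from a canvas snapshot>"}.
    payload = request.get_json()
    frame_bytes = base64.b64decode(payload["frame"])
    # A real handler would run the model on frame_bytes; this sketch
    # just echoes the decoded size so the round trip is visible.
    return jsonify({"caption": f"received {len(frame_bytes)} bytes"})
```

Base64 over JSON is simple to debug from the browser console; a production version would more likely use multipart uploads or a WebSocket to cut the encoding overhead.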

Accomplishments that I'm proud of

We are proud of having the opportunity to solve a problem that affects the lives of millions of people in a fun learning environment!

What I learned

All of us worked with something we had never worked with before. Using Google Cloud for the first time was fascinating, and the idea of connecting millions of dots really inspired us to pull through to a working model.

What's next for Connexion

Future possibilities include bringing Connexion to live streams on Facebook and videos on YouTube to caption sign language for all audiences.
