Inspiration

Video chatting is crucial for maintaining relationships during these tough times of COVID-19. People often use Zoom or Discord to stay connected, but these apps are not easy for people with disabilities to use. The lack of inclusion for people with disabilities in modern-day technology inspired us to create Motion. We thought this was an interesting problem to target, as it provided a fulfilling way to utilize the power of machine learning.

What it does

This app provides a platform for people with disabilities to easily communicate with others over video, and it promotes sign language education among a wider population. Each user can train machine learning models to interpret their own sign language gestures and then use these models to communicate with others via video call.

How we built it

We used React and Material UI to build out the front end, and we trained TensorFlow.js models to classify different sign language gestures. The backend was built with Node.js, Express, MongoDB, and Firebase, and the connection between users was created using WebRTC and socket.io.
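
To give a sense of how per-user gesture training can work in the browser, here is a minimal sketch using TensorFlow.js with a pretrained MobileNet feature extractor and a KNN classifier. This is an illustrative approach, not our exact code; function names like addExample and predict are hypothetical helpers.

```typescript
// Sketch only: assumes @tensorflow/tfjs, @tensorflow-models/mobilenet and
// @tensorflow-models/knn-classifier are installed. Our actual training code may differ.
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

const classifier = knnClassifier.create();

async function setupGestureModel(video: HTMLVideoElement) {
  const net = await mobilenet.load();          // pretrained feature extractor
  const webcam = await tf.data.webcam(video);  // wraps the <video> element

  // Called while the user holds a sign in front of the camera,
  // e.g. the gesture for "hello" or "thank you".
  async function addExample(label: string) {
    const img = await webcam.capture();
    const activation = net.infer(img, true);   // embedding, not class logits
    classifier.addExample(activation, label);
    img.dispose();
  }

  // Called each frame during a call to interpret the current sign.
  async function predict(): Promise<string | null> {
    if (classifier.getNumClasses() === 0) return null;
    const img = await webcam.capture();
    const result = await classifier.predictClass(net.infer(img, true));
    img.dispose();
    return result.label;
  }

  return { addExample, predict };
}
```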

Challenges we ran into

Establishing reliable peer-to-peer connections between users, and finding WebRTC libraries compatible with our desired stack.

Accomplishments that we're proud of

We incorporated machine learning models to perform image detection and built a solid end-to-end product that helps the disability community stay connected during COVID. We also learned a lot about sign language and how powerful and interesting it is.

What we learned

We learned how to perform image detection with TensorFlow.js and how to create video chatting sessions with WebRTC.
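
A rough sketch of the WebRTC signaling flow over socket.io is below. The event names ('offer', 'answer', 'ice-candidate') and the server URL are illustrative assumptions, not our exact implementation.

```typescript
// Minimal signaling sketch: one RTCPeerConnection per call, socket.io relays SDP and ICE.
import { io } from 'socket.io-client';

const socket = io('https://example-signaling-server');  // hypothetical signaling server
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});

// Forward our ICE candidates to the other peer via the signaling server.
pc.onicecandidate = (event) => {
  if (event.candidate) socket.emit('ice-candidate', event.candidate);
};

// Attach local camera/mic tracks, then create and send an offer.
async function startCall() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  socket.emit('offer', offer);
}

// The callee answers, the caller applies the answer, and both sides add candidates.
socket.on('offer', async (offer: RTCSessionDescriptionInit) => {
  await pc.setRemoteDescription(offer);
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  socket.emit('answer', answer);
});
socket.on('answer', (answer: RTCSessionDescriptionInit) => pc.setRemoteDescription(answer));
socket.on('ice-candidate', (candidate: RTCIceCandidateInit) => pc.addIceCandidate(candidate));
```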

What's next for Motion

We plan to continue developing features that target a wider range of vulnerable demographics and to create different applications to support them.
