Inspiration

Smart personal assistants such as Alexa, Google Assistant, and Siri support voice commands, search, and device control, making everyday life much easier.

However, people with hearing or speech impairments are excluded from, or have only restricted access to, these voice-driven assistants.

To tackle this digital divide, we set out to build an app that can "listen" to visual commands, such as custom hand gestures or sign language, and control devices accordingly.

What it does

  • A web dashboard showing connected apps and system controls for home appliances

  • Turns on the camera and captures an image

  • Interprets custom hand gestures using machine learning models (see the sketch below)

  • Performs the matching action, e.g. turning the lights on or switching the fan on
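
A minimal sketch of that capture → classify → act flow, assuming a saved TensorFlow model, a fixed set of gesture labels, and a placeholder action dispatcher (the model path, label names, and helper functions here are illustrative, not our actual implementation):

```python
# Hypothetical end-to-end sketch: capture a frame, classify the gesture,
# and dispatch a device action. The model path, label list, and action
# mapping are placeholders for illustration.
import cv2
import numpy as np
import tensorflow as tf

GESTURE_LABELS = ["lights_on", "lights_off", "fan_on", "fan_off"]  # assumed labels
model = tf.keras.models.load_model("gesture_model")                # assumed saved model

def capture_frame() -> np.ndarray:
    """Grab a single frame from the default camera."""
    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()
    cam.release()
    if not ok:
        raise RuntimeError("Camera capture failed")
    return frame

def classify_gesture(frame: np.ndarray) -> str:
    """Resize the frame to the model's input size and return the top label."""
    img = cv2.resize(frame, (224, 224)).astype("float32") / 255.0
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    return GESTURE_LABELS[int(np.argmax(probs))]

def perform_action(label: str) -> None:
    """Map a gesture label to a device command (placeholder dispatch)."""
    actions = {
        "lights_on": lambda: print("Turning lights ON"),
        "lights_off": lambda: print("Turning lights OFF"),
        "fan_on": lambda: print("Switching fan ON"),
        "fan_off": lambda: print("Switching fan OFF"),
    }
    actions.get(label, lambda: print(f"Unrecognised gesture: {label}"))()

if __name__ == "__main__":
    perform_action(classify_gesture(capture_frame()))
```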

How we built it

  • UI design using Figma

  • Front-end development using React

  • Back-end development using Python and Google Cloud Platform

  • Hardware development using C++

  • Machine learning using TensorFlow
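
On the machine-learning side, a small TensorFlow/Keras image classifier along these lines would fit this stack; the input shape, layer sizes, and four gesture classes below are illustrative assumptions rather than our exact model:

```python
# Illustrative TensorFlow/Keras classifier for a handful of custom gestures.
# Input shape, layer sizes, and the number of classes are assumptions.
import tensorflow as tf

NUM_GESTURES = 4  # e.g. lights on/off, fan on/off

def build_gesture_model() -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(224, 224, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```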

Challenges we ran into

We are an international team with members across different time zones, so not all developers were awake at the same time. We tackled this with clear communication and careful planning.

Accomplishments that we're proud of

We built a functional prototype in a short span of time!

What we learned

We learned a lot about front-end development with React, machine learning, and computer vision.

What's next for Proxaid

Real-time interpretation

  • Support real-time interpretation of a live camera stream instead of single photo or video captures (see the sketch below)
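
One possible approach is a streaming capture loop that classifies every few frames rather than a single snapshot. A rough OpenCV sketch, where the frame-skip interval and the classify/act callbacks are assumptions:

```python
# Rough sketch of a real-time loop: read frames continuously and only run
# the (assumed) classifier every few frames to keep latency manageable.
import cv2

FRAME_SKIP = 10  # classify every 10th frame; the interval is a tunable assumption

def run_realtime(classify, act):
    """classify(frame) -> label and act(label) are the hooks sketched earlier."""
    cam = cv2.VideoCapture(0)
    count = 0
    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                break
            if count % FRAME_SKIP == 0:
                act(classify(frame))
            count += 1
            cv2.imshow("Proxaid", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop
                break
    finally:
        cam.release()
        cv2.destroyAllWindows()
```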

Sign language support

  • Support communication with the app via sign language using computer vision and machine learning algorithms

Apps & plugins

  • Allow users to integrate other apps and plugins

Privacy

  • Add notifications, alerts, and restrictions so the app does not watch users all the time

Device support

  • Support other devices such as smartphones and Google Home