Our society is becoming increasingly dependent on technology, so the usability and accessibility of everyday devices for people with visual or hearing impairments matters more than ever.
Our goal is to develop a solution that allows this group to communicate more conveniently, helping them fully participate and engage in society. We were driven by the goal of positively impacting visually and hearing-impaired users by providing a user-friendly, accessible product.
What it does
We came up with a solution called Visual Speech. Our app allows visually impaired users to connect to hearing-impaired users. Voice messages sent by visually impaired users are received as text messages by hearing-impaired users, allowing for seamless communication.
How we built it
We used the Google Cloud Speech-to-Text and Text-to-Speech APIs for the core functionality of our project. The backend runs on Node.js with Express.js, and chat messages are stored in Firebase. The frontend was built with React.
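The core flow can be sketched roughly as follows. This is a minimal illustration, not our exact implementation: the `transcribe` and `store` functions stand in for the Speech-to-Text client and the Firebase write, and are injected so the logic stays easy to test.

```javascript
// Sketch of the voice-to-text message flow (names are illustrative).
// `transcribe` stands in for a Google Cloud Speech-to-Text call,
// `store` for a Firebase write.
async function handleVoiceMessage(audioBuffer, senderId, transcribe, store) {
  // Convert the spoken audio into text via the injected transcriber.
  const text = await transcribe(audioBuffer);

  // Shape the chat message the way the hearing-impaired user receives it.
  const message = {
    sender: senderId,
    body: text,
    kind: 'voice-transcript',
    sentAt: new Date().toISOString(),
  };

  // Persist the message so the recipient's chat view can render it.
  await store(message);
  return message;
}

// Example with stubbed dependencies:
const saved = [];
handleVoiceMessage(
  Buffer.from('fake-audio'),
  'user-123',
  async () => 'hello, how are you?', // stubbed Speech-to-Text result
  async (msg) => saved.push(msg)
).then((msg) => {
  console.log(msg.body); // hello, how are you?
});
```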
Challenges we ran into
We faced both technical and team challenges: setting up environment variables, resolving Git merge conflicts, fixing visual bugs, getting audio playback working for the Text-to-Speech API, and coordinating across time zones.
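The environment-variable hurdle came down to pointing the Google Cloud client libraries at a service-account key. A small sketch of the check we could have used (the variable name is the standard one the Google Cloud SDKs read; the fail-fast helper itself is illustrative):

```javascript
// Fail fast if the Google Cloud credential isn't configured, instead of
// letting the API client throw a cryptic error on the first request.
// GOOGLE_APPLICATION_CREDENTIALS is the standard variable the SDKs read.
function resolveCredentialsPath(env) {
  const fromEnv = env.GOOGLE_APPLICATION_CREDENTIALS;
  if (fromEnv && fromEnv.trim() !== '') return fromEnv;
  throw new Error(
    'Set GOOGLE_APPLICATION_CREDENTIALS to your service-account key file'
  );
}

console.log(resolveCredentialsPath({ GOOGLE_APPLICATION_CREDENTIALS: './key.json' }));
// → ./key.json
```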
Accomplishments that we're proud of
For this project, we’re proud of successfully integrating Google Cloud’s Speech-to-Text and Text-to-Speech APIs, despite some roadblocks along the way.
What we learned
We came together to discuss ideas, and were able both to apply concepts we already knew and to build new skills with unfamiliar tools like Node.js, React, Git, and more!
What's next for Visual Speech
We’re really proud of what we’ve accomplished at this hackathon. In the future, we hope to add live two-way communication with authentication. We would also love to port the app to React Native so that people can download it from the app store.