Digitized conversations have given the hearing impaired and other persons with disabilities a way to communicate more easily despite the barriers their disabilities create. With our app, we hope to extend those technological gains to help people with hearing disabilities communicate with others in person.

What it does

Co:herent is (currently) a web app that helps the hearing impaired streamline their conversations by suggesting sentences or phrases based on the context of the conversation. We use Co:here's NLP text generation API to achieve this; to produce more accurate results, we feed the API context from the conversation and use prompt engineering to better tune the model. The other (non-hearing-impaired) person can speak to the web app naturally through speech-to-text input, and text-to-speech output helps keep the conversation flowing.
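The context-plus-prompt-engineering step can be sketched roughly like this; the prompt template, function names, and request shape are illustrative assumptions, not our exact implementation:

```javascript
// Sketch of folding conversation context into a Co:here prompt.
// buildPrompt and the template wording are illustrative, not the exact app code.
function buildPrompt(history, personaNote) {
  // Flatten the running conversation into "Speaker: text" lines
  const transcript = history
    .map(({ speaker, text }) => `${speaker}: ${text}`)
    .join('\n');
  return (
    `${personaNote}\n` +
    'Suggest a short, natural reply the user could give next.\n\n' +
    `${transcript}\nUser:`
  );
}

// Hypothetical call to Co:here's text generation endpoint with that prompt.
async function suggestReply(history) {
  const prompt = buildPrompt(history, 'The user is chatting with a friend.');
  const res = await fetch('https://api.cohere.ai/v1/generate', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.COHERE_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ prompt, max_tokens: 40 }),
  });
  const data = await res.json();
  return data.generations[0].text.trim();
}
```

Each suggestion the user picks (and each transcribed utterance from the other person) gets appended to `history`, so later completions see the whole exchange.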

How we built it

We built the entire app with Next.js, using the Co:here API together with the React Speech Recognition and React Speech Kit libraries.

Challenges we ran into

  • Coming up with an idea
  • Learning Next.js as we went, since this was everyone's first time using it
  • Calling APIs without a dedicated backend is difficult, even with a server-side-rendered framework such as Next.js
  • Coordinating and designating tasks in order to be efficient and minimize code conflicts
  • .env and SSR compatibility issues
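One way around the backend and .env issues above is a Next.js API route that proxies the Co:here call server-side, so the key in `.env` never ships in a client bundle. The route path, environment variable name, and response shape below are assumptions for illustration, not our exact code:

```javascript
// Sketch of pages/api/suggest.js: a Next.js API route that proxies the
// Co:here request server-side so COHERE_API_KEY never reaches the browser.
// In the real file this function would be the default export.
async function handler(req, res) {
  if (req.method !== 'POST') {
    res.status(405).json({ error: 'POST only' });
    return;
  }
  const upstream = await fetch('https://api.cohere.ai/v1/generate', {
    method: 'POST',
    headers: {
      // process.env is read on the server only, avoiding SSR/env leaks
      Authorization: `Bearer ${process.env.COHERE_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ prompt: req.body.prompt, max_tokens: 40 }),
  });
  const data = await upstream.json();
  res.status(200).json({ suggestion: data.generations[0].text.trim() });
}
```

The client then POSTs to `/api/suggest` instead of calling Co:here directly, which also sidesteps CORS.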

Accomplishments that we're proud of

Creating a fully functional app without cutting corners or deviating from the original plan despite various minor setbacks.

What we learned

We learned a lot about Next.js, as well as the various APIs, by using them for the first time.

What's next for Co:herent

  • Further tuning the NLP model to generate more accurate and more personal responses, and storing more information about each user through user profiles and database integration
  • Improving the TTS, and letting users choose from a menu of possible voices
  • Alternative forms of input for users who are also physically impaired (such as in cases of cerebral palsy)
  • Mobile support
  • Better UI

Built With

Next.js · Co:here API · React Speech Recognition · React Speech Kit