Learning a new language from elementary school through high school is hard. It becomes frustrating when you don't know what anything means or how to hold a conversation. We wanted to build a project that uses visual cues to help you better understand a new language while reading it.

Another reason we wanted to create this project was visual sequencing. People with visual-sequencing disabilities have a hard time keeping track of what they are reading, and often end up skipping words or entire lines. A common aid for this is colour-coding the text and providing visual cues.

What it does

Our web app allows users to visualize words in a language that they are learning to help them expand their vocabulary and gain a better understanding of the language and what they're reading.

How we built it

We used the Google Cloud Speech-to-Text API to listen to users' voices and transcribe them to text. The transcript is passed through a sentence tokenizer, and a Brill part-of-speech tagger then classifies all the "content" words. Using a Google Custom Search, we retrieved an image for each noun. On the frontend, we use the tags to colour-code the text and, with React and dynamic rendering, attach the images as visual cues.
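The backend pipeline above can be sketched roughly as follows. This is a minimal, self-contained illustration, not the project's code: the trained Brill tagger and the Google Cloud / Custom Search APIs are replaced by a tiny hand-rolled lexicon so the example runs offline, and all names (`LEXICON`, `POS_COLOURS`, `tag_and_colour`) are made up for illustration.

```python
import re

# Tiny stand-in lexicon for the Brill tagger's output (illustrative only).
LEXICON = {
    "dog": "NOUN", "ball": "NOUN",
    "chases": "VERB", "runs": "VERB",
    "big": "ADJ", "red": "ADJ",
    "the": "DET", "a": "DET",
}

# Colour assigned to each content-word class, for the frontend to render.
POS_COLOURS = {"NOUN": "blue", "VERB": "green", "ADJ": "orange"}

def tokenize_sentences(text):
    """Naive sentence tokenizer: split the transcript on ., !, ?"""
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

def tag_and_colour(sentence):
    """Tag each word, attach a colour, and flag nouns for image lookup."""
    tokens = []
    for word in sentence.split():
        tag = LEXICON.get(word.lower(), "OTHER")
        tokens.append({
            "word": word,
            "tag": tag,
            "colour": POS_COLOURS.get(tag),   # None for function words
            "needs_image": tag == "NOUN",     # fetch a visual cue for nouns
        })
    return tokens

if __name__ == "__main__":
    transcript = "The big dog chases a red ball. The dog runs!"
    for sentence in tokenize_sentences(transcript):
        print(tag_and_colour(sentence))
```

In the real app the `needs_image` flag would drive a Google Custom Search image request per noun, and the token list would be sent to the React frontend for dynamic rendering.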

Challenges we ran into

  • connecting the front end and back end
  • getting speech-to-text transcription working in real-time on the web

Accomplishments that we're proud of

  • designing a clean front-end UI
  • getting the Speech-to-Text API to work in real-time on the web and send the data to the front end

What we learned

  • how to connect a front end and back end
  • how to get speech-to-text transcription working in real-time on the web

What's next for Cue

  • implementing auto-pause so the user does not have to manually press buttons
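One way auto-pause could work is a simple energy threshold on the incoming audio: when enough consecutive frames fall below a silence level, pause automatically. The sketch below is an assumption about a possible approach, not the project's code; the frame size, threshold, and function names are all illustrative.

```python
import math

FRAME_SIZE = 1600            # samples per frame (100 ms at 16 kHz), illustrative
SILENCE_RMS = 0.01           # frames quieter than this count as silence
SILENT_FRAMES_TO_PAUSE = 10  # ~1 s of continuous silence triggers the pause

def rms(frame):
    """Root-mean-square energy of one frame of float samples."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def should_pause(frames):
    """Return True once enough consecutive frames are below the threshold."""
    silent = 0
    for frame in frames:
        if rms(frame) < SILENCE_RMS:
            silent += 1
            if silent >= SILENT_FRAMES_TO_PAUSE:
                return True
        else:
            silent = 0  # any loud frame resets the silence counter
    return False
```

On the web, the same logic could run over `AudioWorklet` buffers so the app pauses transcription without the user pressing a button.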