Learning a new language is hard. Many of us found this out the hard way, through trial and error, improving only with feedback and corrections. It is easy to receive corrections on written work, but it is hard to catch our own mistakes when speaking a new language, especially grammar mistakes (English is particularly tough for new learners because it is so ambiguous). This is where EasySpeech comes in. EasySpeech runs a grammar checker as you speak into the app: essentially Grammarly, but for speech.

What it does

EasySpeech converts your speech to text as you speak and sends it to the LanguageTool API, where it is checked for grammar and sentence-structure mistakes. The app runs on a Node.js backend server and uses Dialogflow for speech-to-text. LanguageTool receives the text and makes grammatical corrections and suggestions; the corrected text and suggestions are then returned to the user through Dialogflow.
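The grammar-check step above can be sketched as a small Node.js helper. The endpoint and `text`/`language` parameters come from LanguageTool's documented public v2 API; the function names are our own, not necessarily what the project used:

```javascript
// Build the form-encoded request for LanguageTool's /v2/check endpoint.
// (Hypothetical helper names; the endpoint and parameters are from
// LanguageTool's public API documentation.)
function buildCheckRequest(text, language = "en-US") {
  return {
    url: "https://api.languagetool.org/v2/check",
    body: new URLSearchParams({ text, language }).toString(),
  };
}

// Send the transcribed speech to LanguageTool and return its JSON result,
// which contains a `matches` array describing each detected issue.
async function checkGrammar(text, language = "en-US") {
  const { url, body } = buildCheckRequest(text, language);
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body,
  });
  if (!res.ok) throw new Error(`LanguageTool returned ${res.status}`);
  return res.json();
}
```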

How we built it

There were many components to this application. First, the user interacts with the web client, where they can record their voice. This front end was built in React, with JavaScript handling the logic of sending the recorded audio to Dialogflow and receiving the transcribed text back. The transcript is then sent to the LanguageTool API from our Node.js server; the API returns a list of corrections, and an algorithm applies them to construct a corrected sentence, which is sent back to the user.
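The sentence-construction algorithm can be sketched as follows. LanguageTool reports each issue as a match with an `offset`, a `length`, and a list of suggested `replacements` (per its public API); applying matches back-to-front keeps earlier offsets valid. The function name and the choice of always taking the top suggestion are our own assumptions:

```javascript
// Apply LanguageTool matches to the original text to build a corrected
// sentence. Matches are applied from the end of the string backwards so
// that earlier offsets remain valid after each substitution.
function applyCorrections(text, matches) {
  let corrected = text;
  const sorted = [...matches].sort((a, b) => b.offset - a.offset);
  for (const m of sorted) {
    // Some matches are advice only and carry no replacement; skip them.
    if (!m.replacements || m.replacements.length === 0) continue;
    const best = m.replacements[0].value; // take the top suggestion
    corrected =
      corrected.slice(0, m.offset) + best + corrected.slice(m.offset + m.length);
  }
  return corrected;
}
```

For example, a match covering "go" at offset 3 in "He go to school" with the suggestion "goes" yields "He goes to school".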

Challenges we ran into

  • Connecting Dialogflow to the front end and the back end. We hit a bug where only one user could connect to the local server at a time, and the connection between Dialogflow and the server was also unstable.

  • Difficulty understanding how to script the Dialogflow fulfillment to use responses from the local server.

  • Working with the sparsely documented LanguageTool API.

Accomplishments that we're proud of

  • Successfully set up a running Node.js server

  • Developed a React application with Dialogflow embedded

What we learned

  • Two team members used React for the first time to create the front end

  • A team member learned Node.js to connect to the LanguageTool API

  • All of us learned to work with Dialogflow for the first time

What's next for Easy Speech

  • Implementing more intents to create conversational interaction with the user, where the conversational speech is also checked by LanguageTool

  • Outputting the result as audio

  • Directly connecting the response returned from the API to the front end

  • Debugging the connection between the Node.js server and Dialogflow
