Our ideas floated around problems the pandemic brought up, and we eventually settled on a video chat service using WebRTC. We were also curious about implementing some form of machine learning, so we did a deep dive into Azure's Cognitive Speech Services.

What it does

A video chat service built on Node.js that uses WebRTC for peer-to-peer calls and Azure Cognitive Speech Services to translate each speaker's words into their partner's target language.
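As a rough illustration, continuous speech translation with the Azure Speech SDK for JavaScript (`microsoft-cognitiveservices-speech-sdk`) is typically configured along these lines. This is a hedged configuration sketch, not our exact code; the key, region, and language choices are placeholders.

```javascript
// Sketch of Azure Speech translation setup. Assumes the
// microsoft-cognitiveservices-speech-sdk npm package is installed;
// <AZURE_KEY> and <AZURE_REGION> are placeholders.
const sdk = require("microsoft-cognitiveservices-speech-sdk");

const config = sdk.SpeechTranslationConfig.fromSubscription("<AZURE_KEY>", "<AZURE_REGION>");
config.speechRecognitionLanguage = "en-US"; // source language
config.addTargetLanguage("es");             // target language

// Capture audio from the default microphone.
const audioConfig = sdk.AudioConfig.fromDefaultMicrophoneInput();
const recognizer = new sdk.TranslationRecognizer(config, audioConfig);

// Fires once per recognized utterance with its translation(s).
recognizer.recognized = (_sender, event) => {
  if (event.result.reason === sdk.ResultReason.TranslatedSpeech) {
    console.log(event.result.translations.get("es"));
  }
};

recognizer.startContinuousRecognitionAsync();
```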

How to use it:

Visit our domain and select a room number for you and your partner to use.

Once in your private room, select your source language (the language you'll be speaking) and your target language (the language you want your words to be translated to).

Confirm that your selected languages are correct, as you won't be able to change them unless you leave the room and join another.
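The room-and-language flow above can be sketched as a small in-memory registry on the signaling server. The names below are illustrative, not our actual code: each room holds at most two peers, and a peer's language pair locks once set, matching the rule that you must leave and rejoin to change it.

```javascript
// Hypothetical in-memory room registry (illustrative names).
function createRooms() {
  return new Map(); // roomId -> { peers: [], languages: {} }
}

// A room admits at most two distinct peers.
function joinRoom(rooms, roomId, peerId) {
  if (!rooms.has(roomId)) rooms.set(roomId, { peers: [], languages: {} });
  const room = rooms.get(roomId);
  if (room.peers.length >= 2 || room.peers.includes(peerId)) return false;
  room.peers.push(peerId);
  return true;
}

// Language choices lock on first set; changing them requires rejoining.
function setLanguages(rooms, roomId, peerId, source, target) {
  const room = rooms.get(roomId);
  if (!room || !room.peers.includes(peerId)) return false;
  if (room.languages[peerId]) return false; // already locked
  room.languages[peerId] = { source, target };
  return true;
}

// Leaving frees the seat and clears the locked languages.
function leaveRoom(rooms, roomId, peerId) {
  const room = rooms.get(roomId);
  if (!room) return;
  room.peers = room.peers.filter((p) => p !== peerId);
  delete room.languages[peerId];
  if (room.peers.length === 0) rooms.delete(roomId);
}
```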


Accomplishments that we're proud of
  • We learned so much about EVERYTHING
  • We honestly had a great time throughout all 24 hours of the event

Challenges we ran into

Node.js and the Azure Cognitive Speech Services infrastructure were both fairly new to most of us. Taking a deep dive into Azure Cognitive Services gave us a lot of insight and inspired us to use more of their tools in the future.

What we learned

We all got an in-depth look at:

  • Node.js
  • WebRTC
  • Azure Cognitive Speech Services
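One thing WebRTC taught us is that peers can't find each other on their own: the app needs a signaling channel to exchange SDP offers, answers, and ICE candidates before media can flow. A hedged sketch of that relay logic, with hypothetical names and the transport (e.g. Socket.IO) injected as a `send` callback:

```javascript
// Hypothetical signaling relay: forwards WebRTC handshake messages
// (offers, answers, ICE candidates) to the other peer in a room.
// `send(peerId, message)` is injected so the transport stays out of the logic.
function makeRelay(send) {
  const peersByRoom = new Map(); // roomId -> Set of peerIds

  return {
    join(roomId, peerId) {
      if (!peersByRoom.has(roomId)) peersByRoom.set(roomId, new Set());
      peersByRoom.get(roomId).add(peerId);
    },
    // Forward a message to everyone in the room except the sender;
    // returns how many peers it was delivered to.
    relay(roomId, fromPeer, message) {
      const peers = peersByRoom.get(roomId);
      if (!peers) return 0;
      let delivered = 0;
      for (const peerId of peers) {
        if (peerId !== fromPeer) {
          send(peerId, message);
          delivered++;
        }
      }
      return delivered;
    },
  };
}
```

Keeping the relay pure like this made the handshake flow easy to reason about separately from the websocket plumbing.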

What's next for Language Liquidator

  • Make Front-End Look Pretty
  • Integrate a text-to-speech component
  • Ability to host separate rooms