Professors rarely get enough feedback from their students. They are eager to improve their teaching, but without feedback it is nearly impossible. Today, facial emotion recognition and speech recognition are at our fingertips, and we were inspired to use these technologies to tackle this problem. The idea originally came from Prof. Scott Yabiku, whom I met at the Symposium for Teaching and Learning with Technology (TLT).

What it does

Gauge takes a picture from your webcam every 5 seconds and recognizes the emotions on the faces it sees. It also transcribes what you say into text.
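At its core, each snapshot reduces to picking the strongest emotion from the confidence scores the recognition API returns for a face. A minimal sketch of that step (the `scores` shape mirrors an Azure-style emotion response; the helper name is ours):

```javascript
// Given an emotion scores object from the face recognition API,
// e.g. { anger: 0.01, happiness: 0.9, neutral: 0.07, sadness: 0.02 },
// return the label with the highest confidence.
function dominantEmotion(scores) {
  return Object.keys(scores).reduce(
    (best, emotion) => (scores[emotion] > scores[best] ? emotion : best)
  );
}

// Example: a mostly-happy face
const sample = { anger: 0.01, happiness: 0.9, neutral: 0.07, sadness: 0.02 };
console.log(dominantEmotion(sample)); // "happiness"
```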

How we built it

We used React.js and Redux as our main framework. For emotion recognition, we used Microsoft Azure's API. For speech recognition, we used the webkitSpeechRecognition API that is available in Chrome and Firefox.
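Redux is what ties the two feeds together: each webcam result and each speech result lands in the store as an action. A minimal reducer sketch under our assumptions (the action names and state shape are illustrative, not our exact code):

```javascript
// Illustrative Redux reducer: accumulates detected emotions and
// speech transcript fragments in one store slice.
const initialState = { emotions: [], transcript: '' };

function sessionReducer(state = initialState, action) {
  switch (action.type) {
    case 'EMOTION_DETECTED':
      // payload: an emotion label from the Azure response
      return { ...state, emotions: [...state.emotions, action.payload] };
    case 'TRANSCRIPT_APPENDED':
      // payload: a final result string from webkitSpeechRecognition
      return {
        ...state,
        transcript: (state.transcript + ' ' + action.payload).trim(),
      };
    default:
      return state;
  }
}
```

React components subscribe to this slice to render the live emotion feed and the running transcript.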

Challenges we ran into

We had a really hard time getting Microsoft Azure's speech recognition API to work. But we found a gem buried in the new WebKit features in Chrome and Firefox.

Accomplishments that we're proud of

We are so proud we made it happen as a team! Everyone contributed to the final product.

What we learned

We learned that some things are not meant to be done in the browser but on the backend, such as the require('fs') call that Microsoft Azure's speech recognition API depends on.

What's next for Gauge

We want to add a final summary of the results, but we didn't have enough time. We believe this product has many other possible applications, such as business sales and remote mental-health visits with doctors.


Please try out our demo using either Chrome or Firefox.
