Inspiration

We were inspired by a demo of Google's ML Vision API in which a face was analyzed to determine emotion. Since our team was already focused on education, we decided to adapt this tool to the classroom.

What it does

Our web app continuously captures image data of students in class and sends it to Google's ML Vision API for analysis. The algorithms return data such as happiness and attentiveness. In addition, the lecture is transcribed in real time, with relevant topics highlighted, using Google's Speech API and Text Analytics.
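
Roughly, the emotion-analysis step comes down to a face-detection call against the Vision API. The sketch below assumes a captured frame has already been saved server-side as frame.jpg and that credentials are configured; the per-face likelihood fields are what Vision returns, while mapping them onto classroom "happiness" and "attentiveness" scores is our own interpretation.

```js
// Sketch: analyze one captured frame with the Cloud Vision face-detection API.
// Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service-account key
// and that a frame has already been saved to disk as 'frame.jpg'.
const vision = require('@google-cloud/vision');

async function analyzeFrame(path) {
  const client = new vision.ImageAnnotatorClient();
  const [result] = await client.faceDetection(path);

  // Each detected face carries emotion likelihoods such as joyLikelihood.
  return (result.faceAnnotations || []).map(face => ({
    joy: face.joyLikelihood,        // e.g. 'VERY_LIKELY'
    sorrow: face.sorrowLikelihood,
    surprise: face.surpriseLikelihood,
  }));
}

analyzeFrame('./frame.jpg').then(faces => console.log(faces));
```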

How we built it

Our web app makes use of NodeJS, React, MongoDB, and the Google Speech and Vision APIs.
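
As a minimal sketch of how these pieces fit together on the server side (the route, database, and collection names here are illustrative, not our exact schema), an Express endpoint can receive the analyzed emotion data from the React front-end and persist it in MongoDB:

```js
// Sketch: Express server that stores per-frame emotion results in MongoDB.
// Route and collection names are illustrative.
const express = require('express');
const { MongoClient } = require('mongodb');

async function main() {
  const mongo = await MongoClient.connect('mongodb://localhost:27017');
  const snapshots = mongo.db('classroom').collection('snapshots');

  const app = express();
  app.use(express.json());

  // The React front-end posts the emotion data analyzed for each frame.
  app.post('/api/snapshots', async (req, res) => {
    await snapshots.insertOne({ ...req.body, createdAt: new Date() });
    res.sendStatus(201);
  });

  app.listen(3000, () => console.log('Listening on port 3000'));
}

main().catch(console.error);
```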

Challenges we ran into

Adapting the available documentation to our specific use cases.

Accomplishments that we're proud of

This was the first full-stack application we built from the ground up, with a database, server, and front-end.

What we learned

We learned how to make effective use of MongoDB, as well as Google's machine-learning APIs.
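
For instance, the real-time transcription piece reduces to a Speech API call along these lines (the file name, audio encoding, and sample rate are assumptions about the captured audio, not our exact configuration):

```js
// Sketch: transcribe a short audio chunk with the Cloud Speech API.
const fs = require('fs');
const speech = require('@google-cloud/speech');

async function transcribe(path) {
  const client = new speech.SpeechClient();
  const audioBytes = fs.readFileSync(path).toString('base64');

  const [response] = await client.recognize({
    audio: { content: audioBytes },
    config: { encoding: 'LINEAR16', sampleRateHertz: 16000, languageCode: 'en-US' },
  });

  // Join the top alternative of each result into one transcript string.
  return response.results
    .map(result => result.alternatives[0].transcript)
    .join(' ');
}

transcribe('./lecture-chunk.wav').then(text => console.log(text));
```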
