Inspiration: We wanted professors to be able to tell when their class was unfocused or bored, so they could adjust their teaching on the spot. That kind of feedback is hard to get in virtual classrooms.

What it does: A CNN model (which we did not have time to fully train) analyzes the video captured during meetings to detect the overall classroom mood and gives the professor instant feedback as a graph.
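The "overall classroom mood" step boils down to aggregating per-frame emotion predictions into one summary value. Here is a minimal sketch of that aggregation in plain Python; the emotion labels, sampling, and the `classroom_mood` helper are illustrative assumptions, not the project's actual code.

```python
from collections import Counter

# Hypothetical per-frame emotion labels predicted by the CNN for a
# short stretch of class video (label names are assumptions).
frame_labels = ["focused", "focused", "bored", "focused", "confused",
                "bored", "bored", "focused", "focused", "bored"]

def classroom_mood(labels):
    """Return the dominant emotion and the share of frames it covers."""
    counts = Counter(labels)
    mood, n = counts.most_common(1)[0]
    return mood, n / len(labels)

mood, share = classroom_mood(frame_labels)
print(mood, share)  # focused 0.5
```

Computing this over a sliding window of frames would give the time series that the feedback graph plots.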

How we built it: We build our model with TensorFlow's Keras layers. The data comes from the i-bug database. We split the videos into frames, which are then resized to fit our model's input.
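A small emotion-recognition CNN in the Keras Sequential API might look like the sketch below. The input resolution, layer sizes, and the seven-class output are assumptions for illustration, not the team's actual architecture.

```python
import tensorflow as tf

IMG_SIZE = 48        # assumed resolution of the resized frames
NUM_EMOTIONS = 7     # assumed number of emotion classes

# Minimal CNN sketch: two conv/pool stages, then a classifier head.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Training would then call `model.fit` on the resized frames and their emotion labels.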

Challenges we ran into: Finding a good database. We built models for two different databases (reaching accuracies of almost 70% for emotion recognition) before settling on the data we use now.

Accomplishments that we're proud of:

Our model developer built a neural network for the first time (knowing almost nothing about machine learning beforehand). We all learned Python over the weekend. We quickly split videos into thousands of frames and stored them automatically, all in a language we didn't know.

What we learned: Everything we picked up about emotion detection was fascinating. We also acquired a lot of knowledge about GitHub and, finally, learned how to handle big data sets.

What's next for MoodBoop.space: Train the model until it achieves high accuracy. Adapt our idea to existing video-conference applications. Sell our idea, make millions, rule the world!
