Inspiration

Having been bombarded with constant emails demanding that we complete a SOLE, the (very, very long) lecture feedback survey, and seeing the “historically” low participation rate in the Chemistry Dept., we figured there had to be a more practical way for lecturers to get feedback. Moreover, the feedback is requested from students who have already completed the lecture course, so the results have little to no effect on them.

What if there were an app that gave lecturers quick feedback without bothering students too much? What if lecturers could implement the suggestions right away, so students benefited immediately from giving feedback? What if lecturers who make good memes could keep making them, and those whose memes fall flat could try a different approach?

How We Did It

We started by building the intelligence behind our platform. We integrated the Emotion API, which gave us facial detection and emotion classification. However, we noticed that to overcome the technical limits of the AI (it is notoriously bad at classifying confusion and other more intricate facial expressions), we would have to pool data from other sources. Our first idea was simply to use EEG headsets with an LSTM to classify emotional state, but that is very expensive and doesn't scale; instead, we opted to take in raw written feedback and use RASA NLU to understand it. We trained a model on 118 hand-picked samples and were able to classify several other important metrics.
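
To give a flavour of that first step, here is a minimal sketch of grabbing a webcam frame with OpenCV and sending it to the Emotion API. The endpoint region, the subscription key, and the response handling are illustrative placeholders rather than our exact code.

```python
# Sketch: capture one webcam frame and classify the emotions in it.
import cv2
import requests

# Region prefix and key are placeholders; they depend on your subscription.
EMOTION_URL = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"
EMOTION_KEY = "<your-subscription-key>"

def capture_frame():
    """Capture a single frame from the default webcam as JPEG bytes."""
    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()
    cam.release()
    if not ok:
        raise RuntimeError("could not read from webcam")
    ok, jpeg = cv2.imencode(".jpg", frame)
    return jpeg.tobytes()

def classify_emotions(image_bytes):
    """Return one {"faceRectangle": ..., "scores": ...} dict per detected face."""
    resp = requests.post(
        EMOTION_URL,
        headers={
            "Ocp-Apim-Subscription-Key": EMOTION_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for face in classify_emotions(capture_frame()):
        top = max(face["scores"], key=face["scores"].get)
        print(top, round(face["scores"][top], 2))
```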

We then moved on to the framework for Feedbacker. We built a total of five different APIs/servers with endpoints for thorough data manipulation. These APIs are publicly available and free to use (as long as our budget permits), and we included several proofs of concept to show how easily they can be integrated. We continued to build upwards, adding robust frameworks and opting for scalable solutions (e.g. we used Sanic instead of Flask, since Sanic's asynchronous request handling holds up far better under load). We kept our Custom Vision model for detecting applause in the cloud, as we recognized that serving a TensorFlow model ourselves would turn into a long ordeal.
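
As a rough illustration of what one of those Sanic servers looks like, here is a stripped-down async endpoint; the route name and payload shape are made up for the example rather than taken from our actual APIs.

```python
# Minimal Sanic sketch of an async feedback endpoint (illustrative route/payload).
from sanic import Sanic
from sanic.response import json as json_response

app = Sanic("feedbacker_api")

@app.route("/feedback", methods=["POST"])
async def submit_feedback(request):
    payload = request.json or {}
    text = payload.get("text", "")
    if not text:
        return json_response({"error": "empty feedback"}, status=400)
    # In the real service, this is where the comment would be handed to the
    # NLU model and the result stored for the presenter's dashboard.
    return json_response({"received": True, "length": len(text)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```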

Lastly, we decided to store some presentation data using MongoDB and a quick ExpressJS API (built for speed this time). We then served this through a WebRTC application in which one client hosts the slide deck (used for presenting; it can be dragged to a separate window) and another client on the same socket hosts the tracker. The tracker can both control the presentation (click, arrow keys, spacebar, all the standard navigation conveniences) and chart the emotional state of the crowd using ChartJS.
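
The real storage layer is the ExpressJS/Node MongoClient API described above; the sketch below (written in Python with pymongo, for consistency with the other snippets here) only shows the rough shape of snapshot document we have in mind. The database, collection, and field names are assumptions.

```python
# Python/pymongo stand-in for the Node MongoClient code: one crowd snapshot
# per document, keyed by presentation name.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
snapshots = client["feedbacker"]["snapshots"]   # db/collection names assumed

def record_snapshot(presentation, emotion_scores):
    """Store one timestamped snapshot of the crowd's averaged emotion scores."""
    snapshots.insert_one({
        "presentation": presentation,           # deck and tracker share this key
        "timestamp": datetime.now(timezone.utc),
        "emotions": emotion_scores,             # e.g. {"happiness": 0.6, ...}
    })

def latest_snapshots(presentation, limit=50):
    """Fetch the most recent snapshots for the tracker's ChartJS graph."""
    return list(snapshots.find({"presentation": presentation})
                         .sort("timestamp", -1)
                         .limit(limit))
```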

We then realized that a webcam was not ideal for recognition, and that a phone camera would give us more range and resolution. We built a camera-based, cross-platform React Native mobile app that hosts the data, identifies people, and tells the user the two top emotional states present in the picture. It also feeds this data through to an API that stores it in a Mongo collection, from which we can identify the presentation by name and provide further services (such as real-time analysis, post-mortem NLP feedback, etc.).
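
For example, the top-two calculation can be done by summing the per-face emotion scores the Emotion API returns and taking the two largest totals; the snippet below is an illustrative sketch rather than the app's exact code.

```python
# Derive the two dominant emotions in a crowd picture from per-face scores.
from collections import Counter

def top_two_emotions(faces):
    """faces: list of {"scores": {...}} dicts, one per detected face."""
    totals = Counter()
    for face in faces:
        for emotion, score in face["scores"].items():
            totals[emotion] += score
    return [emotion for emotion, _ in totals.most_common(2)]

# Example: a crowd that is mostly happy, with some surprise.
faces = [
    {"scores": {"happiness": 0.8, "surprise": 0.1, "neutral": 0.1}},
    {"scores": {"happiness": 0.5, "surprise": 0.4, "neutral": 0.1}},
]
print(top_two_emotions(faces))  # ['happiness', 'surprise']
```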

Overall, we hope Feedbacker becomes the ultimate presentation tool, giving speakers a new, far-reaching insight into their audience.

Tech Used

  • NLP/NLU (RASA NLU as seen in skill-nlp/)

  • OpenCV for getting webcam data

  • React Native for Mobile app

  • Microsoft Cognitive Services - Emotion API for emotion classification

  • Microsoft Custom Vision for applause detection (crowd-only)

  • WebRTC (Socket.io) for creating the deck + tracker pairing where we can control the presentation

  • We originally used Vue but rewrote the front end in plain JS + HTML (with the Fetch API) to reduce page latency

  • A Bootstrap version of "Google Forms" to prove our API is usable

  • Sanic framework with Python for async web servers

  • MongoDB for storage + MongoClient in Node.js

  • Express.js for JS APIs

  • ...and more! If you'd like more information, please come by and ask us; we'd be more than happy to go into the specifics of our tech.

What it does

How many presentations have you seen? How often were you confused by a point, or wanted to hear more, but did not want to interrupt the presenter’s flow? How many presentation surveys have you completed, even though your feedback will not benefit you?

Feedbacker solves this problem. It is an opportunity for good presenters to connect with their audience and adapt their talks when it really matters.

We used the Microsoft Cognitive Services Emotion API to recognise how the audience is feeling and provide real-time feedback to the presenter. Furthermore, our natural language processing feedback form provides statistics afterward to show an overview of main sentiments, as well as the individual comments and specific feedback.
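
As a sketch of that NLU side (assuming the pre-1.0 rasa_nlu Python API, and with made-up intent names, file paths, and example comments), training on hand-labelled examples and then parsing and tallying feedback looks roughly like this:

```python
# Illustrative rasa_nlu (pre-1.0 API) training + parsing for the feedback form.
from collections import Counter

from rasa_nlu import config
from rasa_nlu.model import Trainer
from rasa_nlu.training_data import load_data

# data/nlu.md holds hand-picked examples grouped by intent, e.g.:
# ## intent:confused
# - I got lost at the second slide
# - the derivation went too fast
training_data = load_data("data/nlu.md")
trainer = Trainer(config.load("config.yml"))
interpreter = trainer.train(training_data)

result = interpreter.parse("I couldn't follow the last example at all")
print(result["intent"]["name"], result["intent"]["confidence"])

# Post-lecture overview: tally the predicted intent for every comment.
comments = ["great pacing", "too fast in the middle", "loved the memes"]
overview = Counter(interpreter.parse(c)["intent"]["name"] for c in comments)
print(overview.most_common())
```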

Challenges

Google Slides doesn’t offer an API for controlling a live presentation, so we designed a WebRTC-based alternative for slide control. In addition, we had to weave through several servers and wrap third-party APIs inside our own to manipulate the data effectively.
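
The idea behind the slide-control alternative is a small relay: the tracker emits navigation events on the shared socket and the deck applies them. The actual implementation is the JavaScript Socket.io/WebRTC application described earlier; the sketch below uses python-socketio attached to Sanic purely as an illustration, with made-up event names and payloads.

```python
# Illustrative relay (Python stand-in for the JS Socket.io server).
import socketio
from sanic import Sanic

sio = socketio.AsyncServer(async_mode="sanic", cors_allowed_origins="*")
app = Sanic("deck_tracker_relay")
sio.attach(app)

@sio.event
async def navigate(sid, data):
    # The tracker emits {"action": "next" | "prev"}; forward it to every
    # other client on the socket, i.e. the deck, which applies the keypress.
    await sio.emit("navigate", data, skip_sid=sid)

@sio.event
async def emotions(sid, data):
    # Crowd emotion updates flow the other way: camera/deck -> tracker chart.
    await sio.emit("emotions", data, skip_sid=sid)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8001)
```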

Accomplishments that we're proud of

We are very proud of how much we managed to build in the time we had, given our skill levels.

What's next for Feedbacker

The very next step would be to integrate the live emotion caption app with the post-lecture feedback analysis feature.

At the moment, the Microsoft Emotion API can’t recognise more than 64 faces at once, but with some more time it would be possible to split a picture of 100 people into sections so that everyone’s emotions are taken into account. However, this is not essential, as the API picks up different people on each snapshot.

Another step for Feedbacker would be integration with teaching software such as Panopto and Blackboard to aid in university lectures.
