We are passionate about giving everyone the ability to have their voice heard! Public speaking is a fear many of us face, and because of that fear and a general lack of resources, many talented people never get the chance to receive credit for their work or to make their opinions known.

We began by brainstorming the key pain points facing speakers today and arrived at a central issue: the lack of opportunities to present in front of a real audience in an environment that feels safe. We are therefore building a virtual environment in which a virtual audience reacts and gives feedback to the speaker, so the speaker can practice and improve without feeling unsafe or worried. We built this environment on the NReal platform.

To generate reactions in real time, we use natural language processing to extract sentiment from the speaker's spoken content (using VADER, the Valence Aware Dictionary and sEntiment Reasoner). We also trained a neural network on speech audio (using 1,600+ video clips from the RAVDESS dataset). Combining these two dimensions of analysis, we built a model that listens to the user's speech, generates a reaction, and feeds it into the behavior of our virtual audience.
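As a rough sketch of the text side, the vaderSentiment Python package scores a transcript chunk as shown below; the sample sentence is illustrative, and the compound-score thresholds are VADER's commonly recommended cutoffs rather than values tuned for our project.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Score a chunk of transcribed speech; VADER returns neg/neu/pos plus a
# normalized "compound" score in [-1, 1].
scores = analyzer.polarity_scores("I'm really excited to share these results with you!")
print(scores)  # e.g. {'neg': 0.0, 'neu': 0.5, 'pos': 0.5, 'compound': 0.7}

# Conventional thresholding of the compound score into a coarse label.
if scores["compound"] >= 0.05:
    label = "positive"
elif scores["compound"] <= -0.05:
    label = "negative"
else:
    label = "neutral"
```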
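The audio side can be sketched along these lines, assuming MFCC features averaged over time and a small Keras classifier; the feature choice, architecture, and label set here are illustrative assumptions, not our exact model.

```python
import librosa
import numpy as np
import tensorflow as tf

EMOTIONS = ["engaged", "happy", "sad", "surprised"]  # illustrative label set

def extract_features(path: str, n_mfcc: int = 40) -> np.ndarray:
    """Load one RAVDESS clip and average its MFCCs over time into a fixed vector."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# A small feed-forward classifier over the pooled MFCC vector.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(40,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(len(EMOTIONS), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Training would then be: model.fit(X_train, y_train, epochs=..., validation_split=...)
```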

Our team created audience reaction animations for common emotions such as engaged, happy, sad, and surprised, and added verbal feedback corresponding to each emotion. While the user wears the device and speaks, they can see the virtual audience reacting in real time.
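Conceptually, the step from a detected emotion to an audience reaction is a simple lookup; the animation and feedback names below are hypothetical placeholders, not our actual asset names.

```python
# Hypothetical emotion -> audience reaction table; animation and phrase
# names are placeholders for illustration.
REACTIONS = {
    "engaged":   {"animation": "lean_forward",   "feedback": "Great point!"},
    "happy":     {"animation": "smile_and_nod",  "feedback": "I love this part."},
    "sad":       {"animation": "concerned_look", "feedback": "That was moving."},
    "surprised": {"animation": "eyes_widen",     "feedback": "Wow, really?"},
}

def audience_reaction(emotion: str) -> dict:
    """Return the reaction for a detected emotion, falling back to idle."""
    return REACTIONS.get(emotion, {"animation": "idle", "feedback": ""})
```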

We have also done groundwork on improvement-focused feedback, such as speaking speed, tonal variety, and volume. We have not had time to build the front end for these features yet, but they are areas we are looking to work on going forward!
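As a sketch of what these metrics could look like, the snippet below estimates loudness, pitch variety, and a rough speaking-rate proxy from an audio file with librosa; the specific measures are assumptions for illustration, not the ones we have settled on.

```python
import numpy as np
import librosa

def delivery_metrics(path: str) -> dict:
    """Rough delivery metrics: volume, volume variation, pitch variety, pace."""
    y, sr = librosa.load(path, sr=22050)

    # Frame-wise loudness (RMS energy) for volume level and variation.
    rms = librosa.feature.rms(y=y)[0]

    # Fundamental frequency; pyin marks unvoiced frames as NaN.
    f0, _, _ = librosa.pyin(y, fmin=80, fmax=400, sr=sr)
    voiced = f0[~np.isnan(f0)]

    # Onset count per second as a crude proxy for speaking rate.
    onsets = librosa.onset.onset_detect(y=y, sr=sr)
    duration = len(y) / sr

    return {
        "mean_volume": float(rms.mean()),
        "volume_variation": float(rms.std()),
        "pitch_variety_hz": float(voiced.std()) if voiced.size else 0.0,
        "onsets_per_second": float(len(onsets) / duration),
    }
```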
