You walk up to the front of the room. The silence is deafening. You feel your mouth go dry. “Do I look confident? No, probably not. Wow, that's a lot of people. What if I stutter or forget what to say? Oh God, can they see my hands shaking?” You begin to recite the lines you practiced for the first time earlier this morning. After the first few sentences you begin to get into the zone, right at the moment your teacher pipes up: “Hey, we can’t hear you from the back. Can you start again?” Man, don’t presentations suck?
Only if you’re unprepared.
Oral communication skills are crucial for success in all domains of life, yet remarkably few people are willing to practice them. An outstanding oration has so many intricacies that it’s hard to be sure you’re checking all the boxes.
You’ve been told to practice in front of friends and family, but they don’t have all the time in the world. You’ve been told to practice in front of the mirror, but that doesn’t give you much reliable feedback. Going out of your way to practice delivering effective presentations seems like a hassle, and this leads many presenters to fail at inspiring their audience.
Practice makes perfect, so how can we give presenters a way to receive instant, actionable feedback on their presentations?
What it does
It provides in-depth analysis of a person’s speech, pitch, or presentation, along with personalized suggestions to improve their communication skills.
How we built it
- OpenCV for video capture functionality
- TensorFlow and Keras for the machine learning model
- Flask for the backend server
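At a high level, the pieces fit together as a pipeline: OpenCV pulls frames, a Keras model scores each frame, and Flask serves the results. The sketch below mirrors that shape with plain-Python stubs standing in for the real OpenCV capture and Keras model — names like `capture_frames` and `score_frame` are illustrative, not the project's actual API.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator


def capture_frames(n: int) -> Iterator[bytes]:
    """Stub standing in for cv2.VideoCapture: yields fake frames."""
    for i in range(n):
        yield f"frame-{i}".encode()


@dataclass
class Feedback:
    frame_index: int
    confidence_score: float  # e.g. how confident the speaker appears


def score_frame(frame: bytes) -> float:
    """Stub standing in for a Keras model's predict() call."""
    return (len(frame) % 10) / 10.0


def analyze(frames: Iterable[bytes]) -> list[Feedback]:
    """Run the model over every captured frame and collect feedback.

    In the real app, a Flask route would return this list as JSON."""
    return [Feedback(i, score_frame(f)) for i, f in enumerate(frames)]


if __name__ == "__main__":
    for fb in analyze(capture_frames(3)):
        print(fb)
```

Keeping capture, scoring, and serving behind separate functions like this is what lets each layer be swapped out independently.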
Challenges we ran into
Setting up our virtual environments was a huge challenge: we ran into illegal hardware instruction crashes and Nvidia GStreamer errors. Getting the machine learning model to run live on our website was another big obstacle.
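One way to tame this kind of environment breakage is to pin exact dependency versions in a `requirements.txt`. The versions below are illustrative, not the exact ones we shipped; `tensorflow-cpu` sidesteps the GPU code paths where Nvidia/GStreamer errors tend to surface, and `opencv-python-headless` drops GUI dependencies that a server doesn't need.

```
tensorflow-cpu==2.9.1
opencv-python-headless==4.6.0.66
flask==2.2.2
```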
What we learned
- Good coding practices like file structure and system design
- Useful technologies like Jinja and Flask
- Deploying a machine learning model with a frontend & backend
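The Flask + Jinja pattern we learned boils down to a route that feeds model output into a template. Here is a minimal sketch with a stubbed-out `predict` function in place of the real model — the route, template, and stub are illustrative, not the project's actual code.

```python
from flask import Flask, render_template_string, request

app = Flask(__name__)

# Inline Jinja template; in a real app this would live in templates/.
PAGE = """
<h1>Prepresent</h1>
{% if score is not none %}<p>Confidence score: {{ score }}</p>{% endif %}
<form method="post"><button>Analyze</button></form>
"""


def predict() -> float:
    """Stub for the deployed machine learning model."""
    return 0.87


@app.route("/", methods=["GET", "POST"])
def index():
    # Only run the model when the user submits the form.
    score = predict() if request.method == "POST" else None
    return render_template_string(PAGE, score=score)


if __name__ == "__main__":
    app.run(debug=True)
```

The frontend stays a dumb template; all model logic lives behind the route, which is what makes the deployment story manageable.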
What's next for Prepresent
Modular Approach - with the video input server up and running, we simply need to plug in additional models and display their results on the output overview
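That modular approach could look something like the sketch below: each model implements one small interface, and the pipeline just loops over whatever models are registered, so adding a new analysis means appending to a list. The `Analyzer` protocol and the model names are hypothetical, not taken from the project.

```python
from typing import Protocol


class Analyzer(Protocol):
    """One pluggable model: takes a frame, returns a named score."""
    name: str

    def analyze(self, frame: bytes) -> float: ...


class EyeContactModel:
    name = "eye_contact"

    def analyze(self, frame: bytes) -> float:
        return 0.9  # stub score; a real model would inspect the frame


class PostureModel:
    name = "posture"

    def analyze(self, frame: bytes) -> float:
        return 0.75  # stub score


def run_pipeline(frame: bytes, models: list[Analyzer]) -> dict[str, float]:
    """Apply every registered model to the frame; adding a model
    means appending it to the list -- nothing else changes."""
    return {m.name: m.analyze(frame) for m in models}
```

Usage: `run_pipeline(b"frame", [EyeContactModel(), PostureModel()])` returns one score per registered model, ready to render on the output overview.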