Emotion, language, and our ability to comprehend and adapt within a social framework are pivotal to success and fulfillment as human beings. We live in a time when the physical and rhetorical impressions we make on others greatly affect professional and personal outcomes.
We also live in a time increasingly embedded with technology that we interact with on a direct, personal basis and that gives us immediate feedback in return. We decided to pitch an idea a few steps ahead, into the realm of integrative computer vision, and hack together a webapp environment that records a conversation you have, integrates various audio and visual inputs, and outputs a report on the progression of your emotional appearance along with feedback on the results.
We combined APIs from several companies: emotion recognition from Microsoft, speech-to-text from Google, and sentiment and emotion text analysis from IBM, to formulate an integrative, comprehensive analysis of human verbal and nonverbal communication. We synthesized information from multiple facets of speaking and drew on popular psychoanalytic resources to offer suggestions for improvement.
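To illustrate how scores from the different services might be merged, here is a minimal sketch of an aggregation step. It assumes each vendor response has been reduced to a dictionary of emotion-to-score values (the function name, equal weighting between facial and textual channels, and the example numbers are all illustrative, not the project's actual implementation):

```python
from statistics import mean

def aggregate_emotions(facial_frames, text_segments):
    """Merge per-frame facial emotion scores with per-segment text
    emotion scores into one report, averaging each channel first
    and then the two channels equally (an assumed weighting)."""
    emotions = set()
    for scores in facial_frames + text_segments:
        emotions.update(scores)
    report = {}
    for emotion in sorted(emotions):
        facial = mean(s.get(emotion, 0.0) for s in facial_frames)
        textual = mean(s.get(emotion, 0.0) for s in text_segments)
        report[emotion] = round((facial + textual) / 2, 3)
    return report

# Made-up scores shaped like typical per-emotion API responses
facial_frames = [
    {"happiness": 0.8, "sadness": 0.1},
    {"happiness": 0.6, "sadness": 0.2},
]
text_segments = [
    {"happiness": 0.5, "sadness": 0.3},
]

print(aggregate_emotions(facial_frames, text_segments))
# → {'happiness': 0.6, 'sadness': 0.225}
```

In a real pipeline, the facial scores would arrive per video frame and the text scores per transcribed sentence, so a production version would also need to align the two streams by timestamp before averaging.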
We hope that our implementation of this idea helps train public speakers, people affected by socially inhibiting conditions, or anyone else willing to improve by observing themselves.
Table number: 18D