Imagine, for example, that you manage a call centre station and want to assure the quality of your workers' calls. But how do you measure the "quality" of each call? And how do you manage the quality of every call from every worker (there may be many)? This is where Sensiment comes in. Tracking call sentiment has never been easier. Sensiment combines two key technologies into a platform where users can input an audio stream and obtain sentiment data for that audio. The technology has many applications beyond call centres, including personal data collection about calls (for example, recording the emotion level of everyone you are on a call with) and live emotion data collection.

What it does

Sensiment extracts sentiment information from live or recorded audio and displays it in elegant visual charts. Applications include call centre quality assurance, letting individuals measure the emotions in conversations with their friends, and detecting domestic violence or abuse over text and/or live calls.

How we built it

We used Google's Speech-to-Text API to convert calls and recorded media to text, then used IBM's Watson NLU API to extract sentiment from the transcript. We exposed the results through a RESTful API built with the Python Flask library, and the data was then sent to the frontend built with React JS.
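The batch pipeline described above can be sketched roughly as follows. Here `transcribe_audio` and `analyze_sentiment` are hypothetical stand-ins for the Google Speech-to-Text and IBM Watson NLU client calls (the real client libraries and credentials would replace them); only the chaining logic is meant to be illustrative.

```python
# Sketch of the batch pipeline: audio bytes -> transcript -> sentiment.

def transcribe_audio(audio_bytes: bytes) -> str:
    # Placeholder: the real version would call the Google Speech-to-Text API.
    return "thank you for calling, how can I help you today"

def analyze_sentiment(text: str) -> dict:
    # Placeholder: the real version would call Watson NLU and return its
    # sentiment/emotion scores. This toy scorer just counts "positive" words.
    positive_words = {"thank", "help", "great"}
    words = text.split()
    score = sum(w in positive_words for w in words) / max(len(words), 1)
    return {"text": text, "sentiment": {"score": round(score, 2)}}

def sentiment_for_call(audio_bytes: bytes) -> dict:
    # Chain the two APIs: transcribe first, then analyze the transcript.
    transcript = transcribe_audio(audio_bytes)
    return analyze_sentiment(transcript)

result = sentiment_for_call(b"\x00fake-pcm-audio")
```

In the real app this chain sits behind a Flask route, so the React frontend only ever talks to one REST endpoint.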

Challenges we ran into

We struggled with connecting the frontend and backend over a WebSocket to measure the sentiment of a live video or voice call. We also struggled with managing the various APIs we used and connecting them all together into one application.
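The hard part of the live case is scoring audio as it arrives rather than after the call ends. A minimal sketch of that idea, assuming fixed-size chunks: the chunking is real, while `score_chunk` is a hypothetical stand-in for running the transcribe-then-analyze chain on each partial segment pushed over the WebSocket.

```python
import io

CHUNK_SIZE = 4  # bytes per chunk; real audio would use much larger frames

def iter_chunks(stream, size=CHUNK_SIZE):
    # Yield successive fixed-size chunks until the stream is exhausted.
    while True:
        chunk = stream.read(size)
        if not chunk:
            return
        yield chunk

def score_chunk(chunk: bytes) -> float:
    # Placeholder score; the real version would transcribe the chunk and
    # run sentiment analysis on the partial transcript.
    return len(chunk) / CHUNK_SIZE

# 10 bytes split into chunks of 4, 4, and 2.
scores = [score_chunk(c) for c in iter_chunks(io.BytesIO(b"0123456789"))]
```

Each score would be pushed back to the frontend as it is computed, which is what makes the chart "live".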

Accomplishments that we're proud of

We successfully completed the batch portion of the application, connecting both APIs in a chain and achieving our goal.

What we learned

We learned that live streaming of data is a very challenging problem, and we hope to find a better method to achieve it. We also learned a lot about the GCP Speech-to-Text API and were introduced to the IBM Watson NLU API.

What's next for Sensiment

We plan to finish the live portion of our app so that users can stream data from their calls or from their microphone and obtain live data about the emotions in their conversations.
