What it does
When the user calls our phone number, they can choose the social situation they would like to practise; the options are currently job interviews and small talk. Once they have selected one, they are connected to a designated Lex bot that simulates the conversation. The audio from the conversation is stored and analysed, and the user is texted a UUID for it. The user can then visit the main site and, using their UUID, review analytics for their conversations. They can practise as much as they want, wherever they are, over the phone, and review the results later.
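The menu step of that call flow can be sketched as the answer webhook our Node.js server would return to Nexmo. The NCCO action names follow Nexmo's Voice API; the event URL and hostname here are placeholders, not our actual endpoints:

```javascript
// Hypothetical sketch: Nexmo fetches an NCCO (call control object) from our
// server when a call comes in. We read the menu, then collect one digit;
// the /webhooks/choice path is illustrative.
function buildAnswerNcco(host) {
  return [
    {
      action: 'talk',
      text: 'Press 1 to practise a job interview, or 2 for small talk.',
      bargeIn: true
    },
    {
      // Collect a single DTMF digit; Nexmo POSTs it to the event URL,
      // where we would pick the matching Lex bot and connect the call.
      action: 'input',
      maxDigits: 1,
      eventUrl: [`https://${host}/webhooks/choice`]
    }
  ];
}
```

An Express route would then serve this with something like `res.json(buildAnswerNcco(req.hostname))`.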
How we built it
We used the Nexmo API for the phone number and their Lex connector to connect calls to a Lex bot. Microsoft Cognitive Services handled sentiment analysis and key-phrase detection. The webhooks for this project and the main website were hosted on a Node.js server. The Google Speech API handled audio-to-text conversion for transcription and analysis.
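As a rough illustration of the analysis step, here is how a transcript might be packaged for the Cognitive Services sentiment endpoint. The URL path and response shape follow the Text Analytics v2 REST API; the region and key are placeholders:

```javascript
// Hedged sketch of the sentiment request we send once a transcript exists.
// AZURE_TEXT_KEY and the region are assumptions, not our real config.
function buildSentimentRequest(transcript, region = 'westeurope') {
  return {
    url: `https://${region}.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment`,
    headers: {
      'Ocp-Apim-Subscription-Key': process.env.AZURE_TEXT_KEY || 'PLACEHOLDER',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      documents: [{ id: '1', language: 'en', text: transcript }]
    })
  };
}

// The API scores each document in [0, 1]; closer to 1 means more positive.
function extractScore(responseJson) {
  return responseJson.documents[0].score;
}
```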
Challenges we ran into
Connecting everything together, and faults in our AWS environment.
Accomplishments that we're proud of
It works and is so, so, so cool.
What we learned
We can make cool things.
What's next for PhonyConvo
More useful situation-based analytics: for mock interviews, the user could specify topics they want to mention and receive results showing how well they covered them. Also, accurately separating the chatbot's speech from the user's speech during text analysis.
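One possible shape for that planned interview analytic, as a sketch: the user lists topics up front, and we report which ones appeared in their transcribed speech. The function name and matching strategy are illustrative, not an implemented feature:

```javascript
// Hypothetical topic-coverage check over a speech transcript.
// A real version would need stemming/fuzzy matching; this does a
// plain case-insensitive substring test per topic.
function topicCoverage(transcript, topics) {
  const text = transcript.toLowerCase();
  return topics.map(topic => ({
    topic,
    mentioned: text.includes(topic.toLowerCase())
  }));
}
```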