As university students who have interviewed for summer jobs, we know how nerve-racking it is to present yourself in the best light. Body language and non-verbal cues are paramount in impressing an interviewer, so we set out to build software that lets users practice with mock interviews and get feedback based on computer vision and speech analysis.

What it does

The current iteration asks the user two interview questions and displays their emotions, eye contact, level of movement, number of stall words used, and pace of speech in real time on a dashboard. Once the session is complete, the user sees a summary of all their results along with tips on which areas to improve.
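As a rough illustration of the speech-side metrics, stall-word count and speaking pace can be derived from a transcript and the elapsed time; the function name and filler-word list below are illustrative, not the actual implementation:

```javascript
// Hypothetical sketch: compute stall-word count and words-per-minute
// from a transcript string and the session duration in seconds.
const STALL_WORDS = new Set(['um', 'uh', 'like', 'so', 'basically']); // example list

function speechMetrics(transcript, seconds) {
  // Split on whitespace and strip trailing punctuation before matching.
  const words = transcript.toLowerCase().split(/\s+/).filter(Boolean);
  const stallCount = words.filter(
    (w) => STALL_WORDS.has(w.replace(/[.,!?]/g, ''))
  ).length;
  const wordsPerMinute =
    seconds > 0 ? Math.round(words.length / (seconds / 60)) : 0;
  return { stallCount, wordsPerMinute };
}
```

In practice the transcript would come back from the speech API, so a per-phrase version of this could update the dashboard in real time.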

How we built it

We built this on a Node server, with Express handling routing and communication between the front end and the back end, a MongoDB database for persistence, and Microsoft's Face API and Bing Speech API for the vision and speech analysis.
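On the vision side, the Face API's detect endpoint (called with the emotion attribute requested) returns an array of faces, each with per-emotion confidence scores; a small helper can reduce that to the dominant emotion shown on the dashboard. This is a sketch, not our exact code:

```javascript
// Pick the highest-scoring emotion for the first detected face from a
// Face API detect-style response. Returns null if no face was found.
function dominantEmotion(faces) {
  if (!Array.isArray(faces) || faces.length === 0) return null;
  const scores = faces[0].faceAttributes.emotion;
  // Reduce over the emotion keys, keeping the one with the largest score.
  return Object.keys(scores).reduce((a, b) => (scores[b] > scores[a] ? b : a));
}
```

The same shape of helper works for any per-frame attribute the API returns, which keeps the dashboard code independent of the raw API payload.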

Challenges we ran into

Accomplishments that we're proud of

What we learned

What's next for interView
