Interviews are hard to prepare for. With the assistance of technology, we hope to make the process more informative.

What it does

Using input from a webcam and a Fitbit, we give instant feedback on the user's emotion, body language, and heart rate.

How I built it

We used OpenPose to detect the user's pose and then computed the least-squares solution that maps it onto our desired pose. Comparing the two tells us how similar or different the user's pose is to the desired one. In addition, TensorFlow and Keras were used to train a classifier that detects emotion from facial expressions. To run the classifier and OpenPose, we wrote a Python script. The Fitbit and the Fitbit SDK were used to write a heart-rate-detecting app. To hold our app together, we used a Node.js server hosted on Azure. The data the Node.js server aggregates is then displayed on a React app in real time.
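The pose-matching step can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: it assumes both the detected pose and the desired pose are arrays of (x, y) joint coordinates in the same joint order, and the function name is made up for this example.

```python
import numpy as np

def pose_similarity(detected, desired):
    """Fit an affine transform mapping `detected` onto `desired` by
    least squares, then return the mean residual per joint
    (lower = more similar). Fitting the transform first normalizes
    away translation and scale, so only posture differences count."""
    detected = np.asarray(detected, dtype=float)
    desired = np.asarray(desired, dtype=float)
    # Augment with a column of ones so the transform can include translation.
    A = np.hstack([detected, np.ones((len(detected), 1))])
    # Least-squares solution X minimizing ||A @ X - desired||^2.
    X, *_ = np.linalg.lstsq(A, desired, rcond=None)
    residual = A @ X - desired
    return float(np.sqrt((residual ** 2).sum(axis=1)).mean())
```

A pose that is a shifted or scaled copy of the target scores near zero, while a genuinely different posture leaves a residual the least-squares fit cannot remove.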

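For the emotion classifier, a small convolutional network is the kind of model the Keras step above describes. Everything concrete here is an assumption for illustration: the 48x48 grayscale input, the seven emotion classes (as in the common FER2013 dataset), and the layer sizes are not taken from the project.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_emotion_classifier(num_classes=7):
    """A small CNN for facial-expression classification (illustrative only)."""
    model = keras.Sequential([
        layers.Input(shape=(48, 48, 1)),       # 48x48 grayscale face crop
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Once trained on labeled face images, a model like this can classify each webcam frame in real time.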
Challenges I ran into

When the hackathon started, we initially wanted to run TensorFlow on the OpenPose output, but we soon realized this was infeasible given the time constraints and our lack of preparation. This hurt team morale and seriously slowed our progress.

GitHub Repo
