We were inspired by our personal experience of not making it past video interviews to the next stage of the recruitment process. The project aims to simplify the early stages of recruitment by providing a quick review of a candidate's presentation and a few key qualities.


We conducted surveys on different platforms to understand whether people would be interested in using an application that gives feedback to help them prepare for video interviews; the results are posted below.


Figure 1. Survey responses from Facebook poll by Monash University students


Figure 2. Survey responses from Google form by Unihack-2019 contestants


Figure 3. Survey responses from other social media

The survey results, drawn from a diverse group of respondents, show that the majority of candidates applying for various job positions would like to use an application to review their basic video interview skills.

What it does

It allows candidates to take a mock interview very similar to a live video interview and gives them live feedback on their emotions, eye contact and other key qualities. It also allows candidates to receive human feedback on their recordings.

How we built it

We built the front end with React, using the following libraries:

  • Materialize
  • Axios
  • React-Webcam

We also used the Google Cloud Vision API to analyse the image frames, calling its REST API from JavaScript to get data about each frame. In addition, we wrote microservices in Python Flask, deployed to perform textual analysis of the recorded speech. However, the microservices could not be integrated because time constraints prevented us from completing speech-to-text conversion in the given time; the service itself is still functional.
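The per-frame analysis can be sketched as follows. This is a minimal illustration of the shape of a Vision `images:annotate` request, not our exact code; `buildVisionRequest`, `frameBase64` and `API_KEY` are illustrative names.

```javascript
// Build the JSON body for a Cloud Vision `images:annotate` request that
// asks for face detection on a single captured webcam frame. FACE_DETECTION
// responses include emotion likelihoods (joy, sorrow, anger, surprise).
function buildVisionRequest(frameBase64) {
  return {
    requests: [{
      image: { content: frameBase64 },              // base64-encoded frame
      features: [{ type: 'FACE_DETECTION', maxResults: 1 }],
    }],
  };
}

// From the React app the payload can be posted with Axios, e.g.:
// axios.post(`https://vision.googleapis.com/v1/images:annotate?key=${API_KEY}`,
//            buildVisionRequest(frame))
//   .then(res => res.data.responses[0].faceAnnotations);
```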

Challenges we ran into

Android did not have any libraries for recording a video while simultaneously capturing image frames, so we faced a trade-off between providing real-time feedback and the feature where someone else can review a recording and provide feedback. We then decided to migrate to React JS, which let us record a stream and also capture frames using the built-in browser APIs. However, we ran into issues converting a BLOB stream to a video format, so we decided to focus on real-time feedback via frame capture. A few small configuration issues with package.json and Android dex files came up too.
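For reference, the standard browser recording approach we were working toward can be sketched as below. The recorder wiring is browser-only and shown in comments; `assembleRecording` is an illustrative helper name, not our exact code.

```javascript
// MediaRecorder emits a recording as a sequence of Blob chunks via its
// `dataavailable` event; joining the chunks into one Blob with a video
// MIME type yields a playable recording.
function assembleRecording(chunks) {
  return new Blob(chunks, { type: 'video/webm' });
}

// In the browser (not runnable outside it):
// const recorder = new MediaRecorder(webcamStream);
// const chunks = [];
// recorder.ondataavailable = (e) => chunks.push(e.data);
// recorder.onstop = () => {
//   const url = URL.createObjectURL(assembleRecording(chunks));
//   // attach `url` to a <video> element so a reviewer can play it back
// };
```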

We also faced the challenge of scope creep, but thanks to our mentors the team stayed very focused. The opportunities here are endless (e.g. live streaming and feedback from humans), but a narrow scope ensures a good product in the given timeframe.

Design is something we focused on, but we had to seek a lot of help to arrive at a usable UI. Design sprints were very helpful, although the team couldn't initially agree on a particular design; we did eventually find a resolution.

Accomplishments that we're proud of

Completing multiple iterations in two different frameworks (Android and Web) within the 24 hours, and creating a prototype that is functional given the timeframe. Working through the technical challenges described above while staying up all night wasn't an easy combination, but the team's perseverance paid off in the end and everyone contributed to the project.

What we learned

We learnt a lot about managing scope and expectations upfront. It was a team learning experience to come together from different backgrounds and work with each other. We also learnt about the limitations of technology and how they can be a drawback; hence it is vital for a team to revise expectations and manage time. It was a great learning experience for us all, individually and as a team.

What's next for iValuate

We want to extend iValuate to give live feedback and to conduct live interviews. We are looking to partner with companies seeking to hire candidates. We also see broader applications, such as helping individuals improve their public speaking.

Built With

  • React
  • Materialize
  • Axios
  • React-Webcam
  • Google Cloud Vision API
  • Python
  • Flask