All four developers of this app are new college students, so interviews have become a regular part of our lives. From college admissions to internship opportunities, they are a crucial step in nearly every selection process. With Covid-19 forcing interviews online and virtual interviews becoming a staple of the job and internship search, we decided to create InterviewOwl so that students can practice the interview process and receive constructive feedback based on quantitative measures.

What it does

InterviewOwl simulates a virtual interview experience. The user first selects the questions they wish to include from a large list of the most commonly asked interview questions. Once the user starts the interview, the app asks these questions one by one. The user can view themselves on screen as they speak, simulating a typical online interview. After stopping the interview, the user receives a report on their performance: their volume and number of pauses, how many times they used a filler word, and an assessment of their eye contact based on their eye and head movements during the recording session. The interviewee also receives a transcript of their responses.
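To illustrate the filler-word portion of the report, here is a minimal sketch of how a transcript could be scanned for fillers. The function name and the word list are our own choices for this example, not InterviewOwl's exact implementation:

```javascript
// Words treated as fillers in this sketch (an assumed list, not InterviewOwl's).
const FILLER_WORDS = new Set(["um", "uh", "like", "so", "actually", "basically"]);

// Count how many times each filler word appears in a transcript.
function countFillerWords(transcript) {
  const words = transcript.toLowerCase().match(/[a-z']+/g) || [];
  const counts = {};
  for (const word of words) {
    if (FILLER_WORDS.has(word)) {
      counts[word] = (counts[word] || 0) + 1;
    }
  }
  return counts;
}
```

For example, `countFillerWords("Um, I was like, um, nervous")` reports two uses of "um" and one of "like".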

How we built it

We used Node.js and Express for backend development. For the volume and filler-word analysis, we used the Web Speech API, a JavaScript API that allowed us to track volume inflections and produce a speech-to-text transcript. For the eye-tracking analysis, we used a JavaScript API called GazeCloudAPI. For frontend development, we used CSS.
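As a rough sketch of the pause-detection side of the volume analysis, consider volume levels sampled at a fixed interval (say, every 100 ms): a pause can be counted whenever the level stays below a threshold for long enough. This is our own simplification for illustration, not InterviewOwl's exact code, and the threshold and duration values are assumptions:

```javascript
// Count pauses in a stream of volume samples (0.0–1.0), where a pause is a
// run of at least `minSamples` consecutive samples below `threshold`.
function countPauses(levels, threshold = 0.05, minSamples = 5) {
  let pauses = 0;
  let quiet = 0; // length of the current quiet stretch
  for (const level of levels) {
    if (level < threshold) {
      quiet += 1;
      if (quiet === minSamples) pauses += 1; // count each quiet stretch once
    } else {
      quiet = 0;
    }
  }
  return pauses;
}
```

With 100 ms samples and `minSamples = 5`, any silence of half a second or more counts as one pause, no matter how long it continues.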

Challenges we ran into

We split up the tasks so that each developer focused on either a report feature or the layout of the website. The biggest challenge we faced was merging all of our code: since we were not able to work in person, merging and re-formatting took a while to complete. Another challenge was getting the report features, such as the filler-word, volume, and eye-tracking analyses, to work properly and return accurate data; since some of us are new to JavaScript, implementing these features successfully was difficult. Additionally, while developing the eye-tracking feature, we ran out of GazeCloudAPI calls.

Accomplishments that we're proud of

One accomplishment we are proud of is developing a product that would be useful to a wide audience; we can even picture ourselves using InterviewOwl frequently to prepare for future interviews. We are also proud of all the feedback our app provides on a user's interview. We believe this allows interviewees to consistently improve their performance and track their progress.

What we learned

We did not have much experience with JavaScript, so developing InterviewOwl gave us hands-on experience with JavaScript functions and classes. We also learned how to use JavaScript APIs to enhance our programs, and how to work together and collaborate efficiently while remote.

What's next for InterviewOwl

In the future, we hope to increase the accuracy of InterviewOwl's report features, especially its ability to detect volume changes from the interviewee. We would also like to allow users to save their reports and refer back to them whenever they wish.
