Inspiration

Interviews can be scary. Whether in person or over Zoom, interviewing well takes practice. Communication is key in any interview, but interviewees often struggle to maintain open, positive body language while focusing on the quality and content of their answers. This is especially true for technical interviews, where your interpersonal skills are tested alongside your understanding of technical concepts. The best interview advice addresses both the substance of an interviewee's answer and its delivery, because at heart an interview is a conversation, and no conversation is complete without human expression.

What it does

IntervYou is a self-paced interview practice website. Users select the kind of interview questions they're preparing for and record themselves answering each question. The video and audio recording is sent to Hume AI, an AI toolkit for understanding human emotional expression, and to OpenAI's large language model, ChatGPT. Hume AI's facial expression and speech prosody models capture and evaluate the nuances of the user's facial movement and vocal expression. IntervYou then identifies moments in the user's answer where they appeared to experience intense emotions, which the user can review. In addition, IntervYou sends a transcript of the response (uhhs and ums included!) to ChatGPT to assess the accuracy and strength of the user's answer. After answering three questions, users receive this advice from ChatGPT alongside the emotional insights from Hume AI.
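To give a feel for the thresholding step described above, here is a minimal sketch of pulling "intense" moments out of a nested, Hume-style prediction payload. The payload shape, field names, and threshold value are illustrative assumptions, not the actual Hume AI response schema.

```python
# Hypothetical sketch: filter nested emotion predictions by a significance
# threshold. The data layout below is assumed, not Hume AI's real schema.

SIGNIFICANCE_THRESHOLD = 0.6  # assumed cutoff for an "intense" emotion score

def extract_significant_emotions(predictions, threshold=SIGNIFICANCE_THRESHOLD):
    """Return (timestamp, emotion, score) tuples whose score clears the threshold."""
    significant = []
    for frame in predictions:                      # one entry per analyzed segment
        for emotion in frame.get("emotions", []):  # each entry has a name and a score
            if emotion["score"] >= threshold:
                significant.append((frame["time"], emotion["name"], emotion["score"]))
    return significant

# Example payload shaped like a simplified batch-job result
sample = [
    {"time": 1.2, "emotions": [{"name": "Calmness", "score": 0.81},
                               {"name": "Anxiety", "score": 0.12}]},
    {"time": 4.7, "emotions": [{"name": "Anxiety", "score": 0.74}]},
]

print(extract_significant_emotions(sample))
# → [(1.2, 'Calmness', 0.81), (4.7, 'Anxiety', 0.74)]
```

Collapsing the deeply nested raw output into flat `(time, emotion, score)` tuples is what makes the review step cheap: the UI only needs to render a short list of flagged moments.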

How we built it

Front-End: Reflex with custom JavaScript and React components

Back-End: Reflex backend using the Hume AI and ChatGPT APIs
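Since the back end fans out to two external APIs per answer, the calls can run concurrently rather than back to back. The sketch below illustrates that pattern with a thread pool; the function names and return shapes are placeholders, not the project's actual code.

```python
# Illustrative sketch (placeholder functions, not the real API calls): run the
# Hume AI and ChatGPT requests on separate threads so neither blocks the other.
from concurrent.futures import ThreadPoolExecutor

def fetch_hume_predictions(recording):
    # Stand-in for the real Hume AI batch-job submit/poll cycle.
    return {"emotions": f"predictions for {recording}"}

def fetch_chatgpt_feedback(transcript):
    # Stand-in for the real ChatGPT API call on the transcript.
    return {"advice": f"feedback on {transcript!r}"}

def analyze_answer(recording, transcript):
    """Kick off both analyses at once and wait for both results."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        hume_future = pool.submit(fetch_hume_predictions, recording)
        gpt_future = pool.submit(fetch_chatgpt_feedback, transcript)
        return hume_future.result(), gpt_future.result()

emotions, advice = analyze_answer("answer1.mp4", "I would use a hash map...")
print(emotions, advice)
```

With real network calls, the wall-clock cost of one answer drops from the sum of the two request times to roughly the slower of the two.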

Challenges we ran into

  • Hume AI Prediction Parsing: Initially, the volume of results generated by the Hume AI batch job was overwhelming. With several layers of nesting, we were unsure how to access and best capture the raw data. After several attempts at mapping out the nested data structures and relying on the docs for a baseline understanding of the data's qualitative significance, we set thresholds for significant data points and saved them in a manageable data structure.
  • Learning a new web development framework
  • Optimizing time cost to handle multiple API requests and process the data
  • Figuring out how to factor both the logical and emotional content of a video's audio-visual components into the feedback we give the user

Accomplishments that we're proud of

  • Created a multi-threaded pipeline for our API data handlers to speed up and increase throughput across multiple asynchronous processes
  • Integrated Hume AI's sentiment analysis with ChatGPT's logical analysis to advise the interviewee on both soft and hard skills

What we learned

  • How to develop a full-stack web application in Reflex
  • Using multi-threading and asynchronous calls in a web development environment
  • How to use a sentiment analysis API and an LLM to solve a real-world problem

What's next for IntervYou

  • Improve UI/UX of the website
  • Gather user feedback
  • Extend Hume AI usage to include other insightful models, e.g., Vocal Burst
  • Continue improving the web app's algorithms for faster and better results

Built With
