Introduction
Hello! We are William, Grant, and Malcolm, and we are here to demonstrate the applicability of Large Language Models (LLMs) to dynamic conversations between humans and AI.
Inspiration
Every day, new technologies come out to help students prepare for technical interviews. Tutorials and walkthroughs are a consistent feature, but these applications miss out on a large chunk of the recruiting process: the behavioral interview. Moreover, technical interview tools are inherently narrow in scope, whereas a behavioral interview guide has wide applications.
What it does
Our app lets users practice interviewing by answering questions on screen and receiving feedback and new follow-up questions. A timer helps users keep track of how long they take to answer, and the app scores them on their words per minute, their rate of filler words, and their pauses in speech. It also sends their response to an LLM as a prompt, asking for dynamic feedback and a holistic grade.
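As a rough illustration of the scoring step, here is a minimal sketch of how words per minute and filler-word rate could be computed from a transcript; the function name, filler list, and structure are illustrative assumptions, not our exact implementation:

```typescript
// Hypothetical scoring sketch; the filler list and names are illustrative.
const FILLER_WORDS = new Set(["um", "uh", "like", "basically", "actually"]);

interface SpeechScore {
  wordsPerMinute: number;
  fillerRate: number; // fraction of spoken words that are fillers
}

function scoreTranscript(transcript: string, elapsedSeconds: number): SpeechScore {
  const words = transcript.toLowerCase().split(/\s+/).filter(Boolean);
  const fillers = words.filter((w) => FILLER_WORDS.has(w.replace(/[^a-z]/g, "")));
  return {
    wordsPerMinute: (words.length / elapsedSeconds) * 60,
    fillerRate: words.length ? fillers.length / words.length : 0,
  };
}
```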
How we built it
We built it by beginning on the create-react-app template and jumping off with different libraries. Our service is front-end only, which minimizes the long transfer times for audio files: the program stays lightweight and can accommodate large models and files without excessive back-and-forth between a server and the front end. The UI is built from various React components, some of which additionally handle logic and API calls. We handle routing with the react-router-dom library, which, combined with webpack, creates one bundled HTML file that still allows for different paths and routes.
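A minimal sketch of that routing setup, assuming react-router-dom v6; the page components and paths here are placeholders rather than our actual code:

```tsx
import { BrowserRouter, Routes, Route } from "react-router-dom";

// Placeholder pages; the real app renders the interview UI here.
const Home = () => <h1>Interview4.tech</h1>;
const Interview = () => <h1>Practice session</h1>;

export default function App() {
  return (
    // One bundled HTML file serves every path; the router swaps components client-side.
    <BrowserRouter>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/interview" element={<Interview />} />
      </Routes>
    </BrowserRouter>
  );
}
```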
Challenges we ran into
The create-react-app template comes with a large number of unnecessary packages that disrupt imports, and we had to handle the asynchronous nature of API calls on top of the additional time required to transcribe audio files. Additionally, passing props between React components, while efficient, can be complicated and involves carefully monitored global state. We leaned heavily on React features like hooks and automatic re-rendering to provide dynamic feedback, as sketched below.
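A sketch of the hook pattern this implies: a custom hook that kicks off transcription when the audio changes and guards against out-of-order responses. The transcribe helper is a hypothetical stand-in for the real speech-to-text call:

```tsx
import { useEffect, useState } from "react";

// Hypothetical helper standing in for the real speech-to-text call.
declare function transcribe(audioUrl: string): Promise<string>;

function useTranscript(audioUrl: string | null): string | null {
  const [transcript, setTranscript] = useState<string | null>(null);

  useEffect(() => {
    if (!audioUrl) return;
    let cancelled = false;
    transcribe(audioUrl).then((text) => {
      // Ignore responses that arrive after the audio URL has changed.
      if (!cancelled) setTranscript(text);
    });
    return () => {
      cancelled = true;
    };
  }, [audioUrl]);

  return transcript;
}
```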
Accomplishments that we're proud of
- Handles speech-to-text using AssemblyAI (see the sketch below)
- Asks questions based on the context of the user's responses using NLP
- Gives feedback based on the user's words per minute, among other factors
- Provides audio recording functionality and a timer to simulate interview pressure
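For the speech-to-text piece, here is a condensed sketch of the AssemblyAI upload-then-poll flow, following its public v2 REST API; error handling and retries are omitted, and this is not our exact code:

```typescript
const API = "https://api.assemblyai.com/v2";

async function transcribeAudio(audio: Blob, apiKey: string): Promise<string> {
  const headers = { authorization: apiKey };

  // 1. Upload the raw audio and get back a temporary URL.
  const upload = await fetch(`${API}/upload`, { method: "POST", headers, body: audio })
    .then((r) => r.json());

  // 2. Request a transcript for the uploaded audio.
  const job = await fetch(`${API}/transcript`, {
    method: "POST",
    headers: { ...headers, "content-type": "application/json" },
    body: JSON.stringify({ audio_url: upload.upload_url }),
  }).then((r) => r.json());

  // 3. Poll until the transcription job completes.
  while (true) {
    const status = await fetch(`${API}/transcript/${job.id}`, { headers })
      .then((r) => r.json());
    if (status.status === "completed") return status.text;
    if (status.status === "error") throw new Error(status.error);
    await new Promise((res) => setTimeout(res, 1000));
  }
}
```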
What we learned
- Audio files are slow to send and receive, which hurts instant feedback
- Different types of asynchronous calls are liable to resolve at different times, so developers cannot depend on a fixed alternating order between the calls (see the sketch after this list)
- NLP text-to-text is much faster and offers high-quality responsiveness
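One way to illustrate the ordering lesson: instead of assuming which request finishes first, gather parallel calls with Promise.all, which hands results back in fixed positions regardless of arrival order. The promise names here are hypothetical:

```typescript
// Two requests fired together may resolve in either order; Promise.all
// lets us consume them in a fixed order anyway (names hypothetical).
async function gatherResults(
  transcriptPromise: Promise<string>,
  feedbackPromise: Promise<string>
) {
  const [transcript, feedback] = await Promise.all([
    transcriptPromise,
    feedbackPromise,
  ]);
  return { transcript, feedback };
}
```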
What's next for Interview4.tech
- Video feedback on camera quality, appropriate dress, gestures, and eye contact
- User sign-in to accumulate feedback and generate a profile
- Piece-by-piece dictation to improve latency