Inspiration
The inspiration for CodeNudge came from our own journey of preparing for technical interviews. Coding platforms and mock interviews are helpful, but we saw a gap: personalized feedback that simulates real interview scenarios. We wanted a tool that would not only evaluate the correctness of our solutions but also push us to think more deeply about efficiency, clarity, and optimization, much like an actual technical interviewer would. This sparked the idea of CodeNudge—an AI-powered interviewer designed to nudge you toward better solutions in coding rounds.
What it does
CodeNudge simulates an AI-driven mock technical interview. It presents coding problems, evaluates your verbal explanations, and provides insightful feedback—just like a real interviewer. Beyond just correctness, CodeNudge evaluates your approach based on algorithmic efficiency, clarity of thought, and your use of data structures. It also generates follow-up questions and prompts you to refine your solutions, offering nudges to help you ace the interview.
How we built it
The front-end uses Bootstrap and JavaScript to create a responsive, seamless user experience. The back-end is built with FastAPI and Python to keep it efficient and easy to scale. The AI logic is powered by OpenAI’s GPT-4o, which processes candidate responses, evaluates them, and generates detailed feedback and follow-up questions. We also integrated OpenAI’s Whisper API for accurate speech-to-text conversion, enabling a more immersive interview experience. The AI-powered technical interviewer itself was implemented in LangFlow, which orchestrates the flow between the LLM and the application.
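To illustrate the evaluation flow, here is a minimal sketch of how the back-end might frame GPT-4o as an interviewer. The function name, rubric, and prompt wording are our illustration, not the actual CodeNudge code:

```python
# Illustrative sketch of the evaluation step; names and prompt wording
# are hypothetical, not taken from the CodeNudge codebase.

RUBRIC = ["correctness", "algorithmic efficiency", "clarity", "data structure choices"]

def build_interviewer_prompt(problem: str, transcript: str) -> str:
    """Compose the prompt that frames the LLM as a technical interviewer."""
    criteria = ", ".join(RUBRIC)
    return (
        f"You are a technical interviewer. Evaluate the candidate's spoken "
        f"explanation on: {criteria}. End with one follow-up question that "
        f"nudges them toward a better solution.\n\n"
        f"Problem:\n{problem}\n\n"
        f"Candidate explanation (Whisper transcript):\n{transcript}"
    )

prompt = build_interviewer_prompt(
    "Find the first non-repeating character in a string.",
    "I'd use two nested loops to compare every pair of characters.",
)
# The prompt would then be sent to GPT-4o via the OpenAI chat completions
# API, and the reply surfaced to the candidate as feedback.
```

In the real app, a FastAPI endpoint would wrap this: receive the Whisper transcript, build the prompt, call the model, and return the feedback to the front-end.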
Challenges we ran into
Integrating the Whisper API for speech-to-text conversion felt like trying to teach a cat to fetch—possible, but definitely not easy! After some focused effort, though, we got it working smoothly. Implementing follow-up prompts within LangFlow was another challenge, especially since it was our first time using the framework, but through trial and error and a lot of learning we got the feature working effectively. The OpenAI integration in the back-end was the real head-scratcher: after extensive debugging, a deep dive into the documentation, and a few cups of coffee, we optimized the API interactions and ensured smooth real-time responses.
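One concrete lesson from the Whisper work: the transcription endpoint only accepts certain audio formats and caps uploads at 25 MB, so validating recordings before calling the API avoids a round trip that is guaranteed to fail. A hypothetical pre-check (the helper name is ours, not from the codebase):

```python
# Hypothetical upload pre-check before calling the Whisper API; the
# constraints below match OpenAI's documented limits for transcription.
WHISPER_FORMATS = {"flac", "m4a", "mp3", "mp4", "mpeg", "mpga",
                   "oga", "ogg", "wav", "webm"}
MAX_BYTES = 25 * 1024 * 1024  # 25 MB API limit

def can_transcribe(filename: str, size_bytes: int) -> bool:
    """Return True if the upload is safe to send to the Whisper API."""
    ext = filename.rsplit(".", 1)[-1].lower()
    return ext in WHISPER_FORMATS and 0 < size_bytes <= MAX_BYTES

# Only after this check passes would the app call the transcription
# endpoint, e.g. client.audio.transcriptions.create(model="whisper-1", ...).
```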
Accomplishments that we're proud of
We’re incredibly proud of deploying an end-to-end product in less than a day! When we pitched our idea to the mentor, he pointed out that the scope might be a challenge, given that we had only 24 hours to build a functioning MVP. Despite the initial ambitious vision for CodeNudge, we quickly adapted and pared it down to its essential components to focus on experimentation. Tackling such a challenging endeavor, learning unfamiliar tools in record time, and successfully bringing our MVP to life is something we’re truly proud of. Through this project, we’ve learned so much and, most importantly, we believe in our idea. We see CodeNudge evolving beyond the hackathon, growing into a tool that can help students and professionals alike in their journey to career readiness.
What we learned
Through this project, we learned a great deal about designing AI-driven applications, with prompt engineering being a crucial factor in unlocking the potential of an LLM’s output. Crafting precise prompts was key to generating accurate and meaningful responses from GPT-4o. We also gained valuable experience working with the OpenAI Whisper API for speech-to-text conversion, which provided insights into handling complex AI integrations. Additionally, managing real-time interactions between the front-end and back-end taught us essential lessons in performance optimization, ensuring smooth communication between the AI and the user. These experiences deepened our understanding of how to simulate the flow of technical interviews in a digital environment while maintaining a seamless user experience.
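One prompt-engineering pattern that paid off (shown here as a generic sketch, not the exact CodeNudge prompt or schema) is asking the model to reply in JSON so the back-end can parse scores and follow-up questions reliably instead of scraping free text:

```python
import json

# Illustrative structured-output prompt; the field names and 1-5 scale
# are our assumption for this sketch.
SYSTEM_PROMPT = (
    "Respond ONLY with JSON of the form "
    '{"scores": {"efficiency": 1-5, "clarity": 1-5}, '
    '"feedback": "...", "follow_up": "..."}'
)

def parse_feedback(raw: str) -> dict:
    """Parse the model's JSON reply, failing loudly on malformed output."""
    data = json.loads(raw)
    if "scores" not in data or "follow_up" not in data:
        raise ValueError("model reply missing required fields")
    return data

# The shape a well-prompted reply would take:
sample = (
    '{"scores": {"efficiency": 2, "clarity": 4}, '
    '"feedback": "Nested loops are O(n^2); consider a hash map.", '
    '"follow_up": "How would a single pass with a counter work?"}'
)
result = parse_feedback(sample)
```

Validating the parsed fields up front made it much easier to keep the front-end responsive, since malformed model output is caught before it reaches the user.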
What's next for CodeNudge
We plan to expand CodeNudge by adding more coding problems with varying difficulty levels and implementing detailed performance metrics, allowing users to track their progress over time. Additionally, we aim to broaden CodeNudge to cover multiple technical tracks, including software engineering, data science, and other in-demand tech roles. Given the limited time during the initial build, we focused on assessing the verbal approach candidates used to solve problems. The bigger vision is to simulate a code-based interview where candidates can submit their code for evaluation, and the model will assess the neatness and efficiency of the code.
Moving forward, we plan to refine this further and enhance the flow of hints and follow-up questions. We also want to introduce three distinct interviewer avatars, with an option for random selection, to create a more realistic and dynamic interview experience. Expanding the question database and improving the AI’s ability to provide personalized suggestions based on each candidate’s unique coding style are key priorities as we continue to grow CodeNudge into a comprehensive interview preparation tool.
Built With
- fastapi
- gpt-4o
- langflow
- python
- whisper