Inspiration
Preparing for technical interviews can be intimidating, especially for students who don’t have access to frequent mock interviews or detailed feedback. Many candidates know the material but struggle with communication, confidence, and structuring their answers under pressure. We were inspired to create RubberDuck2.0 as a tool that lets students practice interviews in a safe environment, reflect on their performance, and improve both their technical and behavioral skills before a real interview.
What it does
RubberDuck2.0 is an AI-powered interview practice platform designed for software engineering and computer science students.
The platform allows users to:
- Practice technical and behavioral interview questions
- Record spoken answers, which are then transcribed to text
- Receive AI-generated feedback and scoring on their responses
- Solve LeetCode-style coding questions directly on the platform
- Get vision-based coaching insights such as presence, smile, and professionalism
By combining spoken explanations, written code, and feedback, RubberDuck2.0 simulates a realistic interview workflow.
How we built it
We built RubberDuck2.0 as a web application using:
- React and TypeScript for the frontend
- Vite for fast development and hot reloading
- Browser APIs (MediaRecorder, getUserMedia) for audio recording and camera access
- Speech-to-text to transcribe recorded answers
- Large Language Models (LLMs) to generate interview questions, evaluate responses, and provide feedback
- An integrated coding editor to support LeetCode-style technical interviews
- Lightweight computer vision metrics to provide real-time behavioral coaching tips
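The audio-recording flow above can be sketched roughly as follows. This is a minimal illustration using the standard MediaRecorder and getUserMedia browser APIs; `recordAnswer` and `pickMimeType` are hypothetical names, not the project's actual code:

```typescript
// Choose the first audio MIME type the browser reports as supported.
// (Pure helper so the selection logic is easy to test in isolation.)
function pickMimeType(
  candidates: string[],
  isSupported: (type: string) => boolean
): string | undefined {
  return candidates.find(isSupported);
}

// Hypothetical sketch: capture microphone audio for a fixed duration
// and resolve with a single Blob containing the recorded answer.
async function recordAnswer(durationMs: number): Promise<Blob> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const mimeType = pickMimeType(
    ["audio/webm;codecs=opus", "audio/webm", "audio/mp4"],
    (t) => MediaRecorder.isTypeSupported(t)
  );
  const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : undefined);
  const chunks: Blob[] = [];

  return new Promise((resolve) => {
    recorder.ondataavailable = (e) => {
      if (e.data.size > 0) chunks.push(e.data);
    };
    recorder.onstop = () => {
      stream.getTracks().forEach((t) => t.stop()); // release the microphone
      resolve(new Blob(chunks, { type: recorder.mimeType }));
    };
    recorder.start();
    setTimeout(() => recorder.stop(), durationMs);
  });
}
```

A React component could call something like `recordAnswer(60_000)` when the user finishes speaking and hand the resulting Blob to the transcription step.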
Challenges we ran into
- Achieving reliable speech-to-text accuracy, as early transcription attempts often failed to correctly capture spoken answers
- Experimenting with ElevenLabs for speech processing, which did not perform well for our use case, leading us to switch to OpenAI’s Speech-to-Text API
- Accurately detecting facial expressions such as seriousness and subtle smiles, which proved difficult due to lighting, camera quality, and expression variability
- Attempting to build a live, real-time conversation with the AI using LiveKit, which introduced integration complexity and stability issues
- Deciding to rely on Gemini’s API for interview flow and question generation after it proved more reliable within the hackathon timeframe
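The switch to OpenAI's Speech-to-Text API might look roughly like this: a hedged sketch against OpenAI's public `/v1/audio/transcriptions` endpoint, where `transcribeAnswer` and `authHeader` are illustrative names and the API key should be injected from a server-side environment rather than shipped to the browser:

```typescript
// Pure helper: build the Authorization header value for the request.
function authHeader(apiKey: string): string {
  return `Bearer ${apiKey}`;
}

// Hypothetical sketch: send a recorded audio Blob to OpenAI's
// transcription endpoint and return the transcript text.
async function transcribeAnswer(audio: Blob, apiKey: string): Promise<string> {
  const form = new FormData();
  form.append("file", audio, "answer.webm");
  form.append("model", "whisper-1"); // assumed model choice for this sketch

  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: authHeader(apiKey) },
    body: form,
  });
  if (!res.ok) throw new Error(`Transcription failed: ${res.status}`);
  const data = (await res.json()) as { text: string };
  return data.text;
}
```

The transcript string can then be forwarded to the LLM for feedback and scoring.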
Accomplishments that we're proud of
- Building a complete interview simulation experience from scratch
- Successfully combining voice answers, code submissions, and AI feedback
- Integrating a LeetCode-style coding workflow into an interview setting
- Designing a clean and intuitive UI that feels supportive rather than stressful
- Implementing behavioral coaching metrics alongside technical evaluation
What we learned
- Interview preparation is as much about communication and confidence as it is about technical skill
- Audio handling and asynchronous flows in the browser require careful design
- Security best practices (environment variables, key management) are critical even in prototypes
- Thoughtful UX decisions can greatly improve how users perceive AI-driven tools
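On the key-management point: with Vite, only environment variables prefixed with `VITE_` are embedded in the client bundle, so anything truly secret has to stay server-side. A minimal, illustrative `.env` fragment (variable names are hypothetical):

```
# .env (git-ignored) — only VITE_-prefixed values reach the browser bundle
VITE_API_BASE_URL=https://api.example.com
```

Client code reads such values via `import.meta.env.VITE_API_BASE_URL`; secrets like LLM API keys belong in a server process's environment instead.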
What's next for RubberDuck2.0
In the future, we’d like to:
- Add real-time transcription and live voice conversations
- Provide deeper analysis of coding solutions and optimization strategies
- Offer interview tracks tailored to specific companies and roles
- Add progress tracking to help users measure improvement over time
RubberDuck2.0 aims to help students walk into interviews feeling prepared, confident, and supported 🦆💻✨
Built With
- gemini
- openai
- react
- typescript
- vite