Inspiration

Behavioral interviews are one of the most high-stakes parts of the job search, yet most people practice them alone in front of a mirror with no real feedback. We wanted to build something that simulates the pressure of a real interview, gives you the honest coaching that a human interviewer never would, and lets you compete and learn alongside others.

What it does

WINterview is a full-stack behavioral interview practice platform. You sign in with Google, select questions from a library of hundreds of behavioral prompts (filterable by category, mastery level, or company), and record a spoken answer. The app transcribes your voice via ElevenLabs STT, analyzes your pacing and filler words locally, then sends your transcript to Google Gemini for AI scoring across five non-overlapping rubric dimensions: structure, specificity and depth, delivery, relevance to the question, and reflection. Each metric gets a score with actionable one-sentence feedback, a pacing timeline chart, and a summary verdict. You can track mastery of questions over time on your profile, share your responses anonymously to the community feed for peer ratings and comments, and earn karma by giving useful feedback to others. There's also a multiplayer mode! Users can create or join a real-time competition room where multiple users answer the same questions simultaneously and are scored on a live leaderboard.
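The local pacing and filler-word analysis mentioned above can be sketched in a few lines. This is an illustrative version only: the filler list, function name, and returned fields are hypothetical, not the app's actual implementation.

```python
import re

# Illustrative filler list; the real app's list may differ.
FILLER_WORDS = {"um", "uh", "like", "basically", "actually"}

def filler_stats(transcript: str, duration_s: float) -> dict:
    """Count filler words and compute overall words-per-minute
    from a transcript and the recording duration in seconds."""
    words = re.findall(r"[a-z']+", transcript.lower())
    fillers = sum(1 for w in words if w in FILLER_WORDS)
    wpm = len(words) / (duration_s / 60) if duration_s > 0 else 0.0
    return {"words": len(words), "fillers": fillers, "wpm": round(wpm, 1)}
```

Running this kind of check locally keeps the pacing feedback instant and avoids spending LLM tokens on something simple word counting handles.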

How we built it

  • Frontend: React + Vite + TS + Tailwind
  • Backend: FastAPI + Motor (MongoDB)
  • Auth: Google Sign-In
  • Voice: ElevenLabs for both TTS (AI interviewer reads questions aloud) and STT (transcribes answers)
  • AI analysis: Google Gemini (for both analyzing responses and determining tags/topics that match with a given company)
  • Multiplayer: WebSocket-based real-time rooms
  • Database: MongoDB Atlas
  • Deployment: Vercel (frontend) + Render (backend)

Challenges we ran into

Getting Gemini to return consistent, schema-valid JSON under all conditions required careful prompt engineering and a robust extraction fallback. Aligning ElevenLabs word-level timestamps with locally computed WPM buckets for the pacing graph also took considerable tuning. Building the WebSocket room system with graceful reconnection, host migration on disconnect, and round synchronization was the most complex piece of the backend. Finally, the company-specific question pipeline, which fetches Brave Search snippets, feeds them through Gemini, and filters out non-behavioral results, took several prompt iterations to get right.
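The extraction fallback for model output can be sketched roughly as below. This is a simplified illustration of the general technique (try clean JSON, then a code fence, then the outermost braces), not our exact shipped code.

```python
import json
import re

def extract_json(raw: str) -> dict:
    """Pull a JSON object out of an LLM reply, tolerating markdown
    code fences and surrounding prose."""
    # First try: the reply is already clean JSON.
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Second try: strip a ```json ... ``` fence if present.
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", raw, re.DOTALL)
    if fenced:
        return json.loads(fenced.group(1))
    # Last resort: grab everything between the outermost braces.
    start, end = raw.find("{"), raw.rfind("}")
    if start != -1 and end > start:
        return json.loads(raw[start:end + 1])
    raise ValueError("no JSON object found in model output")
```

Pairing a fallback like this with a strict schema check on the parsed result catches most of the cases where the model wraps its answer in prose or fences.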

Accomplishments that we're proud of

We came up with what we believe to be a high-quality five-metric rubric that isolates truly distinct dimensions of a behavioral answer, rather than collapsing the whole response into a single verdict score. We also got real-time multiplayer rooms working, where dozens of users can compete on the same questions with a live leaderboard. Building the community feed, with anonymous sharing, peer ratings, comment karma, and like/dislike reactions, was also interesting and rewarding.

What we learned

We learned how to use MongoDB to store and query our data, and gained hands-on experience with WebSockets by building the multiplayer rooms feature from scratch, which taught us a lot about managing real-time communication between multiple users at once.

What's next for WINterview

  • A video mode that records video answers and analyzes eye contact, posture, and facial expressions
  • A conversational AI that follows up with probing questions based on your answer
  • Parse an uploaded resume and generate questions tailored to the candidate's specific experience
  • Use an ML model trained on labeled data to more accurately compute/verify AI-powered ratings
