Inspiration

Interview prep tools either give generic advice or simulate interviews without meaningful evaluation. I wanted something that mimicked how real interviews actually work: structured questions, honest scoring, and targeted improvement.

NextRound was inspired by the gap between practicing answers and actually getting better. The goal wasn’t to build another question generator — it was to create a system that evaluates performance, identifies weaknesses, and forces deliberate practice.

At its core, NextRound follows a feedback loop:

$$ \text{Answer} \rightarrow \text{Evaluation} \rightarrow \text{Weakness Detection} \rightarrow \text{Targeted Practice} $$

That loop is what actually helps candidates improve.

What it does

NextRound is an AI-powered mock interview platform that simulates realistic interview conditions and delivers structured, rubric-based feedback.

Key capabilities:

  • Generates behavioral and role-specific interview questions
  • Evaluates answers using structured scoring:
    • clarity
    • structure
    • depth
  • Produces actionable improvement feedback
  • Detects weakest performance areas
  • Generates targeted follow-up questions
  • Supports session-based interviews (multi-question loops)
  • Includes AI voice mode for immersive practice (questions + feedback)

The scoring model approximates structured evaluation:

$$ \text{Score} = w_s S + w_c C + w_d D $$

Where:

  • S = structure quality
  • C = clarity
  • D = depth

and w_s, w_c, w_d are the corresponding rubric weights.
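In code, the weighted score is a simple dot product over the rubric. This is an illustrative sketch; the 0–10 scale and the specific weight values are assumptions, not NextRound's actual configuration:

```typescript
// Hypothetical rubric shape and weights (assumed, not the production values).
type Rubric = { structure: number; clarity: number; depth: number }; // each 0–10

const WEIGHTS = { structure: 0.4, clarity: 0.3, depth: 0.3 };

function overallScore(r: Rubric): number {
  const raw =
    WEIGHTS.structure * r.structure +
    WEIGHTS.clarity * r.clarity +
    WEIGHTS.depth * r.depth;
  return Math.round(raw * 10) / 10; // report to one decimal place
}
```

For example, an answer scored 8 on structure, 6 on clarity, and 7 on depth would come out to 7.1 under these weights.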

How we built it

Frontend

  • Next.js 16 (App Router)
  • React 19
  • TypeScript
  • Tailwind CSS
  • Framer Motion
  • Custom dark-mode design system (Inter font, minimal UI)

AI & APIs

  • Google Generative AI SDK (Gemini)
  • OpenRouter (multi-LLM fallback)
  • ElevenLabs (voice synthesis)

Supported models:

  • Gemini 2.0 / 2.5 Flash
  • Claude 3.5 Sonnet
  • GPT-4o Mini

Architecture

  • Server Actions + API routes
  • Client/server component separation
  • Provider abstraction layer (multi-AI routing)
  • Question bank + AI generation hybrid
  • Runtime validation with Zod
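The validation layer is what keeps malformed LLM output away from the UI. The app uses Zod for this; below is a dependency-free sketch of the same idea, with an assumed evaluation shape (the real field names may differ):

```typescript
// Assumed shape of an evaluation object returned by the LLM.
interface Evaluation {
  clarity: number;
  structure: number;
  depth: number;
  feedback: string;
}

// Guard for a rubric score on an assumed 0–10 scale.
function isScore(v: unknown): v is number {
  return typeof v === "number" && v >= 0 && v <= 10;
}

// Parse raw model output and reject anything that fails the schema,
// rather than letting a partial object reach the frontend.
function parseEvaluation(raw: string): Evaluation {
  const data = JSON.parse(raw) as Record<string, unknown>;
  if (
    !isScore(data.clarity) ||
    !isScore(data.structure) ||
    !isScore(data.depth) ||
    typeof data.feedback !== "string"
  ) {
    throw new Error("LLM response failed schema validation");
  }
  return data as unknown as Evaluation;
}
```

With Zod the same contract is a `z.object` schema and a `safeParse` call; the point is that validation happens at runtime, at the trust boundary.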

State & Sessions

  • React hooks
  • Local storage persistence
  • 5-question interview session tracking
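The session loop above can be sketched as a small persistence layer. The storage key, session shape, and injected store interface are illustrative assumptions; in the browser the store would be `window.localStorage`:

```typescript
// Assumed session shape and key name.
interface Session {
  role: string;
  questions: string[];
  answers: string[];
}

const SESSION_KEY = "nextround:session";

// Minimal key-value interface so the code also runs outside the browser.
type KVStore = {
  getItem(k: string): string | null;
  setItem(k: string, v: string): void;
};

function saveSession(session: Session, store: KVStore): void {
  store.setItem(SESSION_KEY, JSON.stringify(session));
}

function loadSession(store: KVStore): Session | null {
  const raw = store.getItem(SESSION_KEY);
  return raw ? (JSON.parse(raw) as Session) : null;
}

// Sessions are fixed at five questions.
function isComplete(session: Session): boolean {
  return session.answers.length >= 5;
}
```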

Runtime

  • Node.js backend
  • Edge-compatible design

Challenges we ran into

1. Static feedback problem
Early versions produced identical scores regardless of the answer.
Fix: redesigned prompts to enforce rubric reasoning and structured evaluation.

2. AI quota + token limits
Multiple calls per answer quickly exhausted daily limits.
Fix:

  • consolidated prompts
  • response caching
  • fallback to OpenRouter providers
  • lighter model routing
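The fallback chain can be sketched as a loop over providers in priority order, where a quota or rate-limit error simply advances to the next one. The `Provider` signature is an assumption for illustration, not NextRound's actual abstraction:

```typescript
// A provider takes a prompt and returns the model's text, or throws
// (e.g. on a 429 quota error).
type Provider = (prompt: string) => Promise<string>;

async function askWithFallback(
  prompt: string,
  providers: Provider[], // e.g. [gemini, openRouterClaude, openRouterGpt4oMini]
): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider(prompt); // first success wins
    } catch (err) {
      lastError = err; // quota/limit errors fall through to the next provider
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```

Caching sits in front of this: identical prompts return the stored response and never reach the chain.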

3. Targeted question accuracy
Follow-up questions were initially random.
Fix: weakest-dimension detection:

$$ \text{Weakest Dimension} = \arg\min (S, C, D) $$

Then generate a focused improvement question.
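The arg-min step above is a few lines of code. Dimension names match the rubric; breaking ties toward the first dimension is an assumption:

```typescript
type Dimension = "structure" | "clarity" | "depth";

// Return the lowest-scoring rubric dimension (ties go to the earlier one).
function weakestDimension(scores: Record<Dimension, number>): Dimension {
  const dims: Dimension[] = ["structure", "clarity", "depth"];
  return dims.reduce((worst, d) => (scores[d] < scores[worst] ? d : worst));
}
```

The result is fed into the prompt for the next question, so a candidate who scores 8/5/7 gets a clarity-focused follow-up rather than a random one.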

4. Voice UX design
Reading full feedback aloud overwhelmed users.
Fix:

  • summarized spoken feedback
  • faster playback
  • focused improvement insights only

5. Question normalization bugs
AI responses sometimes returned nested JSON or malformed objects, breaking the UI.
Fix: recursive parsing and validation pipeline.
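The normalization step can be sketched as a recursive unwrap: keep peeling wrapper objects and stringified JSON until a plain question string remains. The specific malformed shapes handled here (`question`/`text` wrappers, JSON-in-a-string) are assumptions based on typical LLM failure modes:

```typescript
// Reduce an arbitrarily wrapped LLM payload to a plain question string.
function normalizeQuestion(value: unknown, depth = 0): string {
  if (depth > 5) throw new Error("Question payload too deeply nested");
  if (typeof value === "string") {
    const trimmed = value.trim();
    // A field may itself contain stringified JSON; try to unwrap it.
    if (trimmed.startsWith("{")) {
      try {
        return normalizeQuestion(JSON.parse(trimmed), depth + 1);
      } catch {
        return trimmed; // not JSON after all; treat as plain text
      }
    }
    return trimmed;
  }
  if (value && typeof value === "object") {
    const obj = value as Record<string, unknown>;
    // Common malformed shapes: { question: ... } or { text: ... } wrappers.
    const inner = obj.question ?? obj.text;
    if (inner !== undefined) return normalizeQuestion(inner, depth + 1);
  }
  throw new Error("Unrecognized question payload");
}
```

Anything the recursion cannot resolve is rejected before it reaches the UI, which is what stopped the rendering breakage.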

Accomplishments that we're proud of

  • Built a full AI interview loop solo from idea to production
  • Created dynamic rubric scoring instead of generic feedback
  • Implemented multi-AI fallback architecture
  • Added real-time voice interviewer experience
  • Designed a consistent dark UI system across the app
  • Enabled session-based practice rather than isolated questions
  • Delivered targeted follow-up generation based on real weaknesses

Most importantly: the system forces improvement, not just practice.

What we learned

  • Feedback quality matters more than question quality
  • AI reliability requires fallback routing and validation
  • Prompt design determines whether outputs feel intelligent
  • Voice interaction dramatically increases immersion
  • Structured evaluation beats open-ended coaching

Interview prep isn’t about exposure — it’s about iteration:

$$ \text{Practice} \rightarrow \text{Feedback} \rightarrow \text{Adaptation} $$

What's next for NextRound

Short term

  • Firebase + persistent user history
  • Performance tracking over time
  • Shareable interview reports
  • Resume-aware evaluation
  • More advanced role-specific question pipelines

Medium term

  • Live conversational voice interviews
  • Recruiter-style probing follow-ups
  • Behavioral vs technical evaluation engines
  • Personalized difficulty progression

Long term

NextRound becomes an adaptive system where:

$$ \text{User Skill} \uparrow \Rightarrow \text{Question Difficulty} \uparrow $$

A platform that continuously evolves with the candidate — from first practice to real interviews.

The goal is simple:

Help people get to the next round.
