Inspiration
70-80% of self-taught developers freeze in behavioral interviews—not from lack of experience, but from an inability to translate freelance work into workplace stories. I experienced this firsthand: I had two years of freelance projects but blanked when asked, "Tell me about a time you resolved a technical disagreement with a teammate." I couldn't reframe "I argued with a client about API design" into "I navigated stakeholder disagreement while maintaining technical standards."
Current solutions focus on technical prep (LeetCode) or cost $100+/hour for coaching. Self-taught developers need affordable, personalized practice that accounts for non-traditional backgrounds.
What it does
AI Interview Coach generates 3 personalized behavioral questions using Gemini 3, based on:
- Years of experience
- Tech stack
- Target role
- Biggest interview fear
Users type their answers (500 char max), then receive specific feedback:
- ✅ What worked well (2-3 points)
- 💡 What to improve (actionable suggestions)
- 📝 Optional reframed example
The entire session takes 15-20 minutes. Users practice until answers feel "boring"—the Reddit-validated method for reducing anxiety.
How we built it
Tech stack: Next.js 14, React, Tailwind CSS, Gemini 3 API, localStorage
Gemini integration (2 endpoints):
POST /api/generate-questions
- Sends the user's background to Gemini
- Prompts Gemini to generate questions that:
  - Are specific to a self-taught background
  - Address the stated fear (e.g., "I freeze on failure questions")
  - Follow the STAR method structure
  - Match target-role expectations
- Returns 3 personalized questions
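The heart of this endpoint is the prompt sent to Gemini. A minimal sketch of what that prompt construction might look like—the `UserBackground` shape and `buildQuestionPrompt` name are illustrative assumptions, not the actual project code:

```typescript
// Hypothetical prompt builder for the generate-questions endpoint.
// Field names are assumptions based on the four inputs described above.
interface UserBackground {
  yearsOfExperience: number;
  techStack: string[];
  targetRole: string;
  biggestFear: string;
}

function buildQuestionPrompt(bg: UserBackground): string {
  return [
    "Generate 3 behavioral interview questions for a self-taught developer.",
    `Background: ${bg.yearsOfExperience} years of freelance experience with ${bg.techStack.join(", ")}.`,
    `Target role: ${bg.targetRole}.`,
    `Their biggest interview fear: "${bg.biggestFear}".`,
    "Each question must be answerable with the STAR method and should",
    "acknowledge a non-traditional (freelance/self-taught) background.",
  ].join("\n");
}
```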
POST /api/analyze-answer
- Sends the question + answer + background to Gemini
- Prompts Gemini to:
  - Check STAR method completeness
  - Identify strengths (concrete details, growth mindset)
  - Suggest improvements (add metrics, clarify impact)
  - Generate a reframed example if needed
- Returns structured feedback
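One way to keep that feedback structured is to ask Gemini for JSON and validate the shape before rendering. The sketch below is an assumption about how that could look—the `AnswerFeedback` fields and `parseFeedback` helper are hypothetical names, not the project's actual API:

```typescript
// Illustrative shape for the structured feedback sections (✅ / 💡 / 📝).
// Assumes Gemini is prompted to reply with JSON; field names are hypothetical.
interface AnswerFeedback {
  strengths: string[];      // ✅ what worked well (2-3 points)
  improvements: string[];   // 💡 actionable suggestions
  reframedExample?: string; // 📝 optional rewritten answer
}

function parseFeedback(raw: string): AnswerFeedback {
  const data = JSON.parse(raw);
  return {
    strengths: Array.isArray(data.strengths) ? data.strengths : [],
    improvements: Array.isArray(data.improvements) ? data.improvements : [],
    reframedExample:
      typeof data.reframedExample === "string" ? data.reframedExample : undefined,
  };
}
```

Validating instead of trusting the model's output means a malformed response degrades to empty lists rather than a crashed UI.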
Data flow: Browser localStorage only (no backend/auth for MVP)
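A minimal sketch of that localStorage-only persistence, assuming a single session key (the key name and `SessionState` shape are assumptions). Accepting a `StorageLike` parameter keeps the helpers testable outside the browser:

```typescript
// Hypothetical session persistence for the no-backend MVP.
interface SessionState {
  questions: string[];
  answers: string[];
}

// Structural subset of the browser Storage API, so tests can pass a mock.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const SESSION_KEY = "interview-coach-session"; // assumed key name

function saveSession(state: SessionState, storage: StorageLike): void {
  storage.setItem(SESSION_KEY, JSON.stringify(state));
}

function loadSession(storage: StorageLike): SessionState | null {
  const raw = storage.getItem(SESSION_KEY);
  return raw ? (JSON.parse(raw) as SessionState) : null;
}
```

In the browser this would be called as `saveSession(state, window.localStorage)`.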
Challenges we ran into
- Network errors: Users hit timeouts during API calls and lost all progress. Fixed with a 25-second timeout wrapper plus a localStorage save before each call so sessions can be recovered.
- Feedback length: Early users said the AI feedback was "bloated." We iterated on the prompts three times, cutting length by 50% while keeping specificity.
- First-question drop-off: 75% of users quit after viewing Q1. Root cause: intimidation. Once users submitted a first answer, 100% completed all three questions.
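The timeout wrapper from the first fix can be sketched with `Promise.race`—a minimal version under the assumption that any pending API call is wrapped this way (the `withTimeout` name and signature are illustrative):

```typescript
// Hypothetical 25-second timeout wrapper: rejects if the wrapped promise
// (e.g. a fetch to a Gemini endpoint) hasn't settled within `ms`.
function withTimeout<T>(promise: Promise<T>, ms = 25_000): Promise<T> {
  return Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`Request timed out after ${ms}ms`)), ms),
    ),
  ]);
}
```

Combined with saving state to localStorage before each call, a timeout surfaces as a recoverable error rather than lost progress.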
Accomplishments that we're proud of
- Shipped MVP in 7 hours (Jan 1, 2026)
- 100% completion rate after first answer (5/5 users who submitted Q1 finished all 3)
- Industry testimonials:
- Christian Deen (40-year contractor): "Helped me practice & refresh my interviewing skills"
- Gideon Aswani (COO, Pathways Technologies): "Helpful for prepping, polishing, and focusing"
- 87.5% would use for real interviews (7/8 users)
- Validated willingness to pay: 50% would pay $10 for tiered access
What we learned
- The problem is structural, not technical. Self-taught devs don't lack experience—they lack a framework to articulate it under pressure.
- Gemini excels at contextual personalization. Generic questions don't work. Questions like "Tell me about a time you learned a new technology under a tight deadline in one of your freelance projects" resonate because they acknowledge non-traditional backgrounds.
- Feedback must be specific. Users rejected generic praise ("Good job!"). They wanted: "Add the business impact—did the client approve it? Were there follow-up projects?"
What's next
- Payment integration: Stripe checkout for $10 tiered access (3/6/12 months)
- Replay feature: Let users revisit past questions and refine answers (requested by 2/5 users)
- Technical interview module: Expand beyond behavioral to system design + coding (deferred until 10 paying customers validate behavioral focus)