Inspiration

The path to joining top climate tech companies like Watershed feels arduous. How could we possibly stand out against thousands of talented candidates vying for the same roles? Our group realized the key was practicing interviews against dynamic, realistic personas - not just scripted question-and-answer sessions.

Conventional practice paled in comparison to a seasoned interviewer's probing follow-ups and personality shifts. That's when inspiration struck - creating an AI system that actively listened to our backgrounds and responded as fully adaptive personas, from savvy investors to free-spirited designers. With our phone screen agent, we could hone our skills against a litany of reactive personalities until we were truly confident and poised for any interview scenario. It was the spark we needed to secure our destiny at the climate vanguard.

What it does

Interviewing is nerve-wracking, especially with the unpredictable back-and-forth you'll encounter. Our phone screen agent helps you prepare by creating an interactive, realistic practice experience tailored to the role you're pursuing. Provide your resume and the job details, and our AI will role-play as an interviewer - asking personalized questions based on your background, listening to your responses, and replying with intelligent follow-ups, just like a real interviewer would.

But it gets even better. The agent can take on multiple distinct interviewer personas, from a fast-paced Wall Street type to a creative director, allowing you to prep for different styles and curveballs you may face. Unlike scripted practice, this dynamic interaction forces you to think critically and articulate your thoughts clearly. You can practice as much as needed, honing your skills in a low-pressure environment before the real deal. With this tool, you'll walk into interviews calm, prepared, and ready to genuinely converse with and impress your potential employer.

How we built it

We integrated OpenAI's Whisper for speech-to-text transcription and its text-to-speech model for voice synthesis with Anthropic's Claude 3 LLMs (Opus, Sonnet, and Haiku). Python code orchestrates the data processing pipeline:

  • Extract text from job descriptions/resumes using PyPDF2.
  • Use Claude to generate distinct interviewer persona descriptions.
  • Fuse personas with job details using few-shot prompting with Claude.
  • Generate interview questions from fused personas/resumes with Claude.
  • Define good/bad answer criteria for each question using Claude.
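The pipeline above can be sketched roughly as follows. This is an illustrative reconstruction, not our actual code: the function names and prompt wording are made up, and it assumes an `ANTHROPIC_API_KEY` environment variable is set.

```python
def extract_pdf_text(path: str) -> str:
    """Pull raw text out of a resume or job-description PDF."""
    from PyPDF2 import PdfReader  # third-party: pip install PyPDF2
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def build_question_prompt(persona: str, job: str, resume: str) -> str:
    """Fuse a persona description with job/resume context into one prompt."""
    return (
        f"You are this interviewer persona:\n{persona}\n\n"
        f"Job description:\n{job}\n\n"
        f"Candidate resume:\n{resume}\n\n"
        "Write five interview questions tailored to this candidate. For each "
        "question, describe what a good and a bad answer would look like."
    )

def generate_questions(persona: str, job: str, resume: str) -> str:
    """Ask Claude for persona-specific questions plus answer criteria."""
    import anthropic  # third-party: pip install anthropic
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    reply = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": build_question_prompt(persona, job, resume),
        }],
    )
    return reply.content[0].text
```

Keeping the prompt assembly in a pure function makes it easy to iterate on wording for each persona without burning API calls.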

These modules enable voice-based conversational practice interviews: Whisper transcribes each user response, Claude generates the interviewer persona's reply from the fused persona/job context, and OpenAI's TTS voices it. This provides realistic, dynamic interview prep across diverse roles.
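A single turn of that voice loop might look like the sketch below. Again this is a hedged approximation, not our exact implementation: the function names, model choices, voice, and file paths are illustrative, and it assumes `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` are set and that the candidate's answer has already been recorded to a WAV file.

```python
def add_turn(history: list, role: str, text: str) -> list:
    """Append a turn without mutating the original, keeping the
    alternating user/assistant structure Claude's Messages API expects."""
    return history + [{"role": role, "content": text}]

def interview_turn(history: list, system_prompt: str,
                   audio_path: str = "answer.wav") -> list:
    """Transcribe the candidate, get the persona's reply, and voice it."""
    from openai import OpenAI  # third-party: pip install openai
    import anthropic           # third-party: pip install anthropic
    oai = OpenAI()

    # 1. Transcribe the candidate's spoken answer with Whisper.
    with open(audio_path, "rb") as f:
        text = oai.audio.transcriptions.create(model="whisper-1", file=f).text
    history = add_turn(history, "user", text)

    # 2. Ask Claude, in persona, for the interviewer's next line.
    reply = anthropic.Anthropic().messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=512,
        system=system_prompt,  # the fused persona + job context
        messages=history,
    ).content[0].text
    history = add_turn(history, "assistant", reply)

    # 3. Voice the reply with OpenAI text-to-speech and save it for playback.
    speech = oai.audio.speech.create(model="tts-1", voice="onyx", input=reply)
    speech.write_to_file("interviewer.mp3")
    return history
```

Passing the full `history` back into Claude on every turn is what lets the persona ask follow-ups grounded in everything the candidate has said so far.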

Challenges we ran into

  • Integrating models from both OpenAI and Anthropic into one pipeline
  • Crafting prompts that generate genuinely personalized questions
  • Keeping the back-and-forth personalized throughout the interview

Interesting/Entertaining Things

  • We noticed emergent phenomena: the AI agent would sometimes cut the human off, apologize, and ask a follow-up question even while the human was still talking
  • We noticed that the Claude model also describes its own facial expressions when responding. This was super interesting and suggests that embodied robotics will be powerful.

Accomplishments that we're proud of

  • The agent is realistic and follows up on answers seamlessly, drawing on the full resume and job context
  • The conversation flows as a natural back-and-forth, and the agent asks questions that were never explicitly programmed
  • We were really happy with the quality of the responses

What we learned

  • Orchestrating multiple AI systems from different companies together
  • Creating real-time conversational agents

What's next for InterviewPilot.AI

  • Progressively increasing persona difficulty from level to level, from initial screen to hiring manager, so people can gamify the experience
  • Creating an innovative front-end interface for the interviewee
