Hireon: Your Unfair Interview Advantage
Inspiration
The modern job market is brutal. For candidates, applying to jobs often feels like throwing resumes into a black hole. When you finally land that coveted interview, the pressure is immense.
We realized that current interview preparation methods fall short. They consist either of generic "Top 50 Interview Questions" lists that ignore your specific experience, or of text-based AI chatbots that fail to capture the real-time pressure of a verbal conversation. Texting a chatbot is not interviewing.
Our goal was to solve the Personalization vs. Realism gap. We wanted to build an AI that doesn't just ask questions, but acts as a genuine sparring partner—one that actually knows you, knows the target company inside out, and talks back in real time. We built Hireon to give job seekers an unfair interview advantage by replacing passive study with active, personalized practice.
What it does
Hireon is a full-stack, AI-native interview preparation platform that uses a unified flow to turn prep into performance.
The platform guides the user through a six-stage lifecycle:
- Input: The user uploads their existing resume and the specific job description (JD) they are applying for.
- Researching: Hireon’s AI doesn’t just read the text; it acts as a corporate researcher. It analyzes the candidate against the JD, researches the target company's culture and current affairs, and builds a complete profile.
- Prep Pack: The user receives a personalized, one-page strategic plan. This includes a unique "Resume Risk Detector" (identifying holes interviewers will target), a visual skills radar chart, a tailored interview plan, and even a crafted 60-second elevator pitch customized for this specific application.
- Mock Interview: This is the core experience. The user engages in a real-time, bidirectional voice conversation with an AI interviewer. This is not push-to-talk. The AI interviewer can vary its persona, offer follow-up questions, and even activate "Pressure Mode" for aggressive stress-testing.
- Analyzing: Following the interview, the AI reviews the conversation transcript. Crucially, it combines this with periodic webcam snapshots taken during the interview to analyze body language, eye contact, and posture.
- Report: The user receives a comprehensive performance report. This includes an overall readiness score, actionable metrics (Communication, Technical, Confidence), radar charts, behavioral/body language insights derived from the visual analysis, and "Tonight's Crash Plan"—a targeted study guide for immediate improvement.
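The six-stage lifecycle above maps naturally onto a simple linear state machine. A minimal TypeScript sketch of that flow (the stage identifiers are illustrative, not Hireon's actual ones):

```typescript
// Hypothetical stage names mirroring the six-stage lifecycle described above.
type Stage =
  | "input"
  | "researching"
  | "prepPack"
  | "mockInterview"
  | "analyzing"
  | "report";

const STAGE_ORDER: Stage[] = [
  "input",
  "researching",
  "prepPack",
  "mockInterview",
  "analyzing",
  "report",
];

// Advance to the next stage; "report" is terminal, so the flow stays there.
function nextStage(current: Stage): Stage {
  const i = STAGE_ORDER.indexOf(current);
  return i < STAGE_ORDER.length - 1 ? STAGE_ORDER[i + 1] : current;
}
```

Encoding the flow as a union type lets the frontend exhaustively switch on the current stage, so forgetting to render a screen becomes a compile-time error.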
How we built it
Hireon is built around a modern tech stack focused on high-performance streaming and refined UI. The application pairs a React 19 state-machine frontend with a specialized voice-proxy backend.
The Tech Stack
- Frontend: React 19, TypeScript 5.8, Vite 6, Tailwind CSS 4 (Oxide engine), and Motion (Framer Motion) 12 for smooth, premium UI transitions.
- Backend: Express + Socket.IO (serving as the "Sonic Server" WebSocket proxy to Nova 2 Sonic) and `@smithy/node-http-handler` for managing complex bidirectional HTTP/2 streams.
- AI (Foundation): Amazon Bedrock Foundation Models.
The Amazon Nova Architecture
Hireon isn’t just compatible with Amazon Nova—it is architecturally dependent on it. We didn’t chain separate services (e.g., STT -> LLM -> TTS); we used Nova to create a native multimodal experience.
- Smart Research & Reasoning (Nova 2 Lite): We used Nova 2 Lite for every non-voice AI task. Because it handles PDFs, images, and text natively with a 1M token context, it was the only model capable of reading resumes as documents (not raw text) and instantly generating the complex, structured JSON required for the fit maps and risk analysis.
- Live Mock Interview (Nova 2 Sonic): This is the game-changer. Nova 2 Sonic provides native speech-to-speech bidirectional streaming over a single HTTP/2 connection. Most AI voice tools have compounding latency due to multi-service chaining. Nova Sonic has sub-second latency and built-in Voice Activity Detection (VAD), making the mock interview feel like talking to a real person. No orchestration overhead, just natural conversation.
- Performance Analysis (Multimodal Nova 2 Lite): Post-interview, we used Nova 2 Lite (Extended Thinking) to perform deep reasoning over the visual data (webcam snapshots) and the textual data (the transcript) simultaneously, delivering nuanced feedback other models miss.
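As a rough illustration of what a single multimodal call like this looks like, here is how a resume PDF and webcam snapshots might be packed into one Bedrock Converse-style request. The model ID, field values, and helper name are our own assumptions, not Hireon's actual code:

```typescript
// Sketch: one request carrying a PDF document block plus JPEG image blocks,
// following the shape of Bedrock's Converse API content blocks.
type ContentBlock =
  | { text: string }
  | { document: { format: "pdf"; name: string; source: { bytes: Uint8Array } } }
  | { image: { format: "jpeg"; source: { bytes: Uint8Array } } };

function buildAnalysisRequest(
  resumePdf: Uint8Array,
  snapshots: Uint8Array[],
  prompt: string
) {
  const content: ContentBlock[] = [
    { text: prompt },
    { document: { format: "pdf", name: "resume", source: { bytes: resumePdf } } },
    ...snapshots.map((bytes) => ({
      image: { format: "jpeg" as const, source: { bytes } },
    })),
  ];
  return {
    modelId: "amazon.nova-lite-v1:0", // placeholder: use the model enabled in your account
    messages: [{ role: "user" as const, content }],
  };
}
```

The point of the architecture is visible in the shape itself: the resume, the prompt, and every snapshot travel in one message, so no separate OCR or image-analysis service is needed.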
Challenges we ran into
1. Building the Bridge (Sonic Backend)
We needed to integrate a modern browser (using Socket.IO audio chunks) with the Bedrock API (requiring a specialized, persistent HTTP/2 bidirectional stream). We had to build a custom Express backend to bridge these protocols efficiently, maintaining sub-second latency while managing the asynchronous handshake with Bedrock’s InvokeModelWithBidiStreamCommand. This was the most significant architectural hurdle.
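The heart of such a bridge is converting push-style Socket.IO events into the pull-style async iterable that a bidirectional stream consumes. A simplified, hypothetical sketch of that adapter (class and event names are ours, not Hireon's):

```typescript
// Sketch: an async queue that buffers pushed audio chunks and hands them
// out one at a time to whoever iterates it (e.g. a bidirectional stream).
class AudioChunkQueue implements AsyncIterable<Uint8Array> {
  private chunks: Uint8Array[] = [];
  private waiters: ((r: IteratorResult<Uint8Array>) => void)[] = [];
  private closed = false;

  // Called from the Socket.IO "audio" event handler.
  push(chunk: Uint8Array): void {
    const waiter = this.waiters.shift();
    if (waiter) waiter({ value: chunk, done: false });
    else this.chunks.push(chunk);
  }

  // Called when the client hangs up; wakes any pending consumer.
  close(): void {
    this.closed = true;
    for (const w of this.waiters.splice(0)) {
      w({ value: undefined, done: true } as IteratorResult<Uint8Array>);
    }
  }

  [Symbol.asyncIterator](): AsyncIterator<Uint8Array> {
    return {
      next: () => {
        const chunk = this.chunks.shift();
        if (chunk) return Promise.resolve({ value: chunk, done: false });
        if (this.closed) {
          return Promise.resolve(
            { value: undefined, done: true } as IteratorResult<Uint8Array>
          );
        }
        return new Promise((resolve) => this.waiters.push(resolve));
      },
    };
  }
}
```

With an adapter like this, the HTTP/2 side simply does `for await (const chunk of queue)`, and latency stays low because chunks are forwarded the moment they arrive rather than being batched.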
2. Managing AI Turn-Taking & Deduping
Nova Sonic is extremely fast, which presented a unique issue: duplicate turns during user interruptions, and long responses split across multiple blocks. We had to architect a two-layer deduplication engine:
- Server Layer: We detected `{ interrupted: true }` JSON noise and dropped it mid-stream. We also compared new AI turns to previous turns in real time. If the text matched (using fuzzy comparison on early characters), we suppressed the entire duplicate block—text and audio—before it ever reached the browser, logging it as `AI_TURN_DUPLICATE_SUPPRESSED`.
- Client Layer: We implemented a 2-second "merge window" to stitch continuation content blocks together into a single bubble, deferring turn finalization until we confirmed no continuation was coming.
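The server-side prefix comparison described above can be sketched roughly like this (the prefix length, normalization, and function name are illustrative guesses, not Hireon's exact tuning):

```typescript
// Sketch: fuzzy duplicate-turn detection by comparing normalized prefixes.
const PREFIX_LEN = 40; // illustrative window; real tuning may differ

function isDuplicateTurn(prevTurn: string, newTurn: string): boolean {
  // Normalize case and whitespace so formatting noise doesn't defeat the match.
  const norm = (s: string) => s.toLowerCase().replace(/\s+/g, " ").trim();
  const a = norm(prevTurn).slice(0, PREFIX_LEN);
  const b = norm(newTurn).slice(0, PREFIX_LEN);
  if (!a || !b) return false;
  // Fuzzy: one prefix containing the other counts as a duplicate.
  return a.startsWith(b) || b.startsWith(a);
}
```

Comparing only early characters keeps the check cheap enough to run mid-stream, before any audio for the duplicate turn is forwarded to the browser.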
3. Developing the Safeguards
Integrating an AI interviewer required guardrails. We built an "Interview Termination System" that functions at both the AI and client level. We had to program the AI to recognize when a user isn't taking it seriously and end the session professionally. Simultaneously, we built client-side profanity/behavior detectors with a warning counter (2 strikes) that can terminate the interview independently of the AI, complete with a full-screen termination overlay.
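The client-side two-strike counter could look something like this minimal sketch (class name, threshold default, and method names are our assumptions):

```typescript
// Sketch: a strike counter that terminates the interview independently
// of the AI once the warning limit is reached.
class TerminationGuard {
  private strikes = 0;

  constructor(private readonly maxStrikes = 2) {}

  // Record one profanity/behavior violation; returns true when the
  // client should show the termination overlay and end the session.
  recordViolation(): boolean {
    this.strikes += 1;
    return this.strikes >= this.maxStrikes;
  }

  get warningsIssued(): number {
    return this.strikes;
  }
}
```

Keeping this logic on the client means the session can be cut off even if the AI side fails to recognize the misbehavior.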
Accomplishments that we're proud of
We successfully built a complete application for the Multimodal Understanding category that uses text (resumes/JDs), documents (PDFs), audio (live speech), and images (webcam snapshots) to create a single, elegant end-to-end user experience.
We are incredibly proud to have built an application where native speech-to-speech streaming is the core feature, rather than a bolted-on gimmick. Achieving sub-second latency in a real-world interview setting makes the preparation feel legitimate and high-stakes.
Additionally, we feel that unique features like the Resume Risk Detector (which thinks like a cynical recruiter to find your weaknesses) and the fully personalized 60-second elevator pitch generation provide measurable value to job seekers.
What we learned
We deepened our knowledge of Amazon Bedrock and modern foundation models, specifically learning how native multimodal models (like Nova 2 Lite) drastically simplify application architecture by replacing document-processing OCR hacks, image analysis services, and text generation steps with a single API call.
We gained extensive experience working with asynchronous streaming architectures. Implementing HTTP/2 bidirectional connections on the backend while maintaining WebSocket state with the frontend was a massive learning experience in real-time data flow management.
We also learned the importance of robust server-side logging for debugging complex AI systems. Our structured per-session logging—tracking every suppressed duplicate, audio stat, and filtered protocol noise—was essential for ironing out the conversation flow.
What's next for Hireon
We have several key features on our roadmap for Hireon:
- Integrated Multi-Tenant Security: Implementing user accounts to securely store past reports and track progress over time.
- Expanded Interview Analytics: Utilizing Nova models to integrate deeper company insights, such as recent news, financial reports, or earnings calls, to create even more tailored mock interview questions.
- Expanded HR Integrations: Exploring possibilities to integrate directly with application platforms, allowing users to import JD links directly.
- Real-Time Visual Feedback: Transitioning body language analysis from a post-report feature to live, gentle visual feedback during the mock interview (e.g., a simple color indicator for eye contact consistency).
Built With
- amazon-web-services
- nova
- react
- tailwind
- typescript
- vite
