Inspiration

Our team was inspired by the challenges many job seekers face when preparing for technical and behavioral interviews. The process can be stressful, and getting quality, personalized feedback is often difficult and expensive. We wanted to leverage the power of generative AI to create an accessible, intelligent, and supportive tool that acts as a personal interview coach, helping users build confidence and ace their interviews.

What it does

AceMock AI is an intelligent mock interview platform designed to help users practice and improve their interviewing skills. It uses Google’s powerful Gemini AI to simulate a realistic interview experience. The application can generate relevant interview questions for various roles and industries, listen to the user's responses, and provide constructive feedback on the clarity, content, and structure of their answers. The goal is to provide a safe and effective environment for users to practice, identify areas for improvement, and ultimately ace their next real interview.

How we built it

AceMock AI was built as a modern, interactive web application powered by Google’s Gemini AI and a full JavaScript-based stack.

At its core, the platform integrates directly with Gemini 2.5 Flash through the @google/generative-ai Node.js SDK, enabling real-time question generation and feedback analysis. The main logic is handled inside a lightweight Node environment, with all AI calls centralized in the gemini.js module. This module securely retrieves the API key from environment variables, connects to Gemini, and exposes a reusable getAIResponse() function that powers both interview question generation and answer evaluation.
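The gemini.js module described above might look roughly like this. This is a minimal sketch assuming the @google/generative-ai SDK; the lazy initialization and exact helper names are our illustration, not the project's actual code:

```javascript
// gemini.js — minimal sketch of the centralized Gemini wrapper.
// Assumes @google/generative-ai is installed and GEMINI_API_KEY is set.
let model; // lazily-initialized Gemini model handle

async function getModel() {
  if (!model) {
    // Dynamic import keeps this module loadable even before the SDK is needed.
    const { GoogleGenerativeAI } = await import("@google/generative-ai");
    const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
    model = genAI.getGenerativeModel({ model: "gemini-2.5-flash" });
  }
  return model;
}

// Single reusable entry point for both question generation and answer evaluation.
async function getAIResponse(prompt) {
  const m = await getModel();
  const result = await m.generateContent(prompt);
  return result.response.text();
}
```

Both the question-generation and answer-evaluation flows can then call getAIResponse() with different prompts, keeping all API-key handling in one place.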

On the frontend, the app is built using React.js, providing a smooth, interactive user experience. The interface, developed in App.jsx, handles user input, text-to-speech (TTS), and speech-to-text (STT) functionality, allowing users to speak their answers aloud. The app then sends their response to Gemini, which evaluates it and returns detailed feedback that includes identified strengths, weaknesses, and example improved answers.
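The evaluation step can be sketched as a prompt builder plus a small parser. The field names and prompt wording below are assumptions for illustration, not the project's exact prompts:

```javascript
// Build the answer-evaluation prompt sent to Gemini (illustrative wording).
function buildFeedbackPrompt(question, answer) {
  return [
    "You are an interview coach. Evaluate the candidate's answer.",
    `Question: ${question}`,
    `Answer: ${answer}`,
    'Respond only with JSON: {"strengths": [], "weaknesses": [], "improvedAnswer": ""}',
  ].join("\n");
}

// LLMs often wrap JSON replies in a markdown fence, so strip it before parsing.
function parseFeedback(raw) {
  const cleaned = raw
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/```\s*$/, "");
  return JSON.parse(cleaned);
}
```

Asking for a fixed JSON shape makes the strengths, weaknesses, and improved answer easy to render as separate sections of the feedback modal.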

For text-to-speech playback and speech recognition, we used the Web Speech API, giving the app a conversational, hands-free experience that also boosts accessibility. Feedback and questions are dynamically displayed in a responsive modal interface, with smooth transitions and animations for better UX.
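The Web Speech API wiring can be sketched as two small helpers. These are browser-only APIs, so the sketch guards for their absence; the function names are illustrative, not the project's actual code:

```javascript
// Speak a question or feedback aloud via SpeechSynthesis (browser-only).
// Returns false when the API is unavailable (e.g. outside a browser).
function speak(text) {
  if (typeof window === "undefined" || !window.speechSynthesis) return false;
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 1.0;
  window.speechSynthesis.speak(utterance);
  return true;
}

// Capture a spoken answer via SpeechRecognition and pass the transcript on.
// Returns null when the API is unavailable.
function listen(onResult) {
  const SR =
    typeof window !== "undefined" &&
    (window.SpeechRecognition || window.webkitSpeechRecognition);
  if (!SR) return null;
  const recognition = new SR();
  recognition.lang = "en-US";
  recognition.interimResults = false;
  recognition.onresult = (e) => onResult(e.results[0][0].transcript);
  recognition.start();
  return recognition;
}
```

Note that SpeechRecognition is still prefixed as webkitSpeechRecognition in Chromium-based browsers, hence the fallback.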

Styling and layout are managed entirely through modern CSS, using custom design tokens, responsive grids, and glassmorphic UI elements defined in index.css. The theme system allows users to toggle between light and dark modes instantly.
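The design-token approach can be sketched like this; the variable names and colors are illustrative, not the project's actual index.css:

```css
/* Illustrative design tokens — names and values are assumptions. */
:root {
  --bg: #f7f8fa;
  --text: #1a1a1a;
  --card-bg: rgba(255, 255, 255, 0.55); /* glassmorphic card surface */
}

/* Dark mode swaps token values; components need no changes. */
[data-theme="dark"] {
  --bg: #121212;
  --text: #f0f0f0;
  --card-bg: rgba(30, 30, 30, 0.55);
}

.card {
  background: var(--card-bg);
  backdrop-filter: blur(12px); /* the "glass" effect */
  color: var(--text);
}
```

Because every component reads colors through the tokens, toggling a single data-theme attribute on the root element restyles the whole app instantly.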

We used Vite for fast development and hot module replacement, and Babel (with @jridgewell/remapping for source maps) for browser compatibility and debugging. The project is structured for clarity, separating components (App.jsx), logic (gemini.js), and styles (index.css) while maintaining scalability for future feature additions.

The result is a clean, AI-powered mock interview platform that blends voice interaction, real-time AI coaching, and modern frontend design into a single cohesive experience.

Challenges we ran into

One of the main challenges we encountered was related to our core AI dependency. During development, we learned that the @google/generative-ai SDK we were using has been deprecated in favor of the newer Google Gen AI SDK (@google/genai). This required us to carefully consider our development path and plan for a future migration to ensure long-term stability and access to the latest features from Google's AI models.

We also faced issues ensuring consistent communication with the Gemini API, especially under varying network conditions. Implementing retry logic with exponential backoff helped make our app significantly more resilient. Another challenge was balancing real-time feedback responsiveness with processing quality, which required optimizing API payload sizes and prompt chaining.
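The retry logic with exponential backoff can be sketched like this. It is a simplified illustration; the actual app may use different delays, caps, and error handling:

```javascript
// Retry an async operation with exponential backoff: 100ms, 200ms, 400ms, ...
async function withRetry(fn, { retries = 3, baseMs = 100 } = {}) {
  let lastErr;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn(); // success: return immediately
    } catch (err) {
      lastErr = err;
      if (attempt === retries) break; // out of attempts
      const delay = baseMs * 2 ** attempt; // double the wait each time
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastErr; // surface the final failure to the caller
}
```

Wrapping each Gemini call in withRetry() lets transient network hiccups resolve themselves before the user ever sees an error.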

Additionally, our team had to pivot midway through the hackathon. We initially started developing a pickleball tracking project using computer vision; however, one of our teammates dislocated their ankle, and our development schedule was severely impacted. With limited time left, we decided to switch to our backup plan, AceMock AI. This required quickly restructuring our project plan, reassigning roles, and shifting our entire focus to an AI-based web application. Despite this sudden transition, we successfully completed a functional prototype before the deadline.

Accomplishments that we're proud of

We are proud of building a functional AI-powered application that directly addresses a real-world problem. Integrating a large language model to provide personalized and constructive interview feedback was a major technical accomplishment. We successfully created a real-time interview feedback loop where the system generates questions, listens to answers, evaluates responses, and provides helpful insights.

We’re also proud of the platform’s reliability and visual polish. The combination of resilient backend communication and a clean, interactive UI made AceMock AI both functional and enjoyable to use within a short development window.

Beyond its technical success, AceMock AI has the potential to make a global impact. The platform can help millions of job seekers worldwide, particularly those without access to expensive coaching or professional mentorship. By offering free or low-cost personalized practice sessions, it empowers individuals to build confidence, improve communication, and perform better in interviews, regardless of their background or location.

Another accomplishment we’re especially proud of is how accessible AceMock AI is. By integrating Speech-to-Text (STT) and Text-to-Speech (TTS) capabilities, we made it possible for users to both speak their answers and hear the questions or feedback aloud, making the platform usable for individuals with visual or mobility challenges, or for those who prefer auditory learning. This voice-driven interaction also creates a more natural, conversation-like experience, allowing users to practice interviews the way they would in real life.

As more people use AceMock AI, we envision it boosting global interview readiness, improving career outcomes, and fostering economic mobility through better employment opportunities.

What we learned

Through building AceMock AI, we gained deep technical experience across the entire JavaScript ecosystem and learned how to design a scalable, production-ready AI application within a short time frame.

Working with Node.js, Express, and the @google/generative-ai SDK taught us how to structure robust backend services that handle real-time communication between the frontend and external APIs. We learned how to manage asynchronous workflows, handle rate limits, and implement retry and error recovery mechanisms using utilities like @humanwhocodes/retry, which strengthened the system’s resilience.

This resilience work paid off on the backend: intermittent Gemini API issues no longer broke the user flow, and failures surfaced gracefully instead of crashing the interview session.

On the AI side, we developed strong proficiency in prompt engineering and LLM integration, experimenting with how Gemini interprets different question formats, context windows, and feedback prompts. This helped us fine-tune the accuracy and tone of AI-generated feedback.

We deepened our understanding of React.js by using its state management and component-based design to build an interactive, dynamic interface that responds instantly to user actions. Implementing Speech-to-Text (STT) and Text-to-Speech (TTS) using the Web Speech API taught us how to make the app more accessible and voice-driven, creating an inclusive experience for users who prefer or rely on auditory interaction.

We also strengthened our grasp of module-based architecture and API routing in Express, ensuring that our backend remained clean, maintainable, and easy to extend for future features like user authentication and role customization. Working with Babel and @jridgewell/remapping likewise improved our understanding of JavaScript build pipelines and debugging through source maps.

What's next for AceMock AI

Future feature plans include:

  • Expanding interview support to include technical, behavioral, and case-study question types.
  • Implementing progress-tracking dashboards with feedback reports that visualize growth over time.
  • Adding emotion and tone analysis for spoken responses using speech recognition models.
  • Allowing greater customization of interviews based on company, role, and difficulty level.
  • Exploring a mobile-friendly version of the app for on-the-go interview practice.
  • Adding a built-in code compiler so users can input code for technical questions, with AI checking the output and accuracy.
