Interview Prep App: A Journey in AI-Powered Learning
💡 Inspiration
The inspiration for this project came from a simple observation: job interview preparation is stressful, expensive, and often inaccessible. Many people can't afford professional interview coaches, and practicing alone in front of a mirror doesn't provide the critical feedback needed to improve.
I wanted to create a tool that democratizes interview preparation by:
- Making it free and accessible to everyone
- Providing intelligent, personalized feedback on responses
- Creating a safe, judgment-free environment to practice
- Running entirely in the browser for maximum privacy
The goal was simple: build an AI-powered interview coach that anyone can use, anywhere, without worrying about costs, privacy, or internet connectivity.
🎓 What I Learned
This project was a deep dive into several cutting-edge technologies and concepts:
1. Browser-Based AI with Transformers.js
The biggest learning curve was implementing AI models that run entirely in the browser using @xenova/transformers. This library uses WebAssembly and Web Workers to execute Hugging Face models locally, which was fascinating to work with.
Key insights:
- Model quantization and optimization for browser execution
- Managing asynchronous model loading and caching
- Balancing model size vs. performance (settled on `flan-t5-small` at ~250MB)
- Understanding the trade-offs: on-device inference time grows roughly linearly with output length, but avoids the network latency and fees of an API
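Lazy, cached model loading boils down to memoizing an async loader. The sketch below keeps the actual `pipeline` call as an injected dependency so it stays library-agnostic; the commented wiring to `@xenova/transformers` is an assumption about how the app connects it, not the exact shipped code.

```typescript
// Memoize an async loader so the expensive work (e.g. downloading the
// ~250MB model) happens at most once, even with concurrent callers:
// every caller shares the same promise.
function createLazyLoader<T>(load: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | null = null;
  return () => {
    if (cached === null) {
      cached = load(); // kick off the load on first call only
    }
    return cached;     // later calls reuse the same in-flight/settled promise
  };
}

// Hypothetical wiring (illustrative, not the app's exact code):
// const getModel = createLazyLoader(() =>
//   pipeline('text2text-generation', 'Xenova/flan-t5-small'));
```

Caching the promise (rather than the resolved value) also deduplicates concurrent first calls: two components asking for the model at the same time trigger only one download.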
2. React State Management Patterns
Managing complex application state across multiple views taught me about:
- Context API for global state (API keys, configuration)
- Component composition and prop drilling strategies
- State machines for view transitions:
$$ \text{State} \in \{\text{CategorySelection}, \text{InterviewSession}, \text{SessionSummary}\} $$
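That state machine can be encoded as a union type with a single transition function. The linear flow below (selection → interview → summary → back to selection) is an assumption inferred from the three views named above:

```typescript
type View = 'CategorySelection' | 'InterviewSession' | 'SessionSummary';

// Linear flow: pick a category, run the interview, review the summary,
// then loop back to category selection for another round.
function nextView(current: View): View {
  switch (current) {
    case 'CategorySelection': return 'InterviewSession';
    case 'InterviewSession':  return 'SessionSummary';
    case 'SessionSummary':    return 'CategorySelection';
  }
}
```

Because `View` is a closed union, the compiler flags any transition the switch forgets to handle.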
3. Accessibility-First Development
Every component was built with ARIA labels, semantic HTML, and keyboard navigation:

```html
<div role="application" aria-label="Interview preparation application">
```
This taught me that accessibility isn't an afterthought—it's a fundamental design principle.
4. TypeScript Type Safety
Working with strict TypeScript types for AI responses, session management, and component props showed me how type safety prevents runtime errors:
```typescript
interface SessionSummaryType {
  sessionId: string;
  category: string;
  responses: Response[];
  overallScore: number;
}
```
5. Testing Strategies
Writing comprehensive tests with React Testing Library and Jest, including:
- Component unit tests
- Integration tests for user flows
- Mocking AI models for deterministic testing
- Property-based testing with fast-check
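With fast-check, an invariant like "any list of per-response scores in [0, 1] averages to an overall score in [0, 100]" would be written with `fc.assert(fc.property(...))`. The sketch below checks the same invariant with a hand-rolled generator so it runs without the library, and `averageScore` is a hypothetical stand-in for the app's scoring helper:

```typescript
// Hypothetical scoring helper: mean of per-response scores, scaled to 0-100.
function averageScore(scores: number[]): number {
  if (scores.length === 0) return 0;
  return (scores.reduce((a, b) => a + b, 0) / scores.length) * 100;
}

// Property: for any non-empty array of scores in [0, 1], the result stays
// within [0, 100]. fast-check would generate these inputs via
// fc.array(fc.double({ min: 0, max: 1 })); here we roll 100 random cases.
for (let run = 0; run < 100; run++) {
  const n = 1 + Math.floor(Math.random() * 10);
  const scores = Array.from({ length: n }, () => Math.random());
  const result = averageScore(scores);
  if (result < 0 || result > 100) {
    throw new Error(`property violated for [${scores}]`);
  }
}
```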
🔨 How I Built It
Architecture Overview
The application follows a component-based architecture with clear separation of concerns:
```
┌─────────────────────────────────────┐
│               App.tsx               │
│          (State Management)         │
└──────────┬──────────────────────────┘
           │
    ┌──────┴──────┬─────────────┐
    │             │             │
┌───▼─────┐  ┌────▼─────┐  ┌────▼─────┐
│Category │  │Interview │  │ Session  │
│Selection│  │ Session  │  │ Summary  │
└─────────┘  └──────────┘  └──────────┘
```
Technology Stack
- Frontend: React 19 with TypeScript
- AI Engine: Transformers.js (@xenova/transformers)
- Model: Xenova/flan-t5-small (text-to-text generation)
- Testing: Jest + React Testing Library + fast-check
- Styling: CSS Modules with responsive design
- Build Tool: Create React App with custom webpack config
Key Components
1. CategorySelection
Allows users to choose interview categories (behavioral, technical, leadership, etc.)
2. InterviewSession
The core component that:
- Manages the interview flow
- Generates questions using the AI model
- Collects user responses
- Provides real-time feedback
- Tracks session progress
3. SessionSummary
Displays comprehensive feedback including:
- Overall performance score: $\text{Score} = \frac{\sum_{i=1}^{n} s_i}{n} \times 100$ where $s_i$ is the score for response $i$
- Individual question assessments
- Strengths and areas for improvement
- Actionable recommendations
AI Integration
The AI pipeline works as follows:
1. Model Loading (first use only):

```typescript
const model = await pipeline('text2text-generation', 'Xenova/flan-t5-small');
```

2. Question Generation:

```typescript
const prompt = `Generate an interview question for ${category} category`;
const question = await model(prompt, { max_length: 100 });
```

3. Response Assessment:

```typescript
const assessmentPrompt = `Evaluate this answer: "${response}"`;
const feedback = await model(assessmentPrompt, { max_length: 200 });
```
Performance Optimization
To ensure smooth user experience, I implemented:
- Lazy loading of the AI model
- Web Workers for non-blocking inference
- IndexedDB caching for model persistence
- Progressive loading states with spinners and feedback
- Debouncing for user input handling
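Debouncing is the simplest of these to show in isolation. A minimal sketch (names are illustrative, not the app's actual helper):

```typescript
// Delay invoking `fn` until `waitMs` ms have passed since the last call,
// so per-keystroke handlers only fire once the user pauses typing.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer); // cancel the pending call
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

Calling the wrapper three times in quick succession results in a single invocation of `fn` after the quiet period.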
The time complexity for generating $n$ questions with $m$ average tokens is approximately:
$$ T(n, m) = O(n \cdot m \cdot d) $$
where $d$ is the model dimension (512 for flan-t5-small).
🚧 Challenges I Faced
Challenge 1: Model Size vs. Performance
Problem: Larger models (like flan-t5-base at 850MB) provided better responses but were too slow and memory-intensive for browsers.
Solution: After testing multiple models, I settled on flan-t5-small (250MB) which offers the best balance. I also implemented:
- Aggressive caching strategies
- Clear loading indicators
- Graceful degradation for low-memory devices
Challenge 2: Prompt Engineering
Problem: Getting consistent, high-quality questions and feedback from the model was difficult. Early attempts produced generic or irrelevant responses.
Solution: Developed a prompt engineering strategy:
- Specific, structured prompts with context
- Temperature and max_length tuning
- Fallback responses for edge cases
- Iterative refinement based on testing
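A structured prompt builder along those lines might look like the sketch below; the exact wording, category list, and fallback are illustrative, not the shipped prompts:

```typescript
// Categories the builder knows how to prompt for (assumed list).
const KNOWN_CATEGORIES = ['behavioral', 'technical', 'leadership'];

function buildQuestionPrompt(category: string): string {
  // Edge case: fall back to a safe default rather than sending an
  // unknown category string to the model.
  const safe = KNOWN_CATEGORIES.includes(category) ? category : 'behavioral';
  // Specific, structured prompt: role context plus format constraints.
  return [
    `You are an experienced interviewer conducting a ${safe} interview.`,
    'Ask exactly one concise interview question.',
    'Do not answer it and do not add commentary.',
  ].join(' ');
}
```

Constraining the output format in the prompt ("exactly one", "no commentary") is what keeps a small model like flan-t5-small from drifting into generic filler.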
Challenge 3: State Management Complexity
Problem: Managing state across multiple views (category selection → interview → summary) while preserving session data became unwieldy.
Solution: Implemented a SessionManager service class that:
- Encapsulates session logic
- Provides a clean API for state transitions
- Handles persistence and recovery
- Separates concerns from UI components
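A minimal sketch of that service class, assuming field names like `score` and `overallScore` (the real class presumably also handles persistence):

```typescript
interface ResponseRecord {
  question: string;
  answer: string;
  score: number; // per-response score in [0, 1]
}

// The UI calls these methods and never mutates session state directly.
class SessionManager {
  private category: string | null = null;
  private responses: ResponseRecord[] = [];

  start(category: string): void {
    this.category = category;
    this.responses = [];
  }

  record(entry: ResponseRecord): void {
    if (this.category === null) throw new Error('session not started');
    this.responses.push(entry);
  }

  // Summary shaped for the SessionSummary view: mean score scaled to 0-100.
  summary(): { category: string; count: number; overallScore: number } {
    if (this.category === null) throw new Error('session not started');
    const total = this.responses.reduce((sum, r) => sum + r.score, 0);
    const overall =
      this.responses.length > 0 ? (total / this.responses.length) * 100 : 0;
    return {
      category: this.category,
      count: this.responses.length,
      overallScore: overall,
    };
  }
}
```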
Challenge 4: Testing AI Components
Problem: How do you test components that depend on AI models without actually loading 250MB models in tests?
Solution: Created comprehensive mocks:
```typescript
jest.mock('@xenova/transformers', () => ({
  pipeline: jest.fn(() => Promise.resolve(mockModel))
}));
```
This allowed fast, deterministic tests while maintaining confidence in the integration.
Challenge 5: Browser Compatibility
Problem: WebAssembly and Web Workers have varying support across browsers, especially older versions.
Solution:
- Feature detection and graceful degradation
- Clear browser compatibility messaging
- Polyfills for critical features
- Comprehensive error handling with user-friendly messages
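Feature detection for the two capabilities the local pipeline needs can be sketched as a single check (function name is illustrative); callers show the compatibility message when it returns false:

```typescript
// Probe globalThis so this compiles and runs outside the browser too
// (e.g. in Node-based tests, where Worker is not a global).
function supportsLocalInference(): boolean {
  const g = globalThis as Record<string, unknown>;
  const hasWasm = typeof g.WebAssembly === 'object' && g.WebAssembly !== null;
  const hasWorkers = typeof g.Worker === 'function';
  return hasWasm && hasWorkers;
}
```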
Challenge 6: Privacy and Security
Problem: Users might be concerned about their interview responses being sent to external servers.
Solution: The entire AI pipeline runs locally in the browser:
- No data leaves the user's device
- No API keys or authentication required
- Clear privacy messaging in the UI
- Open-source code for transparency
🎯 Results and Impact
The final application achieves:
- ✅ 100% local execution - complete privacy
- ✅ Zero cost - no API fees or subscriptions
- ✅ Offline capable - works without internet after initial load
- ✅ Accessible - WCAG 2.1 AA compliant
- ✅ Fast - 2-5 second response times after model caching
- ✅ Comprehensive - covers multiple interview categories
- ✅ Educational - provides actionable feedback
🔮 Future Enhancements
Ideas for future development:
- Voice Input/Output: Add speech recognition and text-to-speech for more realistic practice
- Multi-language Support: Expand beyond English
- Custom Question Banks: Allow users to upload their own questions
- Progress Tracking: Store session history and track improvement over time
- Model Upgrades: Experiment with newer, more efficient models as they become available
- Collaborative Features: Share sessions with mentors or peers for feedback
📚 Technical Appendix
Performance Metrics
Average performance on a modern laptop (16GB RAM, M1 chip):
| Metric | Value |
|---|---|
| Initial model download | 45s |
| Model initialization | 3s |
| Question generation | 2.5s |
| Response assessment | 4s |
| Memory usage | ~400MB |
Mathematical Model
The scoring algorithm uses a weighted average:
$$ \text{FinalScore} = \alpha \cdot \text{ContentScore} + \beta \cdot \text{ClarityScore} + \gamma \cdot \text{RelevanceScore} $$
where $\alpha + \beta + \gamma = 1$ and typically $\alpha = 0.4, \beta = 0.3, \gamma = 0.3$.
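As a worked check, the weighted average is straightforward to implement; the sketch below mirrors the formula's symbols rather than the app's actual code:

```typescript
// FinalScore = alpha * content + beta * clarity + gamma * relevance,
// with default weights (0.4, 0.3, 0.3) summing to 1, as in the text.
function finalScore(
  content: number,   // ContentScore, 0-100
  clarity: number,   // ClarityScore, 0-100
  relevance: number, // RelevanceScore, 0-100
  weights: [number, number, number] = [0.4, 0.3, 0.3]
): number {
  const [alpha, beta, gamma] = weights;
  if (Math.abs(alpha + beta + gamma - 1) > 1e-9) {
    throw new Error('weights must sum to 1');
  }
  return alpha * content + beta * clarity + gamma * relevance;
}
```

With the default weights, a response scoring 100 on content but 0 elsewhere lands at 40, reflecting the heavier weighting of content.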
🙏 Acknowledgments
This project was built with inspiration from:
- The Hugging Face community for making AI accessible
- The Transformers.js team for browser-based AI
- The React community for excellent documentation and tools
- Everyone who believes in democratizing education and opportunity
Built with ❤️
Built With
- css3
- jest
- react
- transformers.js
- typescript