Inspiration
Inspired by LeetCode, which helps users prepare for technical interviews, we created LeetSpeak to address the gap in support for behavioral interviews. Many candidates, especially those from underserved school systems, struggle with behavioral interviews and cannot get timely, precise feedback on how to improve, making it difficult to land their dream jobs even when they are technically strong. We wanted to build a tool that makes education about the interview process more accessible and provides structured, interactive, and insightful practice, helping users gain confidence and improve their performance. Ultimately, this levels the playing field, giving everyone access to the resources they need to impress in interviews.
What it does
LeetSpeak is a web-based platform that helps users practice behavioral interview questions. It offers 100 categorized common behavioral interview questions to choose from and lets users record their responses. The platform gives instant feedback on STAR method usage (framing answers in terms of Situation, Task, Action, and Result), confidence, and filler words. LeetSpeak scores each response, allowing users to track their improvement over time, and provides written feedback addressing their strengths and weaknesses.
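The filler-word metric is one of the simpler signals to compute from a transcript. As a rough illustration only (not LeetSpeak's actual implementation), a word-boundary-aware counter might look like this, with the filler list and function name purely hypothetical:

```javascript
// Hypothetical sketch: count filler words/phrases in a transcript.
const FILLERS = ["um", "uh", "like", "you know", "basically", "actually"];

function countFillers(transcript) {
  const text = transcript.toLowerCase();
  let total = 0;
  const counts = {};
  for (const filler of FILLERS) {
    // Match whole words/phrases only, so "alike" doesn't count as "like".
    const re = new RegExp(`\\b${filler.replace(" ", "\\s+")}\\b`, "g");
    const matches = text.match(re);
    const n = matches ? matches.length : 0;
    if (n > 0) counts[filler] = n;
    total += n;
  }
  return { total, counts };
}
```

A per-minute rate (total fillers divided by recording length) would make scores comparable across answers of different lengths.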
How we built it
We built LeetSpeak with JavaScript (Node.js + Express) for the backend and plain HTML/CSS/JavaScript for the frontend, with a Python Flask server handling the ElevenLabs voice integration. The user records their answer via the browser's MediaRecorder API; the recording is transcribed by Google Cloud Speech-to-Text, then analyzed by Featherless (Llama 3.1 70B), which returns a full STAR breakdown, a confidence score, a filler-word count, and a rewritten, improved answer. That improved answer is sent to ElevenLabs via Flask to be spoken back to the user. All session data is stored in Cloud Firestore, with Firebase Auth tracking progress per user, and a user can also upload their resume for personalized speaking-point tips, parsed with pdfreader and analyzed by Featherless.
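To make the analysis step concrete, here is a hedged sketch of how an LLM-based STAR evaluation can be structured: build a prompt asking the model to reply with a JSON scorecard, then parse that reply into scores. The prompt wording, JSON shape, and function names are illustrative assumptions, not LeetSpeak's actual code:

```javascript
// Hypothetical sketch: ask the model for a STAR scorecard as JSON.
function buildStarPrompt(question, transcript) {
  return [
    "You are a behavioral-interview coach. Evaluate the answer below",
    "using the STAR method. Respond ONLY with JSON of the shape:",
    '{"situation": 0-10, "task": 0-10, "action": 0-10, "result": 0-10,',
    ' "confidence": 0-10, "fillerWords": <count>, "improvedAnswer": "..."}',
    "",
    `Question: ${question}`,
    `Answer: ${transcript}`,
  ].join("\n");
}

// Parse the model's reply and derive an overall score.
function parseStarResponse(raw) {
  // Models sometimes wrap JSON in markdown fences; strip them first.
  const cleaned = raw.replace(/`{3}(?:json)?/g, "").trim();
  const data = JSON.parse(cleaned);
  const scores = ["situation", "task", "action", "result"].map((k) => data[k]);
  const overall = scores.reduce((a, b) => a + b, 0) / scores.length;
  return { ...data, overall };
}
```

Forcing a strict JSON shape in the prompt keeps the parsing step simple and makes per-component (S/T/A/R) feedback easy to render in the UI.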
Challenges we ran into
We ran into several technical challenges throughout the hackathon. Managing a shared GitHub repository across four people led to frequent merge conflicts, particularly on package.json and package-lock.json, which required careful rebasing and coordination to resolve without overwriting each other's work. We also ran into limitations with Google Cloud Speech-to-Text around audio length — longer answers would hit the API's time limit, which we had to troubleshoot by routing audio through Google Cloud Storage to handle longer recordings properly. Finally, getting fast, reliable responses from Featherless was a challenge — the 70B model produced the best quality STAR scores and improved scripts but was too slow for a real-time demo experience, so we had to experiment with model sizes, reduce max token limits, and optimize our prompt structure to get response times down to an acceptable level.
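The Speech-to-Text workaround above hinges on one rule: Google's synchronous recognize endpoint caps audio at roughly one minute, while longer clips must be uploaded to Cloud Storage and transcribed with the asynchronous long-running endpoint. A minimal sketch of that routing decision (function and field names are our own, not Google's API):

```javascript
// Hypothetical sketch of the long-audio routing decision. Google Cloud
// Speech-to-Text's synchronous recognize() handles only ~60s of audio;
// longer recordings need a GCS upload plus longRunningRecognize().
const SYNC_LIMIT_SECONDS = 60; // approximate synchronous cap

function chooseTranscriptionRoute(durationSeconds) {
  return durationSeconds <= SYNC_LIMIT_SECONDS
    ? { method: "recognize", needsGcsUpload: false }
    : { method: "longRunningRecognize", needsGcsUpload: true };
}
```

Branching on recorded duration before calling the API avoids a round trip that would fail anyway for long answers.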
Accomplishments that we're proud of
We're proud of building a fully working end-to-end pipeline in under 24 hours, from mic recording and AI analysis all the way to voice playback. The STAR scoring system gives meaningful, specific feedback on each component of the user's answer rather than generic tips, and the resume feature personalizes the improved answer to the user's actual experience by referencing their real projects and accomplishments. Our UI/UX is clean and intuitive, inspired by LeetCode but diverging enough to feel like a natural sister app for behavioral interview prep. Completed questions show a persistent scorecard with a full S/T/A/R breakdown, giving users a clear picture of their progress over time. Most importantly, the ElevenLabs voice playback lets users actually hear a polished version of their own answer spoken back to them, something few interview prep tools offer, so they can internalize not just what to say, but how to say it.
What we learned
Through building LeetSpeak, we learned how to effectively use APIs and sponsor-provided tools, which were essential for implementing features like text-to-speech and audio processing. We also gained experience working with full-stack development, integrating frontend and backend components, and handling real-time user input. Additionally, we learned how to process and analyze audio data to extract meaningful insights such as confidence and filler word usage. This project strengthened our skills in debugging, collaboration, and time management in a fast-paced hackathon environment, as well as designing a user-friendly interface that creates a realistic and engaging interview experience.
What's next for LeetSpeak
With more time, we would deepen the resume analysis, giving users feedback on how to incorporate specific experiences or skills into their answers. We would also expand the platform with more questions and categories, include AI-generated sample answers for reference, create a mobile-friendly interface, and improve the scoring system to provide more nuanced and detailed analytics.
Built With
- css
- dotenv
- eleven-lab
- express.js
- featherless
- firebase
- firestore
- git
- github
- google-cloud
- html
- javascript
- mediarecorder
- multer
- node.js
- pdfreader