Inspiration
Software engineering interviews are really tough. How best to prepare? You can run mock interviews, but that requires two people. We wanted an easy way for speakers to gauge how well they're speaking and to see how they improve over time.
What it does
Provides a thoughtful interview prompt and analyzes the user's speech along four metrics: clarity, hesitations, words per minute, and word spacing. Users can review data from past sessions on their profile page.
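Given word-level timestamps like the ones the speech services return, the per-minute and spacing metrics fall out of simple arithmetic. Here's a minimal sketch, assuming Watson-style `[word, start, end]` triples; the function name, filler-word list, and metric definitions are our own illustration, not the app's actual code:

```javascript
// Sketch: deriving three of the metrics from word timestamps.
// Assumes input shaped like Watson STT word timestamps: [word, startSec, endSec].
const FILLERS = new Set(['um', 'uh', 'er', 'hmm', 'like']);

function analyze(timestamps) {
  if (timestamps.length === 0) return { wpm: 0, avgGap: 0, hesitations: 0 };
  // Total speaking time, from the start of the first word to the end of the last.
  const durationMin =
    (timestamps[timestamps.length - 1][2] - timestamps[0][1]) / 60;
  let gapTotal = 0;
  let hesitations = 0;
  for (let i = 0; i < timestamps.length; i++) {
    const [word, start] = timestamps[i];
    if (FILLERS.has(word.toLowerCase())) hesitations++;
    // Word spacing: silence between the previous word's end and this word's start.
    if (i > 0) gapTotal += start - timestamps[i - 1][2];
  }
  return {
    wpm: timestamps.length / durationMin,
    avgGap: gapTotal / Math.max(1, timestamps.length - 1),
    hesitations,
  };
}
```

In this framing, "hesitations" is just a filler-word count; a real heuristic would also weigh long gaps, which is part of why finding a fair one was hard.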
How we built it
An Express backend with a MongoDB database, interfacing with the Nuance ASR API and IBM Watson Speech to Text.
Challenges we ran into
- Determining a heuristic for judging speech
- Working with the speech services through web sockets
- Figuring out what we could do with the data the services returned
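The web-socket pain mostly came down to the services wanting a steady stream of fixed-size audio frames while the recorder hands over chunks of arbitrary size. A sketch of the kind of buffering involved (the class name and frame size are illustrative, not from our codebase):

```javascript
// Sketch: accumulate arbitrary-size audio chunks and emit fixed-size frames,
// as a streaming speech-to-text websocket typically expects.
class FrameBuffer {
  constructor(frameBytes, onFrame) {
    this.frameBytes = frameBytes;
    this.onFrame = onFrame; // called with each complete frame, e.g. ws.send
    this.pending = Buffer.alloc(0);
  }

  push(chunk) {
    this.pending = Buffer.concat([this.pending, chunk]);
    // Emit as many full frames as we have buffered.
    while (this.pending.length >= this.frameBytes) {
      this.onFrame(this.pending.subarray(0, this.frameBytes));
      this.pending = this.pending.subarray(this.frameBytes);
    }
  }

  // Send any trailing partial frame when the recording stops.
  flush() {
    if (this.pending.length > 0) this.onFrame(this.pending);
    this.pending = Buffer.alloc(0);
  }
}
```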
Accomplishments that we're proud of
A clean, responsive UI!
What we learned
Speech analysis is really difficult, and the technologies that explore this area are still in their early stages. Knowing what content to expect makes the problem easier, and it's cool to see Nuance exploring the possibilities of machine learning through Bolt.
What's next for rehearse
We want to take advantage of Nuance's contextual models by implementing rehearse modes for different use cases. For example, a technical interview mode that identifies algorithm names and gives reactive hints and comments on a practice question.