Home screen where users select a question to answer.
Recording screen shown while the user is speaking.
Results page listing the detected topics, counts for each filler word (in this case 'like', 'so', and 'yeah'), and a short suggestion.
Inspiration
As a junior chasing the dream of interviewing at and working for Google, I thought not only about technical interviews, but about every interview that happens every day. How do you prepare for them? I look around at people in Detroit who are actively hunting for positions every day. More importantly, how do you prepare when you don't have resources like mock interviews? So I thought: why not run your own mock interview and have a computer analyze it for you?
What it does
InnerView records your answers to common interview questions. After each answer it gives you a summary of how you did: specifically, how positive or negative your sentiment was and how many filler words you used. On top of that, it lists the topics in your answer. Were you consistent on a topic, or jumping around?
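The filler-word tally can be sketched as a simple word count over the transcribed answer. This is a minimal illustration, not InnerView's actual code; the class and method names are my own:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: count how often each tracked filler word
// ("like", "so", "yeah") appears in a transcribed answer.
public class FillerCounter {
    static final List<String> FILLERS = List.of("like", "so", "yeah");

    public static Map<String, Integer> countFillers(String transcript) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String f : FILLERS) counts.put(f, 0);
        // Split on anything that isn't a letter or apostrophe,
        // so punctuation doesn't hide a filler word.
        for (String word : transcript.toLowerCase().split("[^a-z']+")) {
            if (counts.containsKey(word)) counts.merge(word, 1, Integer::sum);
        }
        return counts;
    }
}
```

In the app, these counts would feed the results page alongside the topic list.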
How I built it
I used Microsoft's amazing Cognitive Services; so much powerful machine learning can be accessed through simple API calls. On top of those calls, I used Android as the platform to integrate everything, which also gives me access to device resources such as the microphone and camera.
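As a rough sketch of what one of those API calls looks like, here is how an answer could be posted to the Cognitive Services Text Analytics sentiment endpoint. The endpoint path and JSON shape follow the public v2.1 API; the region and subscription key are placeholders, and InnerView's real request code may differ:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hedged sketch of a sentiment request to Cognitive Services.
public class SentimentRequest {

    // Build the JSON body the sentiment endpoint expects:
    // {"documents":[{"id":"1","language":"en","text":"..."}]}
    public static String buildPayload(String answerText) {
        String escaped = answerText.replace("\\", "\\\\").replace("\"", "\\\"");
        return "{\"documents\":[{\"id\":\"1\",\"language\":\"en\",\"text\":\""
                + escaped + "\"}]}";
    }

    // POST the payload and return the HTTP status code.
    // region and key are assumptions, not real values.
    public static int send(String region, String key, String payload) throws Exception {
        URL url = new URL("https://" + region
                + ".api.cognitive.microsoft.com/text/analytics/v2.1/sentiment");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", key);
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode();
    }
}
```

On Android, a network call like this has to run off the main UI thread.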
Challenges I ran into
The API calls were tricky to configure. Android also had compatibility issues between the different libraries: I had to keep changing the targetSdkVersion, which in turn changed which UI resources were available. The mess kept compounding.
Accomplishments that I'm proud of
Finding something I never expected to be so interested in. I barely slept because I was always so excited to get the next feature in. Also, shipping an app is always a great accomplishment in my book.
What I learned
I learned a lot about how API calls work and how to call them efficiently so they share mobile resources well. I also learned a lot about permissions, and how many of them changed in newer Android versions. Finally, I learned a lot about Microsoft as a company; I never realized how much software they've developed beyond the famous Windows we see every day.
What's next for InnerView
Currently InnerView analyzes only speech, but it could expand into the visual realm, analyzing body language and facial emotion alongside your words. Microsoft offers APIs for these as well, so bringing them together keeps development continuous and easy within one system. InnerView could also provide more suggestions for improvement, scored against additional criteria.