Inspiration

As tech students, we understand how stressful interviews can be. They are a crucial step in landing a dream position, and yet stress only makes matters worse.

If only you could have prepared beforehand, with a personal, always-available coach. Interview performance is often the deciding factor in a hiring decision, so we decided to tackle this problem head on, with Google Cardboard VR and natural language processing.

Interviewing skills are an important asset to have, and we want to make building them a little easier for everyone. We considered how we could help solve this problem, and we believe we have found a way.

What it does

InterVR combines modern VR technology with natural language processing to simulate a realistic interview. By merging the accessible Google Cardboard VR platform with the Nuance Mix language recognition API and our own context-aware analysis engine, we created an interviewer that not only asks relevant questions but also responds to what was said.
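To give a flavor of what "context-aware" means here, below is a deliberately simplified sketch of how a follow-up question could be picked from the candidate's last answer. The keyword table and function names are illustrative only, not our actual agent logic:

```javascript
// Hypothetical example: pick a follow-up question by matching keywords
// in the candidate's last answer. The real agent is considerably smarter.
const followUps = [
  { keywords: ["team", "colleague"],   question: "Tell me about a conflict you resolved in a team." },
  { keywords: ["project", "built"],    question: "What was the hardest technical decision on that project?" },
  { keywords: ["weakness", "improve"], question: "How are you working on that weakness today?" },
];

function chooseFollowUp(answer) {
  const text = answer.toLowerCase();
  const match = followUps.find(f => f.keywords.some(k => text.includes(k)));
  return match ? match.question : "Interesting. Can you tell me more about that?";
}

// "team" matches the first entry, so the interviewer digs into teamwork.
console.log(chooseFollowUp("I built a project with my team last summer."));
```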

Beyond the interview itself, InterVR also gives you feedback on how you did and what you could improve on.

InterVR offers several environments, designed to challenge and engage every interviewee, from beginner to pro.

We believe InterVR gives anyone the tools to become an expert interviewee and a leg up in the competition for a job.

How we built it

The VR portion of our project is built with Unity and runs on an Android phone paired with a Google Cardboard headset.

The natural language processing part of the project is built in JavaScript. It uses Nuance Mix speech recognition to understand what you say, then passes the transcript to our own conversational agent to generate a relevant response. Sentiment analysis allows the interviewer to react to your mood.
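As a rough illustration of the sentiment step, here is a toy word-list scorer standing in for the sentimood library we actually use; the word scores and replies below are made up for the example:

```javascript
// Toy stand-in for sentiment analysis (the project uses sentimood;
// this simplified word-list scorer is for illustration only).
const wordScores = { great: 3, love: 3, confident: 2, nervous: -2, hate: -3, terrible: -3 };

function moodOf(transcript) {
  const score = transcript
    .toLowerCase()
    .split(/\W+/)
    .reduce((sum, word) => sum + (wordScores[word] || 0), 0);
  return score > 0 ? "positive" : score < 0 ? "negative" : "neutral";
}

// The interviewer's next line can then be softened or sharpened to match the mood.
function reactToMood(mood) {
  if (mood === "negative") return "Take your time, there's no rush.";
  if (mood === "positive") return "Great, let's dig a bit deeper.";
  return "Alright, let's move on.";
}

console.log(reactToMood(moodOf("I was nervous at first, but I love this kind of work.")));
// -> "Great, let's dig a bit deeper."
```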

By combining Nuance's language recognition framework with 3D VR rendering in Unity, we believe we have created a compelling and immersive environment for reaching perfection through practice.

Challenges we ran into

Setting up the Unity platform with all the SDKs needed for Cardboard VR and Android was definitely a challenge given the WiFi speeds at the hackathon. Had we known beforehand that we would be developing in Unity, we would have come better prepared. However, we made up for this bump in the road by learning the system quickly.

We also had some trouble getting started with speech recognition, running into cross-browser compatibility issues. We therefore chose to target Firefox exclusively for the NLP portion of the project.
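As a minimal sketch of the kind of guard involved (assuming the incompatibility was around microphone capture feeding the recognizer, which is not spelled out above), a feature check on the audio path might look like this:

```javascript
// Minimal feature check for microphone capture (assumption: this is where the
// cross-browser differences bite). navigator.mediaDevices.getUserMedia is the
// standard API; browsers without it simply can't feed audio to the recognizer.
async function getMicrophoneStream() {
  if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
    throw new Error("Microphone capture not supported here; please use Firefox.");
  }
  // Prompts the user for microphone access and resolves to an audio MediaStream.
  return navigator.mediaDevices.getUserMedia({ audio: true });
}
```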

Accomplishments that we're proud of

By combining beautiful, realistic, yet free assets from the Unity Asset Store, we were able to create stunning environments in Unity. This proved to be quite a challenge at the beginning, while we were still figuring out how to use the software.

We are also very proud of our fully client-side natural language processing and analysis suite, which we put together using the Nuance API and several open-source libraries such as sentimood and speech.js, along with our own special sauce.

What we learned

Over the course of the hackathon, we all significantly expanded our skill sets. Our Unity team learned the platform from scratch and picked up valuable skills in developing applications for virtual reality. Our desktop team learned how to combine Nuance's automatic speech recognition with speech output to build conversational agents, and how to use semantic search to provide context-aware answers.

What's next for InterVR

We see InterVR expanding in multiple directions. This begins with providing more, and more engaging, environments to suit and challenge every individual. We also want to expand the vocabulary of John, the interviewer, so that he can respond more accurately to new situations. And we would love to offer a female interviewer as well.
