Inspiration

In recent years, a lot of effort has gone into reducing bias in interviews and the hiring process. However, it is hard to eliminate unconscious bias entirely, as it is hard-wired into the human brain. For example, white job applicants were found to have a higher chance of success (74%) than ethnic minority applicants with identical CVs. In the startup world, female founders were found to receive less funding than their male counterparts, partly because venture capitalists tended to ask women about the potential for losses and men about the potential for gains.

Some companies have tried to reduce this bias and improve workplace diversity through blind hiring: applicants remove their name, address, college name, and graduation date from their resumes, and each application is identified by a number instead. We would like to take this a step further and take unconscious bias out of interviews themselves by creating a blind interview plugin for video conferencing tools.

What it does

The blind interview plugin lets interviewers change applicants' voices, with the same voice change applied to every applicant. Ideally, in the future, it will also replace every applicant's video image with the same avatar.

Features:

  • Users can log in or sign up with their email and password, or with a Google or Microsoft account.

  • Users can view the status of the linked video conferencing tools in the extension tab.

  • Users can turn voice change on or off (see the sketch below).
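As a rough illustration of how that toggle can reach the conferencing page in a WebExtension, here is a minimal sketch; the message type, storage key, and the enableVoiceProcessing/disableVoiceProcessing helpers are illustrative assumptions, not our exact code.

```javascript
// popup.js: persist the setting and notify the active tab.
async function setVoiceChange(enabled) {
  await browser.storage.local.set({ voiceChangeEnabled: enabled });
  const [tab] = await browser.tabs.query({ active: true, currentWindow: true });
  await browser.tabs.sendMessage(tab.id, { type: "optalk-toggle", enabled });
}

// content-script.js: apply or bypass the audio processing on demand.
// enableVoiceProcessing/disableVoiceProcessing are hypothetical helpers.
browser.runtime.onMessage.addListener((msg) => {
  if (msg.type === "optalk-toggle") {
    msg.enabled ? enableVoiceProcessing() : disableVoiceProcessing();
  }
});
```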

How we built it

Low-fidelity wireframes using Balsamiq

  • During the brainstorming phase, we laid out the features we wanted and experimented with the design, playing around with what fits on which page.

High-fidelity wireframes using Figma

  • After finalizing the features and pages we wanted, our designer used Figma to plan out what our application would look like.

Front-end development using React, Redux, Material-UI, and CSS

  • The front-end developers went in two directions. One researched which APIs and technologies were best suited to our purpose (removing bias by translating every interviewee's voice into the same human or robotic voice). The other built the front-end UI, logic, and CSS; a sketch of the settings state follows.
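Here is a minimal sketch of that settings state in plain Redux; the action names and field names are assumptions rather than our exact store shape.

```javascript
// Settings the extension tracks: whether voice change is active,
// and which disguise voice to apply.
const TOGGLE_VOICE = "settings/toggleVoiceChange";
const SET_VOICE_TYPE = "settings/setVoiceType";

const initialState = { voiceChangeEnabled: false, voiceType: "robot" };

function settingsReducer(state = initialState, action) {
  switch (action.type) {
    case TOGGLE_VOICE:
      return { ...state, voiceChangeEnabled: !state.voiceChangeEnabled };
    case SET_VOICE_TYPE: // payload is "human" or "robot"
      return { ...state, voiceType: action.payload };
    default:
      return state;
  }
}

// Usage: store.dispatch({ type: TOGGLE_VOICE });
```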

Back-end development using Node.js, Express, and MongoDB

  • The back-end developer created the MongoDB database on Google Cloud to store each user's login information and profile, which includes user options such as the human-or-robot voice preference and the video platforms the extension is active on. The back-end developer also wrote the front-end logic for fetching from the backend APIs. A sketch of the profile API appears below.
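Here is a minimal sketch of what that profile storage can look like with Express and Mongoose; the field names and route are illustrative assumptions, not our exact schema.

```javascript
const express = require("express");
const mongoose = require("mongoose");

// Illustrative user profile: login info plus extension options.
const User = mongoose.model("User", new mongoose.Schema({
  email: { type: String, required: true, unique: true },
  passwordHash: String, // never store plain-text passwords
  voicePreference: { type: String, enum: ["human", "robot"], default: "robot" },
  activePlatforms: [String], // e.g. ["zoom", "meet"]
}));

const app = express();
app.use(express.json());

// Fetch a user's saved options for the extension popup.
app.get("/api/profile/:email", async (req, res) => {
  const user = await User.findOne({ email: req.params.email });
  user ? res.json(user) : res.sendStatus(404);
});

mongoose.connect(process.env.MONGODB_URI) // the GCP-hosted database
  .then(() => app.listen(3000));
```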

Voice change using Spoken NPM

  • We considered a number of options before settling on one, including the Google Voice API, IBM Watson, the Web Audio API, AWS Transcribe/Polly, and Azure. A sketch of the flavor of processing involved follows.
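To give a feel for the real-time voice disguise involved, here is a minimal ring-modulation ("robot voice") sketch using the Web Audio API, one of the options we evaluated; it is illustrative only, not the Spoken-based pipeline we actually use.

```javascript
async function startRobotVoice() {
  const ctx = new AudioContext();
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const source = ctx.createMediaStreamSource(stream);

  // Multiply the mic signal by a 30 Hz sine wave (ring modulation),
  // which masks the speaker's natural timbre with a robotic buzz.
  const modulated = ctx.createGain();
  modulated.gain.value = 0; // base gain 0; the oscillator drives it
  const carrier = ctx.createOscillator();
  carrier.frequency.value = 30;
  carrier.connect(modulated.gain);
  carrier.start();

  source.connect(modulated);

  // Expose the disguised audio as a MediaStream for the conferencing tab.
  const output = ctx.createMediaStreamDestination();
  modulated.connect(output);
  return output.stream;
}
```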

Google Cloud Platform

  • We took advantage of GCP to store users' credentials and profiles. That way, anyone who runs our code has access to the same database.

Challenges we ran into

Time zones were a challenge: the team spanned three different time zones, so not all developers were awake at the same time. However, we were able to overcome this through effective communication, planning, and WhatsApp.

Accomplishments that we're proud of

We completed the whole UI design (including stretch goals) and developed and published the extension as a Firefox add-on. We were also able to overcome the challenge of time zone differences.

What we learned

We learned how to use React for front-end development and Tinkercad and 3D Slash for creating 3D objects.

We also learned a great deal about Google Chrome extensions, Firefox add-ons, and voice APIs, and increased our proficiency in React, Redux, and Node.js.

One of our developers even found a custom Mew Pokémon cursor.

What's next for Optalk

New Features

Video change

Users can select their preferred avatar (human model or robot) in the settings.

Recording

Users can record the screen with and without changes applied.

Avatar

Allow avatars to reflect the interviewee's emotions and facial expressions via facial-feature tracking and full motion capture.
