Inspiration

Bias detection is a major challenge in the current landscape of large language models (LLMs). Significant legislation is in the pipeline within the EU: companies may face fines of €30 million or more if they fail to comply with usage and reporting requirements for AI systems, and recruiting software is classified as a high-risk AI application. We therefore wanted to build a system that reduces these risks and makes bias in the software transparent.

What it does

Bias Buster is a chatbot interface for HR recruiters that highlights the cumulative amount of bias present in the conversation. To achieve this, we:

  1. Provide the chatbot with a list of job candidates, each with unique criteria.
  2. Have the chatbot output a ranking of the candidates, along with the reasons for that ranking.
  3. Identify potential bias patterns in the outputs.
  4. Display the results of the bias analysis to the user on the front end, including the set of keywords that contributed to the bias.
  5. Update the bias measure as the user continues interacting with the chatbot, taking the conversation context into account.
  6. Let the HR recruiter transparently see whether the software's recommendations for job applicants are biased.
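The cumulative tracking in steps 3 to 5 can be sketched as follows. This is a minimal illustration, not our actual implementation: `BIASED_KEYWORDS`, `score_turn`, and `ConversationBiasTracker` are hypothetical names, the keyword scores are made up, and the real system scores words with Word2Vec embeddings rather than a fixed keyword list.

```python
# Hypothetical per-keyword bias weights (illustration only; the real
# system derives scores from Word2Vec embeddings, not a lookup table).
BIASED_KEYWORDS = {"aggressive": 0.8, "motherly": 0.9, "young": 0.6}

def score_turn(text: str):
    """Return (turn_score, matched_keywords) for one chatbot output."""
    words = text.lower().split()
    hits = {w: BIASED_KEYWORDS[w] for w in words if w in BIASED_KEYWORDS}
    return sum(hits.values()), hits

class ConversationBiasTracker:
    """Accumulates bias evidence across the whole conversation (step 5)."""

    def __init__(self):
        self.total = 0.0
        self.keywords = {}  # keywords surfaced on the front end (step 4)

    def update(self, chatbot_output: str) -> float:
        """Score one chatbot turn and fold it into the running total."""
        score, hits = score_turn(chatbot_output)
        self.total += score
        self.keywords.update(hits)
        return self.total
```

Each chatbot turn is scored in isolation and then folded into a running total, so the recruiter sees how bias accumulates over the conversation rather than a single snapshot.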

How we built it

  1. We used Google's pretrained Word2Vec embeddings to identify potential bias patterns in the outputs.
  2. We used Streamlit for the front-end chatbot interface.
  3. We used the OpenAI GPT-3.5 Turbo model for the chatbot outputs.
  4. We crafted prompts that make GPT-3.5 explain the reasoning behind its answers.
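A minimal sketch of the embedding-based bias check in item 1, assuming the common approach of projecting word vectors onto a gender direction (`he` minus `she`) and flagging words with a large projection. The 4-dimensional vectors below are made-up toy values for illustration; a real system would load the 300-dimensional Google News Word2Vec model instead.

```python
import numpy as np

# Toy 4-d embeddings standing in for the 300-d Google News Word2Vec
# vectors (values invented for illustration).
EMBED = {
    "he":        np.array([ 1.0, 0.1, 0.0, 0.2]),
    "she":       np.array([-1.0, 0.1, 0.0, 0.2]),
    "assertive": np.array([ 0.7, 0.5, 0.1, 0.0]),
    "nurturing": np.array([-0.8, 0.4, 0.2, 0.1]),
    "qualified": np.array([ 0.0, 0.9, 0.3, 0.1]),
}

def gender_bias_score(word: str) -> float:
    """Cosine similarity between a word vector and the he-she direction.
    Scores near 0 are neutral; large |score| suggests gendered loading."""
    direction = EMBED["he"] - EMBED["she"]
    v = EMBED[word]
    return float(np.dot(v, direction) /
                 (np.linalg.norm(v) * np.linalg.norm(direction)))

def flag_biased_words(words, threshold=0.3):
    """Return (word, score) pairs whose |score| exceeds the threshold."""
    scored = [(w, gender_bias_score(w)) for w in words if w in EMBED]
    return [(w, s) for w, s in scored if abs(s) > threshold]
```

Because this only needs vector arithmetic over pretrained embeddings, it sidesteps calling a language model to classify bias, which was one of our constraints.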

Challenges we ran into

  1. Measuring bias without using a language model for bias classification.
  2. Deciding which biases to detect.

Accomplishments that we're proud of

  1. Rapid front-end development.
  2. Swift business case identification.
  3. Completing the entire proof of concept (PoC) in just 40 hours.

What we learned

  1. Effective task delegation based on each team member's key competencies.
  2. Efficient communication of technical requirements to non-technical users.
  3. Understanding how bias in LLMs works and the various methods to debias LLMs.

What's next for Bias Buster

  1. Transitioning the PoC into a user acceptance test (UAT) with recruiters at different companies.
  2. Implementing an improved bias detection method.
  3. Enhancing the user interface to meet user requirements.
