Inspiration
Bias detection is a major challenge in the current landscape of large language models (LLMs). Significant legislation is in the pipeline in the EU: under the proposed AI Act, companies may face fines of €30 million or more if they do not comply with the usage and reporting requirements for AI systems, and recruiting software is explicitly classified as a high-risk AI application. We therefore wanted to build a system that reduces these risks and makes bias in such software transparent.
What it does
Bias Buster is a chatbot interface for HR recruiters that highlights the cumulative amount of bias present in the conversation. To achieve this, we:
- Provide the chatbot with a list of job candidates, each with unique criteria.
- Have the chatbot output a ranking of the candidates, including the reasons behind each placement.
- Identify potential bias patterns in the chatbot's outputs.
- Display the results of the bias analysis to the user on the front end, including the set of keywords that contributed to the bias.
- Update the conversation's cumulative bias as the user keeps interacting with the chatbot, taking the context into account.
- Let the HR recruiter see transparently whether the software's recommendations for job applicants are biased.
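The cumulative-bias step above can be sketched as a small tracker that accumulates flagged keywords turn by turn. This is a hypothetical illustration: `flag_keywords`, `BIASED_TERMS`, and the count-based `bias_score` are stand-ins for the real embedding-based detector, not the actual Bias Buster code.

```python
# Hypothetical sketch of the cumulative bias tracking described above.
# BIASED_TERMS and flag_keywords stand in for the real embedding-based detector.

BIASED_TERMS = {"aggressive", "young", "native"}  # illustrative keyword list

def flag_keywords(message: str) -> set[str]:
    """Return biased keywords found in one chatbot output (toy matcher)."""
    return {w.strip(".,").lower() for w in message.split()} & BIASED_TERMS

class BiasTracker:
    """Accumulates flagged keywords across the whole conversation."""

    def __init__(self) -> None:
        self.flagged: list[set[str]] = []

    def update(self, chatbot_output: str) -> dict:
        """Record one chatbot turn and return the running bias summary."""
        self.flagged.append(flag_keywords(chatbot_output))
        all_terms = set().union(*self.flagged)
        return {
            "turn_count": len(self.flagged),
            "cumulative_keywords": sorted(all_terms),
            "bias_score": len(all_terms),  # simple running count as a score
        }

tracker = BiasTracker()
tracker.update("Candidate A seems young and energetic.")
state = tracker.update("Candidate B is a native speaker and aggressive negotiator.")
# state now holds the cumulative keywords across both turns
```

In the front end, `state` would be rendered next to the chat so the recruiter sees the score grow (or hold steady) with every exchange.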
How we built it
- We used Google Word2Vec embeddings to identify potential bias patterns in the outputs.
- We used Streamlit for the front-end chatbot interface.
- We utilized the OpenAI GPT-3.5 Turbo model for the chatbot outputs.
- We employed effective prompts to ensure GPT-3.5 explains the reasoning behind its answers.
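The Word2Vec step can be illustrated by projecting output keywords onto a bias direction in embedding space (e.g., a he-she axis) and flagging words with a large projection. The toy 3-dimensional vectors and the 0.3 threshold below are assumptions for illustration; the real system uses the pretrained Google News Word2Vec model.

```python
import numpy as np

# Toy vectors standing in for pretrained Google News Word2Vec embeddings.
EMBEDDINGS = {
    "he":        np.array([ 1.0, 0.2, 0.1]),
    "she":       np.array([-1.0, 0.2, 0.1]),
    "assertive": np.array([ 0.6, 0.5, 0.2]),
    "nurturing": np.array([-0.7, 0.4, 0.3]),
    "engineer":  np.array([ 0.1, 0.9, 0.1]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_bias_scores(keywords: list[str]) -> dict[str, float]:
    """Project each keyword onto the he-she axis; larger |score| = more gendered."""
    axis = EMBEDDINGS["he"] - EMBEDDINGS["she"]
    return {w: cosine(EMBEDDINGS[w], axis) for w in keywords if w in EMBEDDINGS}

scores = gender_bias_scores(["assertive", "nurturing", "engineer"])
# Flag words whose projection exceeds an (assumed) threshold of 0.3.
flagged = {w: s for w, s in scores.items() if abs(s) > 0.3}
```

Here "assertive" and "nurturing" project strongly onto the gender axis and get flagged, while "engineer" sits near zero; the flagged words are the keywords surfaced to the recruiter on the front end.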
Challenges we ran into
- Measuring bias without using a language model for bias classification.
- Deciding on which bias to detect.
Accomplishments that we're proud of
- Rapid front-end development.
- Swift business case identification.
- Completing the entire proof of concept (PoC) in just 40 hours.
What we learned
- Effective task delegation based on each team member's key competencies.
- Efficient communication of technical requirements to non-technical users.
- Understanding how bias in LLMs works and the various methods to debias LLMs.
What's next for Bias Buster
- Transitioning the PoC into a user acceptance test (UAT) with recruiters at different companies.
- Implementing an improved bias detection method.
- Enhancing the user interface to meet user requirements.
Built With
- chatgpt
- openai
- python
- streamlit
- visual-studio