We were driven by the need to address bias in hiring — especially how tone, accent, or hesitations can unfairly affect candidates. Inspired by research from Moritz Hardt and Hoda Heidari, we built a tool that not only detects bias but helps correct it.
What it does
BiasShield is an AI-powered fairness audit tool for interviews. Users upload a voice or video response. The system:
- Transcribes the response using Whisper
- Simulates 3 reviewers with GPT-4:
  - Neutral HR
  - Biased evaluator
  - Fairness auditor
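The persona step above can be sketched roughly as follows. This is a minimal illustration, not BiasShield's actual code: the persona names, instruction wording, and the `build_review_prompt` helper are assumptions, shown only to make the "three prompt-engineered reviewers" idea concrete.

```python
# Hypothetical persona prompts for the three GPT-4 reviewers.
# The wording here is illustrative, not the project's real prompts.
PERSONAS = {
    "neutral_hr": (
        "You are a neutral HR reviewer. Score the candidate's answer "
        "from 1-10 on content alone."
    ),
    "biased_evaluator": (
        "You are a reviewer who (unfairly) penalizes hesitations, "
        "accent markers, and informal tone."
    ),
    "fairness_auditor": (
        "You are a fairness auditor. Score on content and flag any "
        "wording that could invite bias."
    ),
}

def build_review_prompt(persona_key: str, transcript: str) -> list[dict]:
    """Assemble the chat messages for one persona's review of a transcript."""
    return [
        {"role": "system", "content": PERSONAS[persona_key]},
        {
            "role": "user",
            "content": (
                f"Candidate transcript:\n{transcript}\n\n"
                "Reply with a 1-10 score and a short rationale."
            ),
        },
    ]
```

Each of the three message lists would then be sent to the chat completions API, giving three independent reviews of the same transcript to compare.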
How we built it
- Streamlit frontend for fast UI
- Whisper for speech-to-text
- GPT-4 with prompt-engineered personas
- Bias score generated by comparing persona reviews
- (Optional) Emotion analysis using OpenCV + DeepFace
- Entirely built in Python, deployed on Streamlit Cloud
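One way to turn the persona reviews into a bias score is to measure how far the biased evaluator's rating falls below the neutral reviewers' average. This is a sketch under that assumption; the function name and the normalization are illustrative, not BiasShield's actual formula.

```python
def bias_score(persona_scores: dict[str, float]) -> float:
    """Hypothetical bias score from three persona ratings on a 1-10 scale.

    Compares the biased evaluator's rating against the average of the
    neutral HR and fairness-auditor ratings, normalized to [0, 1].
    """
    neutral_avg = (
        persona_scores["neutral_hr"] + persona_scores["fairness_auditor"]
    ) / 2
    # Gap below the neutral consensus; only a lower biased score counts.
    gap = max(0.0, neutral_avg - persona_scores["biased_evaluator"])
    # 9 is the largest possible gap on a 1-10 scale.
    return min(1.0, gap / 9)
```

A score near 0 means the biased persona rated the answer about the same as the neutral reviewers; a score near 1 means superficial features like tone or hesitation swung the rating heavily.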
Challenges we ran into
- API integration
Accomplishments that we're proud of
- Grounding the tool in published fairness research frameworks
What we learned
What's next for BiasShield
Eastern Community
Built With
- numpy
- python
- streamlit
- openai-whisper-api
- openai-gpt-4
- opencv
- deepface
- pandas