Inspiration
As AI increasingly shapes decisions in education, media, and justice, we were inspired to create a tool that exposes hidden biases and gives users agency to challenge machine-generated narratives.
What it does
Unmask AI is a live bias-detection lab for LLMs. It lets users submit prompts, detect bias in responses, cross-examine the AI, reframe outputs from different perspectives, and override with human reasoning—all in one interactive session.
How we built it
We used FastAPI for the backend, PostgreSQL for structured storage, and OpenAI GPT-4.1-nano for core LLM interactions. Streamlit powers the UI, enabling a modular, step-by-step interface. WeasyPrint generates detailed PDF reports of each session.
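As a minimal sketch of how the detection step might assemble a request for the model (the function name, prompt wording, and line format here are illustrative assumptions, not the project's actual code):

```python
# Hypothetical sketch: assembling a bias-audit request for the chat API.
# The system prompt, message layout, and output format are assumptions.

SYSTEM_PROMPT = (
    "You are a bias auditor. Given a user's prompt and an AI response, "
    "list any biases you detect, one per line, as 'category | severity | rationale'."
)

def build_bias_audit_messages(user_prompt: str, ai_response: str) -> list[dict]:
    """Assemble chat messages asking the model to audit a prior response."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": (
                f"Original prompt:\n{user_prompt}\n\n"
                f"AI response to audit:\n{ai_response}"
            ),
        },
    ]

# Usage: the resulting list would be passed to the chat-completions endpoint.
messages = build_bias_audit_messages(
    "Describe a typical engineer.",
    "He is a young man who enjoys math.",
)
```

Keeping prompt assembly in a pure function like this makes each pipeline step easy to test without a live API key.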
Challenges we ran into
Detecting nuanced bias and formatting structured insights were tricky. Integrating multiple LLM interactions into a seamless experience, while keeping performance smooth, was also challenging within our 48-hour window.
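One way to tame the structured-insights problem is to parse the model's free-text reply into records before display or PDF export. This sketch assumes a simple "category | severity | rationale" line format; the field names and format are illustrative, not the project's actual schema:

```python
# Hypothetical sketch: converting a free-text audit reply into structured
# records. The pipe-delimited format and field names are assumptions.
from dataclasses import dataclass

@dataclass
class BiasFinding:
    category: str
    severity: str
    rationale: str

def parse_findings(reply: str) -> list[BiasFinding]:
    """Parse 'category | severity | rationale' lines, skipping malformed ones."""
    findings = []
    for line in reply.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3 and all(parts):
            findings.append(BiasFinding(*parts))
    return findings

# Malformed or empty lines are dropped rather than crashing the session.
report = parse_findings(
    "gender | high | assumes the engineer is male\n"
    "I found no other issues."
)
```

Skipping malformed lines instead of raising keeps one bad model reply from breaking the whole interactive session.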
Accomplishments that we're proud of
We created a fully functioning system that not only critiques AI but empowers users to collaborate with it critically. The cross-examination and perspective modules are especially distinctive in making machine reasoning transparent.
What we learned
We deepened our understanding of LLM limitations, prompt engineering, and designing tools that prioritize human agency. We also learned how to build scalable, interactive AI apps under tight time constraints.
What's next for Unmask AI
We plan to integrate more LLMs, support multilingual bias detection, and explore use cases in journalism, education, and civic tech. Ultimately, we want to help communities audit AI that affects them.
Built With
- fastapi
- openai
- postgresql
- python
- streamlit
- weasyprint