Inspiration

With regulations like the EU AI Act on the rise, we saw a clear gap: developers and companies are racing to deploy AI but lack tools to ensure those models are ethical, fair, and legally compliant. We were inspired to create FairSight, an AI-powered Copilot that helps audit and fix models before harm happens — making responsible AI development accessible to everyone.

What it does

FairSight scans machine learning models to detect:

  • Bias in predictions and datasets
  • Fairness violations (e.g., demographic parity)
  • Explainability issues
  • Regulatory risks mapped to global AI laws

It then generates an easy-to-understand report, along with AI-generated mitigation suggestions to help developers improve their models.
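FairSight's internal checks build on Fairlearn, but the core demographic-parity metric it reports can be illustrated with a minimal standalone sketch (function names here are ours for illustration, not FairSight's or Fairlearn's API):

```python
def selection_rates(preds, groups):
    """Positive-prediction rate for each sensitive group."""
    totals, positives = {}, {}
    for pred, group in zip(preds, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(preds, groups):
    """Largest gap in selection rate between any two groups (0 = perfect parity)."""
    rates = selection_rates(preds, groups).values()
    return max(rates) - min(rates)

# Example: group "a" is selected 3/4 of the time, group "b" only 1/4,
# so the demographic parity difference is 0.5 — a clear fairness flag.
preds = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)
```

A gap near zero means the model selects all groups at similar rates; FairSight surfaces large gaps like this one in its audit report.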

How we built it

  • Frontend: React dashboard for uploading models and reviewing audits
  • Backend: Python (FastAPI) with integrated tools like Fairlearn, SHAP, Aequitas
  • LLM Integration: GPT-4 generates plain-language reports and improvement tips
  • CLI + GitHub Actions: For seamless integration into development workflows
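FairSight's actual CLI isn't reproduced here, but the idea of gating a GitHub Actions run on an audit can be sketched with a minimal stand-in: a script that reads an audit report and exits non-zero when the fairness gap exceeds a threshold, which is enough to fail a CI job (the report format and flag names below are invented for this sketch):

```python
import argparse
import json
import sys

def check_report(report, max_gap):
    """Return 0 if the audit report passes the fairness threshold, else 1."""
    gap = report["demographic_parity_difference"]
    passed = gap <= max_gap
    print(f"{'PASS' if passed else 'FAIL'}: "
          f"demographic parity difference = {gap:.3f} (limit {max_gap})")
    return 0 if passed else 1

def main(argv=None):
    parser = argparse.ArgumentParser(
        description="Gate a CI run on a FairSight-style audit report.")
    parser.add_argument("report", help="path to a JSON audit report")
    parser.add_argument("--max-gap", type=float, default=0.1,
                        help="largest acceptable demographic parity difference")
    args = parser.parse_args(argv)
    with open(args.report) as f:
        return check_report(json.load(f), args.max_gap)

if __name__ == "__main__":
    sys.exit(main())
```

Because the script's exit code is non-zero on failure, a plain `run:` step in a workflow file is all that's needed to block a merge on a failed audit.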

Challenges we ran into

  • Translating legal language into concrete technical checks that models could actually be tested against
  • Ensuring audit tools worked across different model types and data formats
  • Balancing accuracy with interpretability — developers need both insight and clarity

Accomplishments that we're proud of

  • Built a working prototype that runs bias + fairness checks in under a minute
  • Successfully generated readable audit reports using GPT
  • Integrated compliance feedback into a live GitHub workflow
  • Created a tool that can genuinely help teams build better, safer AI

What we learned

  • Real-world AI compliance is messy, nuanced, and necessary
  • Tools like SHAP and Fairlearn are powerful but must be wrapped in usable UX
  • AI ethics is not just a philosophy problem — it’s an engineering challenge

What's next for FairSight

  • Add support for image and LLM audits
  • Build regulation profiles (e.g., “GDPR Mode”, “EU AI Act Mode”)
  • Partner with dev tool platforms (like Hugging Face or GitHub)
  • Launch a beta program for startups needing AI risk assessments before deployment
