Inspiration

PrediCore AI was inspired by the pressing need to address bias and inequity in the criminal justice system. Tools like the COMPAS algorithm, while widely used, often perpetuate systemic biases and lead to unfair outcomes. Our team wanted to create a platform that not only predicts recidivism but also prioritizes fairness and transparency in its design, offering ethical and accurate insights to improve decision-making.

What it does

PrediCore AI is a platform that predicts three critical aspects of recidivism:

- the likelihood of reoffense,
- the severity of the offense, and
- the time until recidivism occurs.

What makes PrediCore AI unique is its focus on fairness metrics, which help ensure predictions are unbiased across demographic groups. By integrating these ethical safeguards, the platform promotes equity while delivering actionable insights.

How we built it

- Machine Learning Models: PrediCore AI employs XGBoost and Random Forest models for baseline predictions, plus an adversarial neural network that actively reduces bias in its outcomes.
- Backend Architecture: Built with Flask, the backend processes user inputs, manages predictions, and keeps data flowing smoothly between components.
- Frontend Interface: Designed with accessibility in mind, the platform presents results, data visualizations, and downloadable reports through a clean interface.
- Fairness Evaluation Metrics: We implemented custom routines to assess metrics such as equalized odds and residual error reduction, ensuring fairness in all predictions.

Illustrative sketches of the baseline models, the adversarial debiasing loop, the fairness check, and the Flask endpoint appear after the Accomplishments section below.

Challenges we ran into

- Bias Mitigation: Training adversarial models to balance predictive accuracy with fairness was technically challenging and required extensive fine-tuning.
- Data Limitations: We had to ensure datasets were representative and preprocessed carefully so as not to introduce further bias.
- Integrating Fairness Metrics: Making the fairness evaluations both sophisticated and user-friendly required balancing technical depth with practical usability.

Accomplishments that we're proud of

- Implementing a fairness-focused AI system that integrates adversarial debiasing while maintaining high accuracy.
- Building an intuitive interface that makes the platform accessible to criminal justice stakeholders, regardless of technical expertise.
- Creating a system that not only predicts recidivism but also addresses ethical concerns head-on, fostering trust in AI decision-making.
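The baseline predictors can be sketched roughly as follows. This is a minimal illustration rather than our actual training code: the feature matrix, target arrays, and hyperparameters are synthetic placeholders, assuming xgboost and scikit-learn are installed.

```python
# Minimal sketch of the three baseline predictors. Features and targets here are
# synthetic stand-ins; real training would use a preprocessed recidivism dataset.
import numpy as np
import xgboost as xgb
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))              # placeholder feature matrix
y_reoffend = rng.integers(0, 2, size=1000)   # 1 = reoffended, 0 = did not
y_severity = rng.integers(0, 3, size=1000)   # severity class of the new offense
y_months = rng.exponential(24.0, size=1000)  # months until recidivism

X_tr, X_te, yr_tr, yr_te, ys_tr, ys_te, ym_tr, ym_te = train_test_split(
    X, y_reoffend, y_severity, y_months, test_size=0.2, random_state=42
)

# Likelihood of reoffense: binary gradient-boosted classifier
reoffense_clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
reoffense_clf.fit(X_tr, yr_tr)

# Severity of the predicted offense: multi-class gradient boosting
severity_clf = xgb.XGBClassifier(n_estimators=200, max_depth=4)
severity_clf.fit(X_tr, ys_tr)

# Time until recidivism: random-forest regressor
time_reg = RandomForestRegressor(n_estimators=300, random_state=42)
time_reg.fit(X_tr, ym_tr)

print(reoffense_clf.predict_proba(X_te)[:3, 1])  # P(reoffense) for three held-out records
```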
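The adversarial debiasing component follows the common predictor-versus-adversary pattern: the adversary tries to recover a protected attribute from the predictor's output, and the predictor is penalized whenever that recovery succeeds. The sketch below shows that idea in PyTorch with hypothetical layer sizes and a binary protected-attribute tensor `a`; it is not the exact network we ship.

```python
import torch
import torch.nn as nn

predictor = nn.Sequential(nn.Linear(12, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 0.5  # trade-off between accuracy and fairness (tuning knob)

def train_step(x, y, a):
    """One update step; x is (N, 12), y and a are (N, 1) float tensors."""
    # 1) Adversary learns to predict the protected attribute from the prediction.
    opt_a.zero_grad()
    adv_loss = bce(adversary(predictor(x).detach()), a)
    adv_loss.backward()
    opt_a.step()

    # 2) Predictor minimizes task loss while *maximizing* the adversary's loss,
    #    pushing its outputs toward independence from the protected attribute.
    opt_p.zero_grad()
    logits = predictor(x)
    loss = bce(logits, y) - lam * bce(adversary(logits), a)
    loss.backward()
    opt_p.step()
    return loss.item()
```

The coefficient `lam` is the fine-tuning knob mentioned under Challenges: raising it trades predictive accuracy for stronger debiasing.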
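For the fairness evaluation, one of the metrics we report is equalized odds: true-positive and false-positive rates should match across demographic groups. A bare-bones version of that check could look like the following; the group encoding and toy data are illustrative only.

```python
import numpy as np

def group_rates(y_true, y_pred, mask):
    """True-positive and false-positive rates within one group."""
    y_t, y_p = y_true[mask], y_pred[mask]
    tpr = (y_p[y_t == 1] == 1).mean() if (y_t == 1).any() else np.nan
    fpr = (y_p[y_t == 0] == 1).mean() if (y_t == 0).any() else np.nan
    return tpr, fpr

def equalized_odds_gap(y_true, y_pred, group):
    """Largest TPR/FPR disparity between two groups; 0.0 means parity."""
    tpr_a, fpr_a = group_rates(y_true, y_pred, group == 0)
    tpr_b, fpr_b = group_rates(y_true, y_pred, group == 1)
    return max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))

# Toy example: labels, model decisions, and a binary group indicator
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(equalized_odds_gap(y_true, y_pred, group))
```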
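On the backend, a Flask service exposes the predictions to the frontend. Below is a minimal sketch of what such an endpoint could look like, reusing the three models from the baseline sketch above; the route and JSON field names are hypothetical, not our actual API.

```python
from flask import Flask, jsonify, request
import numpy as np

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [..12 numeric values..]}
    features = np.asarray(request.get_json()["features"], dtype=float).reshape(1, -1)
    return jsonify({
        "reoffense_probability": float(reoffense_clf.predict_proba(features)[0, 1]),
        "predicted_severity": int(severity_clf.predict(features)[0]),
        "months_until_recidivism": float(time_reg.predict(features)[0]),
    })

if __name__ == "__main__":
    app.run(debug=True)
```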

What we learned

- The importance of fairness metrics in sensitive AI applications, and the technical complexity of integrating them effectively.
- The need to consider ethical implications at every stage of development, so the technology benefits all communities equally.
- How to build a platform that bridges the gap between technical robustness and practical usability.

What's next for PrediCore AI

Looking forward, we plan to:

- Expand PrediCore AI to include predictions for rehabilitation success and other criminal justice outcomes.
- Refine the adversarial debiasing model to further improve its accuracy and scalability.
- Partner with stakeholders to test the platform in real-world environments, collecting feedback for continuous improvement.
- Explore applications of fairness-focused AI in other areas, such as education and healthcare.

Built With

Flask, XGBoost, Random Forest, and a custom adversarial neural network.
