Inspiration
The inspiration for this project stems from the pressing need to address racial bias in the criminal justice system. Data-driven decisions often perpetuate systemic inequities when machine learning models are trained on biased datasets. By creating a predictive tool that integrates fairness measures, we aim to ensure that decisions involving recidivism and violent crime risks are more equitable, transparent, and just.

What it does
This project predicts key aspects of recidivism: the likelihood of reoffense, the severity of the offense, and the time until recidivism. It provides actionable insights using machine learning models such as XGBoost, Random Forest, and adversarial neural networks. The tool also focuses on fairness by debiasing predictions across racial groups, promoting ethical decision-making in criminal justice.
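As a rough illustration of the adversarial debiasing idea, the sketch below jointly trains a logistic predictor and a small adversary that tries to recover the protected attribute from the predictor's output; the predictor is penalized for whatever signal the adversary finds. This is a minimal NumPy sketch of the general technique, not our production model; the function name, toy data, and hyperparameters are all illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_debiased_predictor(X, y, z, lam=2.0, lr=0.1, epochs=500, seed=0):
    """Train a logistic predictor while an adversary tries to recover the
    protected attribute z from its output; the predictor's gradient is
    penalized (weight lam) for whatever signal the adversary exploits."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.01, size=d)   # predictor weights
    u, b = 0.0, 0.0                      # adversary weights (logistic on p)
    for _ in range(epochs):
        p = sigmoid(X @ w)               # predicted reoffense probability
        q = sigmoid(u * p + b)           # adversary's guess of the group
        dq = (q - z) / n                 # dL_adv / d(adversary logit)
        # predictor step: task gradient minus lam * adversary gradient
        ds = (p - y) / n - lam * dq * u * p * (1.0 - p)
        w -= lr * (X.T @ ds)
        # adversary step: minimize its own loss
        u -= lr * np.sum(dq * p)
        b -= lr * np.sum(dq)
    return w

# Toy data: x1 drives reoffense, x2 is a proxy for the protected group z.
rng = np.random.default_rng(1)
n = 2000
z = rng.integers(0, 2, n).astype(float)
x1 = rng.normal(size=n)
x2 = z + 0.3 * rng.normal(size=n)
X = np.column_stack([x1, x2, np.ones(n)])
y = (x1 + 0.5 * rng.normal(size=n) > 0).astype(float)

w = train_debiased_predictor(X, y, z)
p = sigmoid(X @ w)                       # debiased risk scores in [0, 1]
```

Because the adversary is rewarded only for recovering group membership, the penalty pushes the predictor away from the proxy feature x2 while leaving the legitimate signal in x1 mostly intact.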

How we built it
We built the project as a machine learning pipeline served through a Flask web application. XGBoost and Random Forest produce the base predictions, and a meta-classifier paired with an adversarial neural network combines their outputs. The frontend is styled for accessibility, with clear data visualizations, prediction cards, and PDF reporting. The backend computes fairness checks, such as equalized odds and residual error analysis, to surface biased outcomes.
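To make the fairness evaluation concrete, here is a minimal sketch of an equalized-odds check, one of the metrics mentioned above. The function name and the toy arrays are illustrative stand-ins, not the project's actual implementation.

```python
import numpy as np

def equalized_odds_gaps(y_true, y_pred, group):
    """Absolute TPR and FPR gaps between two groups (all 0/1 arrays).
    Equalized odds asks that true- and false-positive rates match across
    groups, so both gaps are 0 for a perfectly fair classifier."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in (0, 1):
        m = group == g
        tpr = y_pred[m & (y_true == 1)].mean()  # true positive rate
        fpr = y_pred[m & (y_true == 0)].mean()  # false positive rate
        rates[g] = (tpr, fpr)
    return (abs(rates[0][0] - rates[1][0]),     # TPR gap
            abs(rates[0][1] - rates[1][1]))     # FPR gap

# Group 1 is flagged far more often at the same ground truth:
gaps = equalized_odds_gaps(
    y_true=[1, 1, 0, 0, 1, 1, 0, 0],
    y_pred=[1, 0, 0, 0, 1, 1, 1, 0],
    group=[0, 0, 0, 0, 1, 1, 1, 1],
)
# gaps → (0.5, 0.5): this classifier is far from equalized odds
```

Both gaps shrink toward zero as the debiasing step succeeds, which makes them a convenient scalar summary to report alongside accuracy.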

Challenges we ran into
One of the biggest challenges was addressing the inherent biases in the dataset and ensuring that the models produced equitable results. Training adversarial neural networks to mitigate racial bias while maintaining high predictive accuracy required fine-tuning. We also encountered technical hurdles like ensuring consistent preprocessing between training and prediction datasets and integrating complex fairness metrics into the pipeline.
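One way to guarantee identical preprocessing at training and prediction time is to freeze the fitted transformers inside a single pipeline object, as sketched below with scikit-learn. The column names and values are hypothetical stand-ins for the real dataset's schema.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical columns; the real dataset's schema may differ.
numeric = ["age", "priors_count"]
categorical = ["charge_degree"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

model = Pipeline([
    ("prep", preprocess),
    ("clf", RandomForestClassifier(n_estimators=50, random_state=0)),
])

train = pd.DataFrame({
    "age": [19, 35, 42, 28, 55, 23],
    "priors_count": [0, 3, 1, 5, 0, 2],
    "charge_degree": ["F", "M", "F", "F", "M", "M"],
    "reoffend": [1, 0, 0, 1, 0, 1],
})
model.fit(train.drop(columns="reoffend"), train["reoffend"])

# New records reuse the fitted scaler and encoder; an unseen category
# ("X") is ignored instead of crashing at prediction time.
new = pd.DataFrame({"age": [30], "priors_count": [4], "charge_degree": ["X"]})
probs = model.predict_proba(new)
```

Because the scaler and encoder are fitted once and stored inside the pipeline, the exact same transformation is applied to every future record, removing the train/predict skew we struggled with.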

Accomplishments that we're proud of
We are proud of developing a fully functional predictive pipeline that not only provides accurate predictions but also prioritizes fairness and transparency. The integration of adversarial debiasing models into the recidivism prediction process was a significant achievement. Additionally, building an intuitive, user-friendly web interface with features like PDF reports and dynamic data visualizations adds to the project's accessibility and impact.

What we learned
Through this project, we learned how to implement fairness-focused machine learning techniques effectively. We gained insights into the challenges of mitigating bias in real-world datasets and the importance of designing systems that are both technically robust and ethically sound. This experience also deepened our understanding of how predictive models can influence societal decisions.

What's next for Fighting against racial bias in the criminal justice system
Moving forward, we aim to expand the project by refining the adversarial debiasing model for greater scalability and accuracy. We plan to include additional fairness metrics and allow real-time feedback to enhance transparency further. The next step is to partner with stakeholders in the criminal justice system to pilot the tool in real-world settings. Ultimately, our goal is to create a transformative solution that advocates for equity and fairness in criminal justice.

Built With
Flask, XGBoost, Random Forest, adversarial neural networks
