We are building a post-processing algorithm for predictive models that reduces their implicit biases. To do this, we are implementing an algorithm known as multi-calibration, which, to our knowledge, has not been implemented before. This algorithm extends multi-accuracy, which adds the condition that a model must be equally accurate across all subgroups (i.e., demographics). Multi-calibration goes a step further and requires each subgroup's predictions to match its true variance, so that we have the same degree of confidence in the predictor no matter how under-represented an input was during training.
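These two conditions can be written informally as follows. The notation is ours: f is the predictor, y the true label, S a subgroup, and α a tolerance; the second line reflects the variance framing above rather than the standard calibration-by-prediction-value definition:

```latex
\begin{aligned}
\text{Multi-accuracy:} \quad
  & \bigl|\,\mathbb{E}[\,y - f(x) \mid x \in S\,]\,\bigr| \le \alpha
    \quad \text{for every subgroup } S, \\
\text{Our variance condition:} \quad
  & \bigl|\,\operatorname{Var}(y \mid x \in S) - \operatorname{Var}(f(x) \mid x \in S)\,\bigr| \le \alpha .
\end{aligned}
```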
The algorithm itself consists primarily of two parts. The first is the auditor, which identifies subgroups of the training data on which the model performs poorly. The auditor then feeds those points into a projected gradient descent step, which shifts their predictions toward the subgroup's true mean and true variance. The second part is prediction: the correction applied to each subgroup is recorded, so that when we predict on new data, we apply the relevant correction to obtain an adjusted prediction.
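The audit-correct-record loop above can be sketched roughly as follows. This is a minimal illustration under simplifying assumptions, not our full implementation: it assumes hard, non-overlapping subgroups, corrects only the mean residual (a single mean-shift per round in place of full projected gradient descent over mean and variance), and all function names are illustrative.

```python
import numpy as np

def audit(preds, labels, groups, tol=0.05):
    """Return the subgroup whose mean residual exceeds tol (worst first), or None."""
    worst, worst_gap = None, tol
    for g in np.unique(groups):
        mask = groups == g
        gap = abs(np.mean(labels[mask] - preds[mask]))
        if gap > worst_gap:
            worst, worst_gap = g, gap
    return worst

def multicalibrate(preds, labels, groups, tol=0.05, max_iter=100):
    """Repeatedly shift each flagged subgroup's predictions toward its true mean,
    recording the cumulative correction per subgroup for prediction time."""
    preds = preds.astype(float).copy()
    corrections = {g: 0.0 for g in np.unique(groups)}
    for _ in range(max_iter):
        g = audit(preds, labels, groups, tol)
        if g is None:          # auditor finds no failing subgroup: done
            break
        mask = groups == g
        delta = np.mean(labels[mask] - preds[mask])
        preds[mask] += delta   # move this subgroup toward its true mean
        corrections[g] += delta
    return preds, corrections

def predict_adjusted(raw_preds, groups, corrections):
    """Apply the recorded subgroup corrections to predictions on new data."""
    out = raw_preds.astype(float).copy()
    for g, delta in corrections.items():
        out[groups == g] += delta
    return out
```

With non-overlapping subgroups each correction is independent, so the loop converges after one pass over the failing subgroups; overlapping subgroups (as in the real algorithm) require iterating until the auditor is satisfied.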
In addition to implementation, we have been taking careful steps to ensure that we approach algorithmic fairness as ethically as possible. When we first started, a key motivating use case for this algorithm was in criminal justice, namely applying it to the COMPAS algorithm to ensure bail is set fairly. However, after speaking with Professor Coglianese at the Penn Law School, we realized that relying on a black-box algorithm in criminal justice at all is counterproductive, and we elected instead to focus on healthcare and housing as primary applications. We have also investigated business applications for this algorithm, the most promising of which would be to sell our post-processing algorithm to IBM for inclusion in their open-source AI Fairness 360 package. This has the added benefit of making the tool widely available to all data scientists, advancing our ultimate goal of making models fairer.
To watch our full demo, please click on this link: https://www.youtube.com/watch?v=XQEd5H83PaE