Background

AI systems and machine learning algorithms power financial technology, but they can also behave unfairly for a number of reasons. Sometimes the cause is societal bias, which is often unconscious and hard to define and detect; other times it stems from intrinsic characteristics of the data and of the system itself. In other words, many machine learning algorithms carry algorithmic biases, where the system can extend or withhold opportunities, resources, or information, resulting in unfairness and inequality. This problem is amplified in the financial domain, where datasets contain many sensitive attributes that are involved in the development and deployment of machine learning systems.

What it does

With FairnessAI, we use a series of steps to detect biases across many sensitive features in a model while still accounting for model performance. We then define a variety of fairness metrics that help us better understand how a model's behavior varies across those sensitive features.
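
The fairness metrics listed under "Built With" (demographic parity and equalized odds) map directly onto Fairlearn's metrics API. Below is a minimal sketch, not the actual FairnessAI code, of how per-group metrics and aggregate disparities can be computed; the toy data and the "gender" column are hypothetical.

```python
# Hedged sketch: per-group metrics and fairness disparities with Fairlearn.
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import (
    MetricFrame,
    selection_rate,
    demographic_parity_difference,
    equalized_odds_difference,
)

# Hypothetical labels, predictions, and a sensitive feature.
y_true = pd.Series([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = pd.Series([1, 0, 0, 1, 0, 1, 1, 0])
gender = pd.Series(["F", "F", "F", "M", "M", "M", "F", "M"])

# Performance and selection rate broken out by group.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)
print(mf.by_group)      # metrics per sensitive-feature group
print(mf.difference())  # largest gap between groups

# Aggregate fairness metrics across the sensitive feature.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=gender))
print(equalized_odds_difference(y_true, y_pred, sensitive_features=gender))
```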

How I built it

Building on the open source project Fairlearn, we built a Streamlit app that creates a machine learning pipeline: uploading the data, selecting the model and sensitive features, training the model and running predictions, and evaluating the performance and fairness of the model across the selected sensitive features.
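
The sketch below shows one way such a pipeline can be wired together in Streamlit. It is an illustration under assumptions, not the FairnessAI source: the widget labels, the two example models, and the column handling are all hypothetical.

```python
# Simplified Streamlit pipeline: upload data, pick a model and sensitive
# features, train, predict, and report performance and fairness by group.
import streamlit as st
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

st.title("FairnessAI")

uploaded = st.file_uploader("Upload a CSV dataset", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)

    target = st.selectbox("Target column", df.columns)
    sensitive = st.multiselect(
        "Sensitive features", [c for c in df.columns if c != target]
    )
    model_name = st.selectbox("Model", ["Logistic Regression", "Random Forest"])

    if st.button("Train and evaluate") and sensitive:
        X = pd.get_dummies(df.drop(columns=[target]))
        y = df[target]
        X_train, X_test, y_train, y_test, sf_train, sf_test = train_test_split(
            X, y, df[sensitive], test_size=0.3, random_state=0
        )

        model = (
            LogisticRegression(max_iter=1000)
            if model_name == "Logistic Regression"
            else RandomForestClassifier()
        )
        model.fit(X_train, y_train)
        y_pred = model.predict(X_test)

        # Performance and fairness broken out by the selected sensitive features.
        mf = MetricFrame(
            metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
            y_true=y_test,
            y_pred=y_pred,
            sensitive_features=sf_test,
        )
        st.write("Overall metrics", mf.overall)
        st.write("Metrics by group", mf.by_group)
```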

Challenges I ran into

There is a lot of material on fairness evaluation and mitigation, and the methods are algorithmically intensive. It was a challenge to understand all the definitions and algorithms and incorporate them into the FairnessAI app. Given the time constraint, I have not yet built all the functionality I hoped to build.

What's next for FairnessAI

I will work on extending the functionality of the app, especially adding mitigation algorithms and the ability to compare performance and fairness across models.
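
One possible direction, sketched below under assumptions rather than as the planned implementation, is to use Fairlearn's ExponentiatedGradient reduction to train a mitigated model under a demographic-parity constraint and compare it against the unmitigated model on both accuracy and fairness. The helper function and variable names are illustrative.

```python
# Hedged sketch: compare an unmitigated model with a Fairlearn-mitigated one.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from fairlearn.metrics import demographic_parity_difference

def compare_with_mitigation(X_train, y_train, X_test, y_test, sf_train, sf_test):
    """Train unmitigated and mitigated models; report accuracy vs. fairness."""
    base = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    mitigated = ExponentiatedGradient(
        LogisticRegression(max_iter=1000), constraints=DemographicParity()
    )
    mitigated.fit(X_train, y_train, sensitive_features=sf_train)

    results = {}
    for name, model in [("unmitigated", base), ("mitigated", mitigated)]:
        y_pred = model.predict(X_test)
        results[name] = {
            "accuracy": accuracy_score(y_test, y_pred),
            "dp_difference": demographic_parity_difference(
                y_test, y_pred, sensitive_features=sf_test
            ),
        }
    return results
```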

Built With

  • demographic-parity
  • equalized-odds
  • fairlearn
  • python
  • streamlit