Inspiration

Loyalty programs exist to reward repeat customers. A bias is an illogical preference or prejudice, and loyalty management solutions are not immune to it. Having recognized the bias in reward systems, we decided to build FINLoyal.

What it does

FINLoyal is an ML-powered loyalty management solution that offers loyalty points and rewards to customers while detecting and mitigating systemic bias, so that all customers are treated equally. FINLoyal addresses both data bias and model bias. To detect bias, regression algorithms, FairML, LIME, and IBM AI Fairness 360 can be used; each of these computes feature importance scores. The high and low scores of each feature, as computed by the different models, are then compared. A feature can be considered relatively unbiased if the range of its importance scores across the different models stays within acceptable limits. The feature importance scores can then be normalized against the least biased feature. This normalization is an iterative process that gradually mitigates the bias, eventually yielding an optimized model that can be deployed and improved through feedback.
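As a concrete illustration of the cross-model comparison, the sketch below computes feature importance with two stand-in models from scikit-learn (logistic-regression coefficients and random-forest impurity importances) rather than the full FairML/LIME/AIF360 toolchain, and flags features whose scores disagree beyond an acceptable range. The feature names, synthetic data, and the 0.1 threshold are illustrative assumptions, not part of the original pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

FEATURES = ["tenure", "spend", "age", "region"]  # hypothetical loyalty features

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# Model 1: absolute logistic-regression coefficients as importance scores,
# normalized to sum to one so they are comparable across models.
logit = LogisticRegression(max_iter=1000).fit(X, y)
imp_logit = np.abs(logit.coef_[0])
imp_logit /= imp_logit.sum()

# Model 2: random-forest impurity-based importances (already sum to one).
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
imp_forest = forest.feature_importances_

# A feature is flagged for review when its importance scores disagree
# across models by more than an acceptable range (0.1 assumed here).
ACCEPTABLE_RANGE = 0.1
for name, a, b in zip(FEATURES, imp_logit, imp_forest):
    spread = abs(a - b)
    status = "OK" if spread <= ACCEPTABLE_RANGE else "REVIEW (possible bias)"
    print(f"{name:8s} logit={a:.3f} forest={b:.3f} spread={spread:.3f} -> {status}")
```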

How we built it

To detect and mitigate bias in loyalty points, we followed these steps (a code sketch of the data-bias step appears after the list):

  1. Data Preparation
  2. Leverage what you already know (supervised learning)
  3. Detect and mitigate data biases
  4. Create a baseline model
  5. Detect and remove model bias
  6. Create an optimized model
  7. Deploy and incorporate a feedback loop
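
As a minimal sketch of step 3, the example below uses IBM AI Fairness 360, one of the toolkits named above. The DataFrame columns (`gender`, `spend`, `reward`), the group encodings, and the synthetic data are illustrative assumptions, not our actual schema.

```python
import numpy as np
import pandas as pd
from aif360.algorithms.preprocessing import Reweighing
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Synthetic stand-in for loyalty data: 'gender' is the protected attribute
# (1 = privileged group) and 'reward' marks whether loyalty points were granted.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gender": rng.integers(0, 2, 1000),
    "spend": rng.normal(100.0, 20.0, 1000),
    "reward": rng.integers(0, 2, 1000),
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["reward"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)
privileged = [{"gender": 1}]
unprivileged = [{"gender": 0}]

# Detect data bias: a disparate impact far from 1.0 (commonly outside
# 0.8-1.25) signals that rewards are skewed across the two groups.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact before mitigation:", metric.disparate_impact())

# Mitigate it: reweighing assigns instance weights so that both groups have
# the same expected rate of favorable outcomes before any model is trained.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_fair = rw.fit_transform(dataset)

metric_fair = BinaryLabelDatasetMetric(
    dataset_fair, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact after reweighing:", metric_fair.disparate_impact())
```

Because reweighing adjusts instance weights before training, the same disparate-impact check can be repeated on the trained model's predictions in step 5.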

Challenges we ran into

  1. Non-availability of historical data for training ML models; hence we used open datasets from Kaggle
  2. Understanding the various models used to mitigate bias and incorporating them into our own model

Accomplishments that we're proud of

  1. Collaborating and networking within the team established as part of the FinIndia AI/ML special interest group (SIG), with members from Finastra Bangalore, Finastra Trivandrum, and the Finastra Data Team

  2. Integrating with the FFDC API

  3. Leveraging the expertise of team members with different levels of experience across different locations

What we learned

We learned that extensive research is underway worldwide to tackle algorithmic bias, and there is no one-size-fits-all approach. Tackling algorithmic bias requires sound knowledge of the various algorithms that can be applied during pre-processing and post-processing, and building the right model with the least bias is key. This is an iterative process that cannot be accomplished in a short span; continuous learning and improvement are essential.

What's next for FINLoyal

We plan to fine-tune the model, compare the feature importance computed by multiple models, and finalize the normalization technique that gives the best results; a sketch of two candidate normalization techniques appears below.
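As a starting point for that comparison, here is a small sketch of two candidate normalization techniques applied to the same hypothetical importance scores; the values and the index of the least biased feature are illustrative assumptions.

```python
import numpy as np

# Hypothetical raw feature-importance scores from one model.
scores = np.array([0.42, 0.31, 0.18, 0.09])

# Candidate 1: sum-to-one normalization (each feature's relative share).
sum_norm = scores / scores.sum()

# Candidate 2: scale relative to the least biased feature, i.e. the feature
# whose importance agreed most closely across models (index assumed here).
least_biased_idx = 3
ref_norm = scores / scores[least_biased_idx]

print("Sum-to-one:", np.round(sum_norm, 3))
print("Relative to least biased feature:", np.round(ref_norm, 3))
```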
