
How Explainable AI is Changing the Bank and Finance Industry

March 25, 2021

Guest Author: Dr. Jagreet Kaur, Chief AI Officer, Xenonstack

Boosting Banks' Performance Using XAI 

Machine Learning has automated business operations, making them more efficient, improving services, and enriching customer interactions. But AI systems have been observed to be biased, discriminating on the basis of gender, race, or ethnicity when providing services. Because most advanced ML algorithms function opaquely, noticing bias and tracing model decisions is difficult, and these systems lose the trust of customers and bankers alike. This issue is known as the black-box problem. 

Hunting Fraud: AI and ML help automate fraud detection. But cases have emerged in which the system misidentifies a customer and accidentally declines a credit card. This disappoints customers and erodes their trust, carries a reputational impact, and drives customers to stop using the service, because neither developers nor bankers can tell whether the system is working properly or why it declined a card. Such mishaps occur because the system lacks transparency. 

Explainable AI can solve these problems by providing transparency and answering questions such as: 

  • How does the system decide that a card should be declined? 
  • What is the reason behind the approval or decline of an individual customer's card? 

Banks and financial institutions are investing in Explainable AI to solve these problems. We build AI systems using Explainable AI to make models transparent. Explainable AI makes model decisions more trustworthy and also addresses the issue of bias. 

Before/After: Before adopting Explainable AI, users receive an output but do not know how it was produced. With Explainable AI, trust in the algorithm is built in and the system can be explained, so no one has to say, "I don't know what happened." 

Implementation: Visualization interprets and explains the model. Various libraries and packages explain the model's decision process, that is, how the software reaches its conclusion. An interpretable system has two dimensions: 

  • Transparency helps solve the black-box problem: it provides clarity on how the model works.
  • Explainability helps organizations rationalize and understand AI decisions: why did the model do that?

A Case Study to Understand Explainable AI in Banks 

Banks have started automating their loan systems with AI (Artificial Intelligence) that decides on and grants a loan within minutes, using customer data to predict creditworthiness. This can decrease overdue loans, reduce credit loss and risk, and cut fraud. 
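
As a running setup for the hedged sketches in this case study, assume a simple tree-based creditworthiness classifier trained on tabular loan data. The column names and file path below are illustrative placeholders, not details from the original system:

```python
# Hypothetical setup: a tree-based creditworthiness classifier on
# tabular loan data. Column names and the CSV path are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("loan_applications.csv")              # hypothetical dataset
X = df[["Credit_History", "LoanAmount", "Total_Income",
        "Coapplicant_Income", "Loan_Term"]]
y = df["Loan_Approved"]                                # 1 = approved, 0 = declined

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
```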

There is a cost associated with an incorrect model decision. Most models used in AI systems are black boxes by nature, which increases business risk: the lack of transparency makes model decisions hard to understand. 

End customers can ask questions about the model that developers cannot answer because the model is opaque, which fails to build customer trust. 

Explainable AI in Loan Approval System 

Explainable AI builds customer trust by making the model's methodology transparent and clear. It uses various frameworks and libraries to answer customers' questions, such as:

  • How is data contributing to making a decision? 
  • Which feature influences the result the most? 
  • How does changing the value of a particular feature affect the system output?
  • Why did the system decline the loan application of Mr. Jain? 
  • What income is required for a loan to be approved? 
  • How do models make decisions? 

To make the model interpretable, we divide our approach into three levels, from which the various questions are drawn: 

  • Global Explanation 
  • Local Explanation 
  • Feature interaction and distribution 

Some of these questions, and the methodologies used to answer them, are summarized in Table 1.1:

| Questions of Stakeholder | Methodology to be used | Implementation Process |
| --- | --- | --- |
| Is it possible to enhance model explainability without damaging model performance? | Model accuracy vs. model explainability | Python and visualization |
| How is data contributing to making a decision? | SHAP (SHapley Additive exPlanations) | SHAP library |
| How does model output vary by changing the income of the borrower? | PDP (Partial Dependence Plot) / ICE (Individual Conditional Expectation) | PDPbox |
| Why did the system decline the loan application of Mr. Jain? | LIME | LIME library |
| What income is required for a loan to be approved? | Anchors | Anchors from Alibi |
| How do models make decisions? | defragTrees (for random forests) | defragTrees package |

Table 1.1 
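
Of the tools in Table 1.1, Anchors is not walked through in the sections below, so here is a hedged sketch of how it could answer the income question. It follows Alibi's AnchorTabular API; the model and data come from the setup sketch above, and the row index is a hypothetical placeholder:

```python
# Hedged sketch: an anchor is an if-then rule that locks in the model's
# prediction with high precision, e.g. "IF Total_Income > 5000 AND
# Credit_History = 1 THEN approve". Uses Anchors from Alibi.
from alibi.explainers import AnchorTabular

explainer = AnchorTabular(model.predict, feature_names=list(X.columns))
explainer.fit(X_train.values)                          # discretizes numeric features

i = 0                                                  # hypothetical applicant row
explanation = explainer.explain(X_test.values[i])
print("Anchor rule:", " AND ".join(explanation.anchor))
print("Precision:", explanation.precision)
```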

Global Level Explanation 

Q1: How is data contributing to making a decision?

According to the model, 'Credit history', 'Loan amount', and 'Total income' are the three variables with the most impact on the application's approval. 

Showing how each feature contributes to a decision can help the customer trust the model: if the correct parameters influence the results, the model is working correctly. 

Figure 1.1 depicts the importance of the features in predicting the output; features are sorted from top to bottom in decreasing order of weight.

Figure 1.1 

The probability of a loan application being approved or rejected depends most heavily on the person's credit history. 
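
A chart like Figure 1.1 can be produced with the SHAP library listed in Table 1.1. A minimal sketch, reusing the model and data from the setup above; note that for a binary classifier, TreeExplainer may return one SHAP array per class:

```python
# Hedged sketch: global feature importance (Figure 1.1 style) with SHAP.
import shap

explainer = shap.TreeExplainer(model)                  # model from the setup sketch
shap_values = explainer.shap_values(X_test)
# Older SHAP returns a list [class 0, class 1] for binary classifiers;
# newer versions may return a 3-D array, where vals = shap_values[:, :, 1].
vals = shap_values[1] if isinstance(shap_values, list) else shap_values
shap.summary_plot(vals, X_test, plot_type="bar")       # mean |SHAP| per feature
```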

Q2: How is data contributing to making a decision? (a closer look) 

Figure 1.2 is the next iteration of the previous graph and gives more insight into the model: it shows the same ranking, enriched with information about each feature's values.  

  • Feature importance: variables are ranked in descending order of importance.
  • Impact: the horizontal location shows whether the effect of that value is associated with a higher or lower prediction.
  • Value: color shows whether the variable is high or low for that observation; red denotes a high value and blue a low one.
  • Correlation: the first row of Figure 1.2 shows that approval depends heavily on credit history; a good credit history gives an application a better chance of approval. 

Figure 1.2 
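
A plot in the style of Figure 1.2 is SHAP's summary ("beeswarm") plot; a hedged sketch under the same assumptions as before:

```python
# Hedged sketch: SHAP summary plot (Figure 1.2 style). Each dot is one
# applicant: the x-position is the SHAP value (impact on the prediction)
# and the color encodes the feature's value (red = high, blue = low).
import shap

explainer = shap.TreeExplainer(model)                  # model, X_test from setup
shap_values = explainer.shap_values(X_test)
vals = shap_values[1] if isinstance(shap_values, list) else shap_values
shap.summary_plot(vals, X_test)
```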

Feature interaction and distribution 

Q3: How does model output vary by changing the borrower’s income? 

Having answered the first question, the customer may ask how a change in income changes the system's output when the other parameters are held constant. 

To answer this, let's look at the Partial Dependence Plot (PDP). A PDP shows the relationship between the model output and a feature's value while the other features are marginalized out. This graph shows how changing income changes the system's decision. 

Figure 1.3 
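
Table 1.1 lists PDPbox for this step; an equivalent hedged sketch using scikit-learn's built-in PartialDependenceDisplay, with the assumed Total_Income column from the setup:

```python
# Hedged sketch: PDP/ICE for income (Figure 1.3 style), drawn with
# scikit-learn rather than PDPbox. model and X_test come from the setup.
from sklearn.inspection import PartialDependenceDisplay

PartialDependenceDisplay.from_estimator(
    model, X_test, features=["Total_Income"],
    kind="both",  # "average" draws only the PDP; "both" adds per-applicant ICE lines
)
```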

Now that we have an idea of each feature's effect on the model's decisions, we can move on to local explanations to understand the prediction for an individual customer. 

Local Explanation 

Q4: Why did the system decline the loan application of Mr. Jain? 

Mr. Jain applied for a loan, but the system rejected his application, and now he wants to know why. Using SHAP, the system justifies its result: each SHAP value represents the impact of a piece of feature evidence on the model's output. 

Mr. Jain has a poor credit history, he has not repaid previous debt, he has no income of his own, and the co-applicant's income is also low. All of these factors push the system's decision toward declining the application.

Figure 1.4 Mr. Jain’s justification 
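
A single-application justification like Figure 1.4 can be drawn with SHAP's force plot. A hedged sketch; the row index standing in for Mr. Jain's application is a hypothetical placeholder:

```python
# Hedged sketch: local explanation for one applicant (Figure 1.4 style).
import shap

explainer = shap.TreeExplainer(model)                  # model, X_test from setup
shap_values = explainer.shap_values(X_test)
# Per-class outputs for binary classifiers: index [1] = "approved" class.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values
base = (explainer.expected_value[1] if isinstance(shap_values, list)
        else explainer.expected_value)

i = 7                                                  # hypothetical row for Mr. Jain
shap.initjs()                                          # enable the JS visualization
shap.force_plot(base, vals[i], X_test.iloc[i])
```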

Q5: Mr. John and Mr. Herry have almost the same parameter values, such as total income and credit history; why, then, did the system decline Mr. Herry's application and approve Mr. John's? 

Both Mr. John and Mr. Herry have similar values for the attributes, yet the AI system approves Mr. John's loan application but not Mr. Herry's. 

To answer this question, Explainable AI uses SHAP's waterfall chart. Comparing the justifications for Mr. Herry and Mr. John, we notice that both have a good credit history and similar values for the other parameters, except income: Mr. Herry has a lower salary than Mr. John, so Mr. Herry's total income is also lower. That is why the system decides that Mr. Herry may not repay the loan and rejects his application.

Figure 1.5 Mr. John’s justification 

Figure 1.6 Mr. Herry’s justification
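
A hedged sketch of the waterfall comparison behind Figures 1.5 and 1.6, using SHAP's newer Explanation API; both row indices are hypothetical placeholders:

```python
# Hedged sketch: waterfall charts for two similar applicants
# (Figures 1.5 and 1.6 style) via SHAP's Explanation API.
import shap

explainer = shap.Explainer(model, X_train)             # model, data from setup
explanation = explainer(X_test)

john, herry = 3, 4                                     # hypothetical row indices
# For a binary classifier the Explanation may carry a class axis; if so,
# select one class, e.g. explanation[john, :, 1].
shap.plots.waterfall(explanation[john])                # approved application
shap.plots.waterfall(explanation[herry])               # declined application
```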

How Does Explainable AI Improve Banks' AI Systems? 

Explainable AI improves the AI systems that banks use: 

  • Builds trust by providing greater visibility to spot flaws and unknown vulnerabilities, thereby assuring correct system operation. 
  • Improves performance through an understanding of how the model works and makes decisions. 
  • Improves strategy and decision making, which in turn improves revenue, customer behavior, and employee retention. 
  • Enhances control over the system. 
  • Identifies mistakes so they can be fixed quickly. 

Business Benefit of Explainable AI 

The business benefits of Explainable AI are shown in Figure 1.1: 

Figure 1.1 

Optimize

  • Model Performance: improves and optimizes AI systems by understanding the how and why of their decisions; it verifies system outputs and enhances them by detecting bias and flaws. 
  • Decision Making: predicting customer churn is a widespread ML use case, and a model can warn that the churn rate will increase. Suppose that, to reduce churn, the financial institution lowers its fees, while the real driver of churn is the customer-service experience. The fee reduction cannot solve the problem, because the root cause is customer interaction, not fees. Explainable AI is therefore needed to understand why the churn rate is increasing and find the correct reason. 

Retain 

  • Control: it helps retain control over AI. Visibility into an AI model's data and features helps identify issues (such as drift) and solve them. 
  • Safety: it tracks unethical design and works with the cyber-security team to safeguard against such faults. 

Maintain 

  • Ethics: with clear governance and security safeguards, it brings ethical consideration into AI systems. 
  • Trust: Explainable AI ensures that the algorithms make correct decisions; it builds trust by strengthening the stability and predictability of interpretable models. 

Comply 

  • Accountability: a clear understanding of an AI system's accountability requires understanding how the model operates and evolves, which, for black-box models, only Explainable AI can provide. 
  • Regulation: it supports regulatory focus on AI by helping establish standards for governance, accuracy, transparency, and explainability. 

Conclusion

The contribution of Explainable AI to the loan-approval AI system makes it easy for end users to understand the AI system's complex workings, providing a human-centered interface. Explainability is key to producing a transparent, proficient, and accurate AI system that both bankers and borrowers can understand and use.
