Sohini Pattanayak for TrustyCore


Interpreting Loan Predictions with TrustyAI: Part 2

A Developer’s Guide

In the previous blog, we got an overview of TrustyAI's use case and the goal of today's tutorial. If you haven't read it yet, start with the previous blog first - Part 1: An Overview

Let’s get started now! 🚀

Once we have our environment ready and our demo.py file open, we'll first import all the necessary libraries for this tutorial:

import numpy as np
from trustyai.model import Model
from trustyai.explainers import LimeExplainer

In the first three lines, we're importing the necessary libraries:

  • numpy: A library in Python used for numerical computations. Here, it will help us create and manipulate arrays for our linear model.

  • Model: This class from TrustyAI wraps our linear model so it can be used with the various explainers. The TrustyAI library supports any type of model; it only needs the predict function to invoke.

  • LimeExplainer: The main attraction! LIME (Local Interpretable Model-Agnostic Explanations) is a technique to explain predictions of machine learning models.

You can learn more about LIME here: How does the LIME Method for Explainable AI work?

Now, we'll define a set of weights for our linear model using the numpy function np.random.uniform(). The five weights are drawn uniformly between -5 and 5, one per feature, and they determine the importance of each feature in the creditworthiness decision.

weights = np.random.uniform(low=-5, high=5, size=5)
print(f"Weights for Features: {weights}")
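Since the weights are drawn at random, every run of the script produces a different model. If you want reproducible runs while experimenting, you can seed NumPy's random generator first - an optional addition that isn't part of the original demo:

np.random.seed(42)  # optional: makes the random weights (and applicant data below) repeatable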

We'll build the linear model now; it represents our predictive model. It computes the dot product of the input features x and the weights, and that dot product is the score representing an applicant's creditworthiness.

def linear_model(x):
    return np.dot(x, weights)

It's time to wrap our linear function in TrustyAI's Model class, preparing it for explanation.

model = Model(linear_model)

Let us create a random sample of data for an applicant. The data is an array of five random numbers (each representing a feature like annual income, number of open accounts, etc.). We then feed this data to our model to get predicted_credit_score.

applicant_data = np.random.rand(1, 5)
predicted_credit_score = model(applicant_data)
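If you'd like a quick sanity check at this point, you can print the raw score the model returned (its exact type can vary with the TrustyAI version, so don't rely on a specific shape):

print(f"Predicted credit score: {predicted_credit_score}")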

Once this is done, the crucial part comes in: we initialize the LimeExplainer. Here, samples=1000 sets how many perturbed data points LIME generates around the input to build its local explanation, and normalise_weights=False keeps the resulting feature weights unnormalised.

lime_explainer = LimeExplainer(samples=1000, normalise_weights=False)
lime_explanation = lime_explainer.explain(
    inputs=applicant_data,
    outputs=predicted_credit_score,
    model=model)

We then use this explainer to explain our model's prediction on the applicant's data. The lime_explanation object holds the results.

And then we display the explanation as a dataframe:

print(lime_explanation.as_dataframe())

Based on the predicted_credit_score, we provide a summary: a positive score indicates the applicant is likely to be approved, while a negative score indicates they are not.
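In code, that summary is a simple check on the sign of the score (you'll find the same snippet in the full listing below):

print("Summary of the explanation:")
if predicted_credit_score > 0:
    print("The applicant is likely to be approved for a loan.")
else:
    print("The applicant is unlikely to be approved for a loan.")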

And finally, we loop through our features and their respective weights, printing them out for clarity.

print("Feature weights:")
for feature, weight in zip(["Annual Income", "Number of Open Accounts", "Number of times Late Payment in the past", "Debt-to-Income Ratio", "Number of Credit Inquiries in the last 6 months"], weights):
  print(f"{feature}: {weight:.2f}")

And that is it! You can now find the complete code below!

import numpy as np
from trustyai.model import Model
from trustyai.explainers import LimeExplainer

# Define weights for the linear model.
weights = np.random.uniform(low=-5, high=5, size=5)
print(f"Weights for Features: {weights}")

# Simple linear model
def linear_model(x):
    return np.dot(x, weights)

model = Model(linear_model)

# Sample data for an applicant
applicant_data = np.random.rand(1, 5)
predicted_credit_score = model(applicant_data)

lime_explainer = LimeExplainer(samples=1000, normalise_weights=False)
lime_explanation = lime_explainer.explain(
    inputs=applicant_data,
    outputs=predicted_credit_score,
    model=model)

print(lime_explanation.as_dataframe())

# Interpretation
print("Summary of the explanation:")
if predicted_credit_score > 0:
    print("The applicant is likely to be approved for a loan.")
else:
    print("The applicant is unlikely to be approved for a loan.")

# Display weights
print("Feature weights:")
features = ["Annual Income", "Number of Open Accounts", "Number of times Late Payment in the past", "Debt-to-Income Ratio", "Number of Credit Inquiries in the last 6 months"]
for feature, weight in zip(features, weights):
    print(f"{feature}: {weight:.2f}")

Interpretation of the Output:

Running the code gives us the following output:

[Screenshot of the console output: the randomly generated feature weights, the LIME saliency dataframe, and the loan-approval summary]

These weights are influential in shaping the model's decision. In our run, for instance, "Annual Income" received a weight of -2.56, suggesting that a higher annual income would lower the creditworthiness score in this model. That is a rather unexpected result, and one Jane might want to reassess. Since the weights are randomly generated, your values will differ.

Additionally, with the help of the LimeExplainer, we obtain the saliency of each feature. A higher absolute value of saliency indicates a stronger influence of that feature on the decision.
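If you'd like to rank the features by influence, you can sort the explanation by absolute saliency. The following is a minimal sketch that assumes as_dataframe() returns a single pandas DataFrame with a "Saliency" column; column names and structure can differ between TrustyAI versions, so inspect the dataframe printed earlier before relying on it:

# Minimal sketch: rank features by absolute saliency.
# Assumes a single DataFrame with a "Saliency" column (version-dependent).
df = lime_explanation.as_dataframe()
df["abs_saliency"] = df["Saliency"].abs()
print(df.sort_values("abs_saliency", ascending=False))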

Conclusion:

Through TrustyAI, Jane not only developed a predictive model but also successfully interpreted its decisions, ensuring compliance with financial regulations. This tutorial underscores the importance of interpretability in machine learning models and showcases how developers can harness TrustyAI to bring transparency to their solutions.

Developers keen on adopting TrustyAI should consider its vast range of capabilities that go beyond LIME, offering a comprehensive suite of tools to make AI/ML models trustworthy. As data-driven decisions become ubiquitous, tools like TrustyAI will become indispensable, ensuring a balance between model accuracy and transparency.

Like the blog? Do hit a Like, send me some unicorns, and don't forget to share it with your friends!
