Sohini Pattanayak

Interpreting Loan Predictions with TrustyAI

Part 2: A Developer’s Guide


The previous blog gave an overview of the TrustyAI use case and set the goal for today's tutorial. If you'd like a refresher, you can revisit it here - Read here


Let’s get started now!


Once you have your environment ready with your demo.py file open, we’ll first import all the necessary libraries for this tutorial -


Python
import numpy as np
from trustyai.model import Model
from trustyai.explainers import LimeExplainer

In the first three lines, we're importing necessary libraries:

  • numpy: A library in Python used for numerical computations. Here, it will help us create and manipulate arrays for our linear model.

  • Model: This class from TrustyAI wraps our linear model, allowing it to be used with various explainers.

  • LimeExplainer: The main attraction! LIME (Local Interpretable Model-Agnostic Explanations) is a technique to explain predictions of machine learning models.

Now, we'll define a set of weights for our linear model using the numpy function np.random.uniform(). The weights are drawn uniformly at random between -5 and 5, one for each of our five features, and they determine how much each feature influences the creditworthiness decision.

Python
weights = np.random.uniform(low=-5, high=5, size=5)
print(f"Weights for Features: {weights}")

We'll build the linear model now; it represents our predictive model. It calculates the dot product between the input features x and the weights, producing a score that represents the creditworthiness of an applicant.


Python
def linear_model(x):
    return np.dot(x, weights)

It's time to wrap our linear function with TrustyAI's Model class, preparing it for explanation.

Python
model = Model(linear_model)

Let us create a random sample of data for an applicant. The data is an array of five random numbers (each representing a feature like annual income, number of open accounts, etc.). We then feed this data to our model to get a predicted_credit_score.

Python
applicant_data = np.random.rand(1, 5)
predicted_credit_score = model(applicant_data)

Now comes the crucial part: we initialize the LimeExplainer with specific parameters.



Python
lime_explainer = LimeExplainer(samples=1000, normalise_weights=False)
lime_explanation = lime_explainer.explain(
    inputs=applicant_data,
    outputs=predicted_credit_score,
    model=model)

We then use this explainer to explain our model's prediction on the applicant's data. The lime_explanation object holds the results.


And then we display the explanation -


Python
print(lime_explanation.as_dataframe())

Based on the predicted_credit_score, we provide a summary. If the score is positive, the applicant is likely to be approved; otherwise, they are unlikely to be approved.
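
In code, this summary is just a conditional on the predicted score (it reappears in the complete listing further down):

Python
print("Summary of the explanation:")
if predicted_credit_score > 0:
    print("The applicant is likely to be approved for a loan.")
else:
    print("The applicant is unlikely to be approved for a loan.")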


And finally, we loop through our features and their respective weights, printing them out for clarity.

Python
print("Feature weights:")
for feature, weight in zip(["Annual Income", "Number of Open Accounts", "Number of times Late Payment in the past", "Debt-to-Income Ratio", "Number of Credit Inquiries in the last 6 months"],
                           weights):
    print(f"{feature}: {weight:.2f}")

And that is it! You can now find the complete code below!


Python
import numpy as np
from trustyai.model import Model
from trustyai.explainers import LimeExplainer

# Define weights for the linear model
weights = np.random.uniform(low=-5, high=5, size=5)
print(f"Weights for Features: {weights}")

# Simple linear model
def linear_model(x):
    return np.dot(x, weights)

model = Model(linear_model)

# Sample data for an applicant
applicant_data = np.random.rand(1, 5)
predicted_credit_score = model(applicant_data)

lime_explainer = LimeExplainer(samples=1000, normalise_weights=False)
lime_explanation = lime_explainer.explain(
    inputs=applicant_data,
    outputs=predicted_credit_score,
    model=model
)

print(lime_explanation.as_dataframe())

# Interpretation
print("Summary of the explanation:")
if predicted_credit_score > 0:
    print("The applicant is likely to be approved for a loan.")
else:
    print("The applicant is unlikely to be approved for a loan.")
    
# Display weights
print("Feature weights:")
features = ["Annual Income", "Number of Open Accounts", "Number of times Late Payment in the past", "Debt-to-Income Ratio", "Number of Credit Inquiries in the last 6 months"]

for feature, weight in zip(features, weights):
    print(f"{feature}: {weight:.2f}")

Interpretation of the Output:

Running the code prints the randomly generated feature weights, the LIME explanation dataframe, and the approval summary. Because the weights are drawn at random, the exact numbers differ from run to run; the discussion below refers to one such run.
These weights shape the model's decision. For instance, in our run "Annual Income" received a weight of -2.56, suggesting that a higher annual income would actually lower the creditworthiness score in this model. That is a rather unexpected result, and it highlights an area Jane might want to reassess.


Additionally, with the help of the LimeExplainer, we obtain the saliency of each feature. A higher absolute value of saliency indicates a stronger influence of that feature on the decision.
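
If you want to rank the features programmatically, you can apply ordinary pandas operations to the explanation dataframe. The sketch below sorts features by the absolute value of their saliency; note that the "Saliency" column name is an assumption, and some TrustyAI versions return a dictionary of dataframes (one per output) rather than a single dataframe, so check the structure printed by as_dataframe() above and adjust accordingly.

Python
# Minimal sketch: rank features by the strength of their influence.
# NOTE: the "Saliency" column name is assumed here; confirm it via df.columns.
df = lime_explanation.as_dataframe()
df["abs_saliency"] = df["Saliency"].abs()
print(df.sort_values("abs_saliency", ascending=False))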


Conclusion


Through TrustyAI, Jane not only developed a predictive model but also successfully interpreted its decisions, ensuring compliance with financial regulations. This tutorial underscores the importance of interpretability in machine learning models and showcases how developers can harness TrustyAI to bring transparency to their solutions.


Developers keen on adopting TrustyAI should consider its vast range of capabilities that go beyond LIME, offering a comprehensive suite of tools to make AI/ML models trustworthy. As data-driven decisions become ubiquitous, tools like TrustyAI will become indispensable, ensuring a balance between model accuracy and transparency.



