Building trust in automated systems through transparent credit risk evaluation models

Authors

Murali Malempati
Mastercard International INC, O'Fallon, USA

Synopsis

An algorithmic culture has emerged in which decisions affecting our daily lives increasingly depend on automated systems such as machine learning. Developers of those systems strive for greater accuracy, while at the same time demands for accountability increase. In many real-life applications, black box systems operate across domains such as finance, health care, transportation, and criminal justice, often leaning towards more complex and less transparent machine learning (ML) models. Stakeholders responsible for automated decisions that can dramatically influence citizens' lives, and regulators overseeing the operation of automated systems, require information on why values are assigned and on how the modelling decisions were made in order to build trust. Financial institutions are at the forefront of this trend and are slowly but creatively adopting these technologies to perform fundamental tasks such as financial audits, risk assessment, fraud detection, and customer scoring.

Credit assessments are essential for financial institutions: they determine whether a financing request should be accepted or rejected. Traditionally, this task is done by risk experts who analyze data on loan applicants and manually produce credit risk reports. In practice, credit assessments can be faster and less prone to human error if they are automated with machine learning techniques. A primary objective throughout the process is building models that can estimate the probability of default of the applicant company, as well as highlighting which characteristics are responsible for this evaluation. Some of these models are black box systems that provide only a single value as output (the probability of default), and no additional information about the data is provided. It is therefore hard for a risk analyst to rely on the estimated output.
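As a minimal sketch of the contrast drawn above, an inherently transparent model such as logistic regression can both estimate a probability of default and expose which characteristics drive that estimate. The feature names and the synthetic data below are illustrative assumptions, not taken from the chapter:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant characteristics (illustrative assumptions).
feature_names = ["debt_to_income", "credit_history_years", "recent_delinquencies"]

# Synthetic training data standing in for a real loan book.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0.0, 1.0, n),    # debt-to-income ratio
    rng.uniform(0.0, 30.0, n),   # years of credit history
    rng.poisson(0.5, n),         # recent delinquencies
])
# Labels generated so that leverage and delinquencies raise default risk.
logits = 4.0 * X[:, 0] - 0.1 * X[:, 1] + 1.2 * X[:, 2] - 1.5
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# A black box would stop at this single number ...
applicant = np.array([[0.6, 5.0, 1]])
pd_estimate = model.predict_proba(applicant)[0, 1]
print(f"Estimated probability of default: {pd_estimate:.2f}")

# ... whereas here each coefficient shows how a characteristic shifts
# the log-odds of default, giving the risk analyst a reason for the score.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

A risk analyst can read the signed coefficients directly: a positive weight on a characteristic means it pushes the applicant towards default, which is exactly the kind of justification a black box score withholds.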

Recently, the transparency and interpretability of machine learning models have attracted increasing attention due to both societal and regulatory pressure across many domains. Among them, financial technology, and therefore credit risk scoring, has significant implications for the economy as a whole and for people's lives. Financial institutions are subject to rigorous guidelines and regulations when addressing credit risk scoring, and transparency of the models is a crucial issue in that context. Despite rigorous requirements for interpretability and explainability, recent advancements in automated credit risk scoring tend to rely on black box algorithms. Consequently, the need arises for more transparent machine learning techniques with the ability to shed light on the credit risk score.

Forthcoming

26 April 2025

How to Cite

Malempati, M. (2025). Building trust in automated systems through transparent credit risk evaluation models. In The Intelligent Ledger: Harnessing Artificial Intelligence, Big Data, and Cloud Power to Revolutionize Finance, Credit, and Security (pp. 149-162). Deep Science Publishing. https://doi.org/10.70593/978-93-49910-16-4_11