Trust and transparency: Explainability in artificial intelligence for financial advisory
Synopsis
The rapid development of AI models enormously improves efficiency, automates labor-intensive processes, and augments human decision-making, within and beyond finance. AI-based prediction and risk assessment models are widely acknowledged to provide better accuracy, yet they raise concerns about trust and transparency. Many AI solutions involve complex black-box algorithms designed to be highly flexible, taking many inputs and learning complex relationships among them. At the same time, the opacity of such black-box models significantly hampers their wider acceptance, even though there is strong evidence that they can deliver better predictions than white-box models. Black-box models in finance are a classic example of the problems that arise from a lack of trust and transparency. In financial business decisions in particular, decision-makers are asked to trust algorithmic output even when the underlying algorithm remains, figuratively, a black box to them. In AI-based credit scoring, for instance, the assessment of an individual should entail not only the computation of a risk score by the AI model but also an explanation of why that individual is regarded as a risk. Failing to provide such an explanation may harm not only the individual applying for a loan or insurance but also the reputation of the corporation offering those services. In summary, explainability and transparency of AI-based risk models are highly relevant: a fair decision (the basis on which a risk likelihood is attributed to an individual) matters just as much as a correct one (the predicted likelihood should approximate the events that actually occur).
This is challenging, since both the scale of the data involved and the number of variables are growing rapidly. Awareness of the black-box problem is particularly high in the banking sector, which is built on knowledge of and trust in clients within long-term relationships. In this sector, AI tools can only be adopted if they provide sufficient transparency. Even though each individual decision is based on its own automated, recurrent, risk-adjusted AI strategy, users and regulators raise ‘why’ questions within the decision process: Why did the model signal a warning for a specific $1 million trade? Explaining this means showing which body of rules the AI system took into account when carrying out the automated assessment.
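As a purely illustrative sketch of the kind of per-decision explanation discussed above, the snippet below trains a simple credit-risk scorer and prints, for one applicant, the score together with each feature's contribution to it. The feature names and data are hypothetical, and a transparent logistic-regression model is used here only because its per-feature contributions are directly readable; it is not the method proposed in this work.

```python
# Minimal sketch: per-decision explanation for a credit-risk score.
# Feature names and data are hypothetical; a linear model is used only
# because its additive per-feature contributions are easy to read off.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "late_payments", "credit_history_years"]

# Synthetic applicants: default risk driven mainly by debt ratio and late payments.
X = rng.normal(size=(500, 4))
y = (0.9 * X[:, 1] + 1.2 * X[:, 2] - 0.5 * X[:, 3]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(applicant: np.ndarray) -> None:
    """Print the predicted risk plus each feature's additive contribution
    to the log-odds, so the decision can be justified feature by feature."""
    z = scaler.transform(applicant.reshape(1, -1))[0]
    contributions = model.coef_[0] * z          # per-feature log-odds terms
    score = model.predict_proba(z.reshape(1, -1))[0, 1]
    print(f"predicted default probability: {score:.2f}")
    for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
        print(f"  {name:>22}: {c:+.2f}")

explain(X[0])
```

A real deployment would of course attach such explanations to a far more complex black-box model, for example via model-agnostic attribution methods, but the principle of returning a decision together with its decisive factors is the same.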