Ensuring data governance, lineage, explainability, and auditability in artificial intelligence-driven financial models
Synopsis
Advances in AI offer unprecedented capability for tackling the complex decision environments that the financial services industry faces today. Breakthrough achievements in deep learning and natural language processing, among other areas, are enabling solutions that previously could not have been imagined, let alone attempted. The impetus to build innovative products and services on complex AI-driven models, however, underscores an equally pressing need to ensure regulatory safety and soundness, effective risk and control management, and, above all, the trust of stakeholders. While traditional statistical and empirical methods lack the predictive power to handle the complexity and volume of modern financial services activity, customers, investors, and regulators may not trust new AI-driven models as much as they trust traditional credit scoring models. For these stakeholders, undisclosed decision-making processes pose a serious risk in fintech platforms that use machine learning to support algorithmic trading, credit-risk assessment, and fraud detection (Barredo Arrieta et al., 2020; Sicular & Beyer, 2020; Jain & Aggarwal, 2021).
The onus is therefore on organizations deploying AI-driven models to approach product and service innovation with a commensurate level of rigour and caution. Data-driven decision-making in financial services is a two-sided coin. On one side, academics, managers, and practitioners in financial services have called for deeper integration of AI-driven models into existing competency domains such as model validation, model risk management, and investment strategy. On the other, regulators have voiced concern that technology companies are, or soon will be, offering financial services without sufficient controls or expert regulatory oversight to safeguard investor trust. This paper aims to offer a unique contribution on the second, skeptical side of that debate. In doing so, it provides a much-needed foundation on which the pro-innovation side can vet the feasibility of such models for various fintech and financial-market applications (Veale & Edwards, 2018; Weller, 2019).