Artificial intelligence governance in finance: Ethics, bias, security, and regulatory compliance in artificial intelligence systems

Authors

Jeevani Singireddy
Intuit Inc, California, United States

Synopsis

Artificial intelligence (AI) systems are emerging as consequential decision-makers in finance, from automated trading, fraud detection, and anti-money-laundering screening to credit risk models and customer-facing chatbots. Expenditure on AI systems is expected to keep rising, even amid an acute shortage of data scientists and algorithm developers, driven by the appeal of big data and the ability of AI systems to extract complex patterns from it. Ensemble methods such as random forests are prominent in credit risk and fraud detection, while automated trading increasingly employs methods such as recurrent neural networks that leverage a wealth of market data. These applications inherently affect human welfare: access to loans, trust in the stability of payment systems, and safety from financial fraud. The outcomes AI systems produce are therefore being questioned: do algorithms allocate loans fairly, or do they steer applicants toward particular products and prices? Many problems in AI systems arise from a lack of observability, explainability, documentation, and scrutiny.
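To make the random-forest reference above concrete, the following is a minimal, self-contained sketch of the bagged-ensemble idea behind fraud scoring: many simple threshold rules are fit on bootstrap resamples and combined by majority vote. The transaction records, feature names, and thresholds are entirely synthetic and illustrative, not taken from the chapter.

```python
import random

# Synthetic transactions: (amount, is_foreign, hour_of_day) with fraud label.
DATA = [
    ((2400.0, 1, 3), 1), ((15.0, 0, 14), 0),
    ((980.0, 1, 2), 1),  ((42.0, 0, 11), 0),
    ((3100.0, 1, 23), 1), ((8.5, 0, 16), 0),
    ((1250.0, 0, 4), 1), ((60.0, 0, 9), 0),
]

def train_stump(sample):
    """Fit a one-feature threshold rule minimising errors on one bootstrap sample."""
    best = None
    for f in range(3):
        for t in sorted({x[f] for x, _ in sample}):
            errs = sum((x[f] >= t) != bool(y) for x, y in sample)
            if best is None or errs < best[0]:
                best = (errs, f, t)
    _, f, t = best
    return lambda x: int(x[f] >= t)

def train_forest(data, n_trees=25, seed=0):
    """Bag stumps over bootstrap resamples; predict fraud by majority vote."""
    rng = random.Random(seed)
    stumps = [train_stump([rng.choice(data) for _ in data]) for _ in range(n_trees)]
    return lambda x: int(sum(s(x) for s in stumps) > n_trees / 2)

forest = train_forest(DATA)
print(forest((2900.0, 1, 1)))  # large overnight foreign transaction -> 1 (flagged)
print(forest((12.0, 0, 13)))   # small domestic daytime purchase -> 0 (cleared)
```

A production random forest would use full decision trees with random feature subsets (e.g., scikit-learn's implementation); the stump ensemble above only sketches the bagging-and-voting mechanism that the synopsis attributes to credit risk and fraud models.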

Risks appear in multiple dimensions: ethical, legal, financial, and reputational. Where historical data reflects human bias, models learn and systematize those biases and may discriminate on grounds never exposed to scrutiny. When an algorithm's decision cannot be explained, those affected are left defenseless against its apparent arbitrariness. This is troubling given the abstractness of such decisions: a human officer who denies a loan for 'lack of proof of income' can substantiate the reasoning (e.g., risk of falsified income, fragility to shocks), whereas a black-box model offers no account at all. These risks commingle and reinforce one another, often at both macro and micro scales. There is no uniform definition of 'fairness', 'explainability', or 'interpretability' across institutions, companies, social classes, or countries. The functions that should govern AI, including law, regulation, audit, data governance, testing, and codes of conduct, often do not talk to one another, and neither do the experts on interpretability, testing, maintenance, and compliance. With models trained on biased data and black-box machine-learning methods escaping scrutiny, unowned risk proliferates. At the same time, fragmented oversight, lacking unified assessments, testing regimes, and uniform evaluation criteria, means that audits and examinations fail to establish ground truth about the evidence on which decisions rest.
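One way such bias can be surfaced, under one deliberately narrow definition of fairness (echoing the point above that no uniform definition exists), is a demographic-parity check: compare approval rates across applicant groups. The decisions and group names below are synthetic and purely illustrative.

```python
# Hypothetical loan decisions for two applicant groups (synthetic data):
# 1 = approved, 0 = denied.
approvals = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 approved
}

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def demographic_parity_gap(groups):
    """Absolute gap between the highest and lowest group approval rates."""
    rates = [approval_rate(d) for d in groups.values()]
    return max(rates) - min(rates)

gap = demographic_parity_gap(approvals)
print(f"approval-rate gap: {gap:.3f}")  # 0.750 - 0.375 = 0.375
```

A large gap does not by itself prove unlawful discrimination, and demographic parity conflicts with other fairness criteria such as equalized odds; which metric a governance regime mandates is exactly the kind of unresolved definitional question the synopsis raises.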

Forthcoming

26 April 2025

How to Cite

Singireddy, J. (2025). Artificial intelligence governance in finance: Ethics, bias, security, and regulatory compliance in artificial intelligence systems. In Smart Finance: Harnessing Artificial Intelligence to Transform Tax, Accounting, Payroll, and Credit Management for the Digital Age (pp. 107-119). Deep Science Publishing. https://doi.org/10.70593/978-93-49910-40-9_8