Ethical considerations and regulatory challenges in data-driven finance and credit assessment
Synopsis
Data-driven decision making is changing how financial institutions understand their clients, manage risk, and create value. First-party data offered by customers has become a primary assessment asset that companies acquire and control. Over the last ten years, credit assessment of loan applications has entered the era of automated scoring systems, which rely primarily on raw transaction data obtained from banks and on audiovisual data collected via mobile apps. Such data-driven scoring has substantially improved the accuracy of risk identification. However, competitive advantage has shifted toward control over data, with data intermediaries taking the leading role. A level playing field therefore requires broad data access and collaborative scoring. Algorithmic transparency is hindered both by general model opacity and by business secrecy. These developments challenge existing regulation: the mechanistic fragility of automated scoring calls for new prudential rules. By advancing a research agenda that combines technical and ethical expertise, academics, practitioners, and authorities can collaborate to develop regulatory sandboxes and associated frameworks.
A literature review identifying opportunities, harms, and ethical considerations in financial and credit assessment yields a structured discussion of harms and their association with big-data applications. Automated decision making enables the processing of second-hand data, raising concerns about incompleteness, inaccuracy, irrelevance, and illegality. This in turn gives rise to proxies that reconstruct excluded protected attributes, to narrower decisions with restricted options, and to a greater risk of biased decisions. It is proposed to arrange ethical principles and risks in layers covering injustice, discrimination, servitude, and humiliation. Biased supervised learning processes implicitly suppress dissent, undermining agency and welfare. Data convergence enables easy extraction of personal insights, while identical scores across providers harm competition. For such contexts, harm prioritization, risk interpretation, and mitigation measures are proposed, alongside detection methods that apply more lenient definitions of fairness in two competitive markets under controlled manipulation.
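To make the notion of a "lenient" fairness definition concrete, the following sketch (not from the source; group labels, threshold, and data are hypothetical) checks the four-fifths disparate-impact ratio, a comparatively permissive group-fairness criterion, on toy credit-approval decisions:

```python
# Illustrative sketch (assumptions: binary approve/deny decisions per group,
# the lenient "four-fifths" disparate-impact rule as the fairness criterion).

def approval_rate(decisions):
    """Fraction of applicants approved (decision == 1)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_a, decisions_b):
    """Ratio of the lower group's approval rate to the higher group's.
    A value >= 0.8 passes the lenient 'four-fifths' rule."""
    ra, rb = approval_rate(decisions_a), approval_rate(decisions_b)
    low, high = min(ra, rb), max(ra, rb)
    return low / high if high > 0 else 1.0

# Hypothetical automated-scoring outcomes for two applicant groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # approval rate 0.75
group_b = [1, 0, 1, 0, 1, 0, 1, 0]   # approval rate 0.50

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 3))   # 0.50 / 0.75 = 0.667 -> fails even the lenient rule
print(ratio >= 0.8)      # False
```

A stricter criterion (e.g., exact demographic parity) would require the two approval rates to be equal; the ratio form shows why the four-fifths rule is the more lenient test.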
Federal and national regulators are urged to identify potential harms of AI preemptively. Regulation should cover all actors and harmonize definitions, aiming at white-box AI models with minimal parameterization. AI pipelines should be audited for data privacy, with semi-automated detection and counterfactual testing. FinTechs may be certified as Trusted AI Providers, while extensive duties are imposed on all actors across the pipeline.
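The counterfactual testing mentioned above can be sketched as follows (an illustrative example, not the source's method; the scoring rule, attribute names, and approval threshold are hypothetical): flip a protected attribute in each applicant record and flag cases where the model's decision changes.

```python
# Illustrative sketch (assumptions: a toy scoring model, a 'gender' attribute
# as the protected feature, and approval at score >= 50).

def toy_score(applicant):
    """Hypothetical scoring rule; 'gender' should be irrelevant but is not."""
    score = applicant["income"] / 1000 + applicant["history_years"] * 2
    if applicant["gender"] == "F":   # deliberate flaw, for the demo
        score -= 1
    return score

def counterfactual_flags(model, applicants, attr, values):
    """Return applicants whose approval (score >= 50) changes when
    the protected attribute `attr` is swapped between `values`."""
    flagged = []
    for a in applicants:
        other = dict(a)
        other[attr] = values[1] if a[attr] == values[0] else values[0]
        if (model(a) >= 50) != (model(other) >= 50):
            flagged.append(a)
    return flagged

applicants = [
    {"income": 40000, "history_years": 5, "gender": "F"},  # 49 vs 50: flagged
    {"income": 60000, "history_years": 3, "gender": "M"},  # 66 vs 65: not
]
print(len(counterfactual_flags(toy_score, applicants, "gender", ("F", "M"))))  # 1
```

An auditor running such a check does not need access to the model's internals, only query access, which is why counterfactual testing is attractive when business secrecy limits transparency.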