1. Trends and policy frameworks for AI in finance

Artificial Intelligence (AI) is a key set of technologies powering digital transformation with tremendous potential to improve productivity and innovation. AI systems are being deployed rapidly in the financial sector.

AI in the financial sector can help improve customer experiences, rapidly identify investment opportunities and potentially extend more credit on better terms. Alongside these benefits for firms, customers and societies, AI can create new risks or reinforce existing ones. These risks include entrenching bias; a lack of explainability in financial decisions that affect an individual’s well-being; new forms of cyber-attack; and the automation of jobs before society has adjusted to the changes. The myriad uses of AI technology call for balanced policy approaches that can support AI development and adoption while mitigating risks.

AI differs from other technologies by the fact that it can “perceive and interact with its environment” and do so with “varying degrees of autonomy” (OECD, 2019[1]). Taking these distinctive features into account, this chapter provides an introduction to AI in finance and proposes three different approaches to frame the public policy debate so that businesses, institutions and societies can reap the benefits of AI.

AI is a general-purpose technology that is seeing rapid uptake in many industrial sectors, including transport, agriculture, marketing and advertising, healthcare, and finance and insurance. At the same time, digital technologies have enabled the tracking and monitoring of AI trends and developments across sectors in near real time. The “Trends and data” pillar of the OECD.AI Policy Observatory provides a collection of timely indicators that illustrate the uptake of AI technologies in different sectors, including business and finance. As illustrated in Figure 1.1, AI research publications in the financial sector increased dramatically after the year 2000, stabilised over the period 2015-2018 and have boomed again since 2019, led by the United States, the European Union and China.

Data on the supply of and demand for AI skills can illustrate national industrial profiles, inform a country’s digital strategy, and uncover educational and labour policy priorities. For instance, the supply of AI skills in a particular country and sector can be proxied by self-declared skills in LinkedIn profiles. On average, a comparatively high proportion of people working in the financial sector in India, the United States and Canada report having AI skills (Figure 1.2).

Financial and AI skills coexist in several domains; digital security is a case in point. Given the sensitivity of financial and insurance-related data – including personally identifiable information and health data – digital security competencies are in high demand in the finance and insurance sector. Analysis of digital security job postings across all sectors in 16 countries shows the top competencies that companies are looking for in this area (Figure 1.3). Some of these competencies – including encryption, cryptocurrency and blockchain – are commonly associated with the development of FinTech solutions and financial innovation. Others – such as algorithms, programming languages, swarm intelligence and fuzzy sets1 – relate to AI. Together with competencies such as audit and regulatory compliance, these postings reflect the increasingly important role played by AI technologies – particularly natural language processing – in verifying transactions, codifying compliance rules and decreasing banks’ legal compliance costs.

Venture capital (VC) investment offers another useful vantage point from which to proxy AI development. VC investments can provide context on a country’s entrepreneurial activity and sectoral specialisation. As shown in Figure 1.4a, VC investments in AI start-ups have risen steeply in the United States in recent years, and have resumed growth in China after declining in 2019. While the number of VC investments in AI start-ups has been consistently higher in the United States than in China – more than twice as many in 2020 – the median size of VC investments in Chinese start-ups has been considerably higher than in the United States since 2016 (Figure 1.4b). Multiple mega-investments of more than USD 100 million in China’s capital-intensive mobility and autonomous vehicles industry support this finding.

In addition, AI technologies are being used in virtually all sectors of the economy, leading to a great diversity of systems. While the speed and scale of adoption vary across industries and firm sizes (OECD, 2019[2]), AI-powered applications have expanded beyond digital sectors to sectors like transportation, marketing, healthcare, finance and retail. As shown in Figure 1.5a, total VC investment in AI start-ups across all sectors increased more than twenty-eight-fold between 2012 and 2020.2

The financial and insurance sector has consistently ranked within the top 10 industries by the amount of VC investment in AI start-ups, with a total of over USD 4 billion worldwide in 2020 alone (Figure 1.5a). That same year, almost 65% of VC investments in the financial and insurance sector went to AI start-ups in the United States, following a dramatic increase over the previous three years. In contrast, other countries have experienced a decline in VC investments in the financial and insurance sector, notably China (an 84% decrease from 2018 to 2020) and the United Kingdom (a 70% decrease from 2019 to 2020) (Figure 1.5b).

Recent large VC recipients for AI in the financial sector include US-based start-up Stripe, which develops and provides financial infrastructure solutions that enable companies to accept online payments, including a suite of modern, machine learning-based tools for fraud detection and prevention. They also include UK-based OakNorth, which operates an AI-integrated platform that provides online banking solutions such as personal savings accounts, loans and business credit financing services.3

In view of its ability to perceive, learn from and interact with its environment with varying degrees of autonomy, AI promises substantial transformative benefits but also creates risks. As such, AI is a growing policy priority for all stakeholders (OECD, 2019[3]). While many countries already have dedicated AI strategies, AI remains a relatively new and challenging field for policy that requires adequate tools. One of the challenges faced by policy makers is keeping abreast of the rapid innovations taking place in the field.4 AI techniques have been evolving and diversifying apace into what is now described as a “family of technologies” (European Commission, 2021[29]). AI includes systems that use human-generated representations (symbolic models), systems that identify patterns and extract knowledge from data (machine learning models), and systems that combine both (hybrid models). AI can also be used to perform a variety of tasks, from identifying and categorising data; to detecting patterns, outliers or anomalies; to predicting future behaviours and courses of action (OECD, forthcoming[4]).

Framing the policy debate on AI thus requires agile frameworks that can capture technological developments and apply to a wide diversity of systems, subject to different contexts and sector dynamics. To this end, this section provides a brief introduction to AI systems and explores how three complementary frameworks can help structure AI policy discussions in finance (Figure 1.6):

  • the OECD AI Principles;

  • the AI system lifecycle stages;

  • the OECD framework for the classification of AI systems, based on the system’s context; data/input; AI model; and task/output.

In November 2018 the OECD and its group of experts on AI set out to characterise AI systems. The description aimed to be understandable, technically accurate, technology-neutral and applicable to short- and long-term time horizons. Importantly, it informed the development of the OECD AI Principles (OECD, 2019[3]). The resulting description of an AI system is broad enough to encompass many of the definitions of AI commonly used by the scientific, business and policy communities (Box 1.1).

As AI continues to diffuse apace, the diversity of AI systems increases: AI can power systems in different contexts (e.g. in different industries; for a variety of business functions; interacting with consumers or regulators; with users that are AI experts or not), using different types of data (e.g. private or public; structured or unstructured) and AI models (e.g. symbolic; probabilistic) to perform a range of tasks (e.g. forecasting; recognition; optimisation). These four dimensions – i) context; ii) data and input; iii) AI model; and iv) task and output – are the foundation of the OECD Framework for the Classification of AI Systems (OECD, forthcoming[4]). Recognising that different types of AI systems raise very different policy opportunities and challenges, the classification framework helps users classify AI systems according to their potential impact on values and policy areas covered by the OECD AI Principles (see section 1.3.4).

These four dimensions of an AI system can be linked to the AI system lifecycle. The AI system lifecycle typically involves six specific phases: planning and design; data collection and processing; model building and interpretation; verification and validation; deployment; and operation and monitoring. Figure 1.8 illustrates how these phases can be mapped to the four dimensions of the classification framework. The AI system lifecycle phases often take place in an iterative manner and are not necessarily sequential. Importantly, the decision to retire an AI system from operation may occur at any point during the operation and monitoring phase.

Compared to traditional system development lifecycles, the AI system lifecycle is unique in that AI systems can interact with their (real or virtual) environment and “learn” to improve in a dynamic process. In addition, the phases of the AI system lifecycle can be non-linear, and AI systems can operate with varying degrees of autonomy (OECD, 2019[6]).

The OECD AI Principles, the AI system lifecycle and the OECD classification framework provide three relevant perspectives to assess the impacts of AI systems across different policy domains. In a context of high complexity and fast-changing technological trends, greater understanding of these impacts can set the course for informed AI policy design and implementation in the financial sector and beyond. Rather than taking a one-size-fits-all approach, policies may target specific principles, types of AI systems and/or activities in the AI system lifecycle to seize opportunities for innovation or to address risks.

An AI system differs from other computer systems in its ability to affect its environment with varying levels of autonomy (Box 1.1) and, in some cases, to evolve and learn “in the field”. AI creates significant economic and social opportunities by changing how people work, learn, interact and live, but it also raises distinctive challenges for policy, including risks to human rights and democratic values.

The OECD AI Principles were adopted in May 2019 as the first intergovernmental standard focusing on policy issues that are specific to AI. The Principles aim to be implementable and flexible enough to stand the test of time (OECD, 2019[3]). They include five high-level values-based principles and five recommendations for national policies and international co-operation (Table 1.1). Together, the Principles offer a framework to think through the core values and policies that enable the deployment and use of trustworthy AI.

The first five Principles propose values to guide trustworthy AI deployment. Chief among them are the promotion of sustainable and inclusive societies (Principle 1.1) and respect for human rights and democratic values (Principle 1.2). While AI can be leveraged for social good and contribute to achieving the Sustainable Development Goals (SDGs) in areas such as education, health, transport, agriculture and environment, it also poses the risk of transferring biases from the analogue into the digital world and of infringing on fundamental rights and freedoms (OECD, 2019[2]). For instance, AI can raise issues related to individuals’ right to privacy (e.g. AI systems inferring information about a person without consent) and individual self-determination (e.g. if users cannot opt out from AI input that influences their choices). This calls for transparent and explainable AI systems (Principle 1.3) and clear accountability standards (Principle 1.5, Chapter 3). Some AI systems could also raise safety and security concerns (Principle 1.4): for example, connected products such as driverless cars need to be sufficiently secure to prevent malicious attacks that would put the physical safety of their passengers at risk.

In addition to being grounded in specific values, fostering the development and deployment of trustworthy AI calls for the design and implementation of tailored policies in various areas. This includes encouraging private investment and directing public investment towards AI research (Principle 2.1); fostering the infrastructure and mechanisms needed for AI, including computational power and data trusts (Principle 2.2); and designing an enabling policy environment to encourage innovation and competition for trustworthy AI (Principle 2.3). Trustworthy AI also requires labour policies that protect workers and build human capacity (Principle 2.4) to ensure the workforce, including regulators (Chapter 5), has the necessary skills for the jobs of the future (OECD, 2019[3]). Finally, given the global nature of AI, designing effective AI policy requires international co-operation (Principle 2.5), including on aspects like competition policy (Chapter 4).

Different environments raise significantly different challenges and the relevance of each Principle varies from one industrial sector to the next.

In the context of financial services, AI contributes to inclusive growth, sustainable development and well-being (Principle 1.1) through applications such as financial technology lending that widen people’s access to financial services and lower the costs faced by consumers (OECD, 2017[7]). At the same time, AI applications can raise fairness concerns if they exclude certain populations from essential financial services such as mortgage loans or pension plans (Principle 1.2).

Transparency and explainability (Principle 1.3) are key to trustworthy AI deployment in the financial sector: in customer-facing applications, they enable customers to understand and possibly challenge particular outcomes (Financial Conduct Authority, 2020[8]). Transparency focuses on disclosing when AI is being used; on enabling people to understand how an AI system is developed, trained, operated and deployed; and on providing meaningful information and clarity about what information is provided and why. Explainability means enabling people affected by the outcome of an AI system to understand how it was arrived at (OECD, 2019[3]). Both are critical to enable auditing and compliance. Although explainability is an area of ongoing research, certain types of AI models – such as neural networks, which encode abstract mathematical relationships between factors – can be extremely complex and difficult to understand and monitor, posing challenges for explainability (OECD, 2019[2]). Such models are commonly called “black box” systems.
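
One technique discussed in the explainability literature is the global surrogate: an interpretable model trained to mimic the predictions of a black-box system so that reviewers can inspect approximate decision rules. The Python sketch below illustrates the idea on synthetic data; the model choices, features and fidelity check are illustrative assumptions, not a method prescribed by the sources cited here.

```python
# Minimal sketch of a "global surrogate" for a black-box model (illustrative).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 4))                  # synthetic input features
y = ((X[:, 0] > 0) & (X[:, 2] < 1)).astype(int)  # synthetic target

# Stand-in for an opaque model whose internal logic is hard to interpret.
black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, y)

# Train a small, readable tree to reproduce the black box's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box it explains.
fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=["f0", "f1", "f2", "f3"]))
```

A high-fidelity surrogate gives auditors human-readable rules, though it explains the black box only approximately.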

Many financial services are considered critical infrastructure, whose “interruption or disruption would have serious consequences on: 1) the health, safety, and security of citizens; 2) the effective functioning of services essential to the economy and society, and of the government; or 3) economic and social prosperity more broadly” (OECD, 2019[2]; OECD, 2019[9]). Critical infrastructure is accompanied by heightened risk considerations and ex ante regulations. In addition, financial services often process vast amounts of sensitive personal data. As such, ensuring the digital security, safety and robustness of AI systems (Principle 1.4) is particularly important in this sector. Clear accountability standards (Principle 1.5) for the developers and operators of AI systems in financial services are key to building trust in AI used in finance (Bank of England, 2019[10]).

Governments can foster trustworthy AI in finance by incentivising research that addresses societal considerations, such as widening access to financial services or improving system-wide risk management (Principle 2.1). At the same time, AI adoption in the financial sector requires infrastructure, including access to sufficient computational capacity and affordable high-speed broadband networks and services (Principle 2.2).

AI uptake in a highly regulated sector such as finance could benefit from a policy environment that is flexible enough to keep up with technological and business model developments and promote innovation, yet remains safe and provides legal certainty (Principle 2.3). Regulatory sandboxes are increasingly being leveraged in the financial sector to this effect (see section 1.4.2). Labour market policies are also important to reskill and upskill finance practitioners, regulators and supervisors so that they can adapt to new technologies and practices enabled by AI diffusion (Principle 2.4; see Chapter 5).

Lastly, given the global nature of the financial sector (OECD, 2012[11]), international co-operation (Principle 2.5) can help set a level playing field for the safe deployment of AI and prevent systemic risk in the international financial system (European Banking Federation, 2019[33]).

Another way to consider the policy implications of AI-enabled systems is to segment them by phase in the AI system lifecycle. AI applications in the financial sector include customer service chatbots, algorithmic financial planning, recommender systems for personalised financial products, automated check verification, and assessments for loan applications or insurance claims processing. Each of these applications can be analysed using a lifecycle approach. For example, the following illustrates policy implications at each phase of the AI system lifecycle for AI-based fraud detection systems, which use machine learning on past transaction data to flag suspicious operations:

Planning and design: In fraud detection, banking professionals must weigh the financial loss of a fraudulent transaction against the potential disruption to customers of inaccurately flagging a valid transaction. Implementing an AI-enabled fraud detection system may require assessments of IT system compatibility and workforce readiness. The choice of model may be shaped by transparency, explainability and accountability requirements, as well as by regulatory constraints and the availability of appropriate data. Additionally, the level of human involvement in the process should be determined.

These and other trade-offs should be addressed in the planning and design phase by clearly identifying the goals of the fraud detection system at the outset.
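
To make this trade-off concrete, it can be framed as an expected-cost calculation over the alert threshold. The following sketch uses synthetic data and hypothetical cost figures (the costs, model and threshold grid are all illustrative assumptions):

```python
# Illustrative sketch: choosing a fraud-alert threshold by expected cost.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic transaction features and fraud labels (1 = fraud).
X = rng.normal(size=(10_000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=10_000) > 2.5).astype(int)

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]  # estimated probability of fraud

COST_MISSED_FRAUD = 500.0  # hypothetical average loss per undetected fraud
COST_FALSE_ALARM = 20.0    # hypothetical customer-disruption cost per wrong flag

def expected_cost(threshold: float) -> float:
    """Total cost of false alarms plus missed fraud at a given threshold."""
    flagged = scores >= threshold
    false_alarms = np.sum(flagged & (y == 0))
    missed_fraud = np.sum(~flagged & (y == 1))
    return COST_FALSE_ALARM * false_alarms + COST_MISSED_FRAUD * missed_fraud

thresholds = np.linspace(0.01, 0.99, 99)
best = min(thresholds, key=expected_cost)
print(f"cost-minimising threshold: {best:.2f}")
```

In practice the cost figures would come from the institution's own loss and customer-impact data, and the evaluation would use held-out rather than training data.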

Data collection and processing: Fraud detection systems generally collect vast amounts of data containing sensitive information, including personal and geolocation data. To mitigate risks to users, data collection, storage and processing must comply with privacy and digital security standards and regulation. Relevant criteria should be in place to ensure that data are of good quality – representative, complete and with low levels of “noise” – and appropriate for fraud detection purposes.

Data quality and appropriateness have important policy implications for human rights and fairness, as well as for the robustness of fraud detection systems. The data collection and processing phase must include actions to detect and mitigate potential biases, for instance by ensuring that fraud predictions are not influenced by “protected characteristics” – such as race and gender – to avoid biased outcomes.
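
As one illustration of such a check, the sketch below computes a simple demographic parity gap – the spread in fraud-flag rates across groups defined by a protected characteristic. The column names, toy data and 5% tolerance are hypothetical:

```python
# Sketch of a simple group-fairness check (demographic parity gap).
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, flag_col: str) -> float:
    """Difference between the highest and lowest flag rate across groups."""
    rates = df.groupby(group_col)[flag_col].mean()
    return float(rates.max() - rates.min())

# Toy decision log; in practice this would come from the production system.
decisions = pd.DataFrame({
    "gender":  ["F", "M", "F", "M", "F", "M"],
    "flagged": [1, 0, 0, 0, 1, 1],
})

gap = demographic_parity_gap(decisions, "gender", "flagged")
if gap > 0.05:  # tolerance chosen purely for illustration
    print(f"Warning: flag rates differ across groups by {gap:.1%}")
```

Metrics like this offer only one lens on bias: a protected attribute can also influence predictions indirectly through correlated variables, so checks on proxies are equally important.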

Model building and interpretation: Fraud can be detected using several different algorithmic approaches. For example, many fraud detection systems combine supervised and unsupervised machine learning to detect known and unknown – previously unseen – anomalies in transactions, respectively. However, unsupervised machine learning techniques can pose a challenge to transparency and explainability by making the output of the fraud detection system more difficult to understand. More complex models are in general harder to explain, although the relationship between complexity and explainability is not necessarily linear.
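
A minimal sketch of such a hybrid set-up is shown below: a supervised classifier scores transactions against labelled historical fraud, while an isolation forest flags novel outliers. The synthetic data, features and thresholds are illustrative assumptions.

```python
# Sketch of combining supervised (known fraud) and unsupervised
# (previously unseen anomalies) detection in one decision rule.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(5_000, 4))      # historical transaction features
y_train = (X_train[:, 0] > 2).astype(int)  # labels for known fraud patterns

supervised = RandomForestClassifier(n_estimators=50, random_state=1).fit(X_train, y_train)
unsupervised = IsolationForest(contamination=0.01, random_state=1).fit(X_train)

def is_suspicious(x: np.ndarray) -> bool:
    """Flag a transaction if it matches known fraud or looks anomalous."""
    x = x.reshape(1, -1)
    known_fraud = supervised.predict_proba(x)[0, 1] > 0.5  # illustrative cut-off
    novel_anomaly = unsupervised.predict(x)[0] == -1       # -1 means outlier
    return bool(known_fraud or novel_anomaly)

print(is_suspicious(rng.normal(size=4)))
```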

Fraud detection systems that iterate and evolve over time in response to new data – changing their behaviour in unforeseen ways while in production – may have robustness, fairness and liability implications.

Verification and validation: Inaccurate fraud detection could lead to erroneous outcomes that range from blocking innocent clients’ accounts to taking legal action against them. It is thus necessary to verify the accuracy and performance of the system against false positives and false negatives. This requires human-in-the-loop mechanisms to vet an AI system’s outcome as well as rigorous testing and calibration of the algorithms, including assessing outcome variations when the relevant variables in the training data are modified. The system should be accurate and produce consistent outputs: two similar-looking fraud cases should result in similar outcomes.

Adversarial evaluation – a technique that tests the robustness of a model by intentionally feeding it deceptive data – should also be conducted during the verification and validation phase to test the security of the system. Additionally, a fraud detection system’s performance should be tested for bias.
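
The sketch below illustrates two of the checks described above on a toy model: false positive and false negative rates from the confusion matrix, and a consistency test that perturbs inputs slightly to confirm that similar-looking cases receive similar outcomes. Data, model and noise scale are illustrative assumptions.

```python
# Sketch of validation checks: error rates and outcome consistency.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(2_000, 3))
y = (X.sum(axis=1) > 1).astype(int)  # synthetic "fraud" labels
model = LogisticRegression().fit(X, y)

# False positive / false negative rates.
tn, fp, fn, tp = confusion_matrix(y, model.predict(X)).ravel()
print(f"FPR = {fp / (fp + tn):.3f}, FNR = {fn / (fn + tp):.3f}")

# Consistency: small input perturbations should rarely change the outcome.
X_perturbed = X + rng.normal(scale=0.01, size=X.shape)
consistency = np.mean(model.predict(X) == model.predict(X_perturbed))
print(f"prediction consistency under small perturbations: {consistency:.3f}")
```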

Deployment: Deploying the fraud detection system into live production entails implementing it in a real-world setting. This involves checking its robustness, security and compatibility with legacy systems, as well as ensuring regulatory compliance and evaluating its impact on users. Deployment also has organisational change implications, including workforce reskilling and upskilling.

Operation and monitoring: The level of autonomy with which the fraud detection system operates poses different policy considerations. Highly autonomous fraud detection systems – with no human involvement – may put human rights and fundamental values at risk and raise liability concerns. Moreover, fraud detection systems may automate tasks that had previously been – or are currently being – executed by humans, affecting both job quantity and quality in the financial industry.

Additionally, fraud detection systems should be constantly monitored for fairness, security, transparency and explainability. Issues identified should be corrected by the AI actors involved at the relevant lifecycle phase (including data collectors, developers, modellers, and system integrators and operators). Retiring a fraud detection system from operation should be possible at any point during the operation and monitoring phase.
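
As an illustration, ongoing monitoring might track fraud-flag rates per customer group against a reference baseline and alert when they drift beyond a tolerance. The grouping, baseline values and tolerance below are hypothetical:

```python
# Sketch of a periodic fairness/stability monitor for a deployed system.
import pandas as pd

def monitor_flag_rates(log: pd.DataFrame, baseline: dict, tolerance: float = 0.02):
    """Return (week, group, rate) alerts where flag rates drift from baseline."""
    weekly = log.groupby(["week", "group"])["flagged"].mean()
    return [(week, group, rate)
            for (week, group), rate in weekly.items()
            if abs(rate - baseline[group]) > tolerance]

# Toy decision log: one row per scored transaction.
log = pd.DataFrame({
    "week":    [1, 1, 1, 1, 2, 2, 2, 2],
    "group":   ["A", "A", "B", "B", "A", "A", "B", "B"],
    "flagged": [0, 0, 0, 0, 1, 1, 0, 0],
})

print(monitor_flag_rates(log, baseline={"A": 0.01, "B": 0.01}))
```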

Alongside the frameworks provided by the OECD AI Principles and the AI system lifecycle, AI policy considerations can be informed by the type of AI system in question, including the specific context in which it is applied (OECD, forthcoming[4]).

Given the multitude of AI systems and their rapid evolution, differentiating these systems according to characteristics that are relevant to policy can be challenging. In response, the OECD’s Committee on Digital Economy Policy, through its OECD.AI Network of Experts, has developed a classification framework for AI systems (OECD, forthcoming[4]) to help policy makers differentiate various types of AI systems according to their potential impact on public policy in areas covered by the OECD AI Principles.

The Framework organises the different characteristics of AI systems along four key dimensions: the context in which a system operates; the data and input it uses; the AI model; and the task and output it performs (Figure 1.9). The Framework then links each of these characteristics to relevant policy considerations. In doing so, it seeks to provide a user-friendly tool that helps policy makers assess the opportunities and risks presented by specific system types and tailor regulation and policy accordingly.

First, the context in which the AI system is deployed is particularly relevant to policy, chiefly because the sector and business function are central parameters for policy design. An AI system deployed in finance raises different policy considerations than a system deployed in healthcare. Within a given sector, the business function performed by a system provides further nuance: AI systems used to aid the hiring of financial professionals raise fairness considerations, while systems used for compliance or information security raise issues around robustness and digital security. Other elements of context – such as the breadth of the system’s deployment or the degree of AI expertise of its users – are also important to consider when assessing the potential risks or impact of an AI system (OECD, forthcoming[4]).

Second, identifying the type of data or input used by AI systems provides useful insights for designing the appropriate policy response. For instance, structured or tabular data are easier to document and audit than unstructured data (e.g. free text, sound, images and video). This relates to transparency and accountability concerns, both relevant to AI deployment in the financial sector. If used to train applications that set credit scores or risk premia, datasets that are not representative of an institution’s existing and potential client base could be incompatible with fair access to essential financial services. As noted in Chapter 2, the provenance of the data and the way they are collected can have specific privacy implications. Two common examples of sensitive data in the financial sector are geolocation data collected by digital devices and credit card transaction data.

Third, the type of AI model also has consequences for policy: certain models are less transparent and explainable (e.g. neural networks that form mathematical relationships between factors that can be impossible for humans to understand), making compliance and auditing more complex (OECD, forthcoming[4]). In the context of financial services, the model type thus has implications for regulatory oversight and risk management. It also has implications for the robustness of the system: some machine learning models can fail in settings that are meaningfully different from those encountered in training (see Chapter 2). To illustrate, AI-powered trading systems trained on long time series may not perform well during one-off events, such as the COVID-19 outbreak that spread worldwide in 2020. This phenomenon – a model’s target variable changing over time in unforeseen ways – is known as “concept drift”.
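
One common way to detect such drift in practice is to compare the live distribution of a model score or input against its training baseline, for example with the Population Stability Index (PSI). The sketch below is illustrative; the 0.2 alert threshold is a widely used rule of thumb rather than a value taken from the sources cited here.

```python
# Sketch of a drift check using the Population Stability Index (PSI).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a live sample, using quantile bins."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e = np.histogram(expected, bins=edges)[0] / len(expected)
    a = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # distribution seen in training
live_scores = rng.normal(0.8, 1.0, 10_000)   # shifted, e.g. after a market shock

if psi(train_scores, live_scores) > 0.2:  # common rule-of-thumb alert level
    print("Drift detected: consider recalibrating or retraining the model.")
```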

Lastly, the tasks performed by an AI system imply different priorities for policy. For instance, AI systems that personalise financial offerings without letting users opt out can threaten individuals’ right to self-determination or privacy (OECD, forthcoming[4]). AI systems performing recognition tasks – such as the biometric identifiers commonly used in FinTech applications – may raise concerns in relation to privacy, as well as robustness and security in the case of adversarial attacks. As in other sectors, the level of autonomy of AI systems deployed in the financial sector will have direct implications for job quantity, quality and security, as such systems assist humans with certain tasks or replace them altogether through automation (e.g. in fraud detection, trading or customer service).

Countries are at different stages of their national AI strategies and policies. Canada, Finland and Japan were among the first to develop national AI strategies, setting targets and allocating budgets in 2017. Denmark, France, Germany, Korea, and the United States followed suit in 2018 and 2019. In 2020, countries continued to announce national AI strategies, including Bulgaria, Egypt, Hungary, Poland, and Spain. Brazil launched its national AI strategy in 2021. Several countries are still in the consultation and development process, such as Argentina, Chile, and Mexico (OECD, 2021[12]).

Policies relating to AI in finance include i) policies that promote the financial sector as a strategic area of focus in a country’s national AI strategy and support the use of AI systems in this sector; and ii) new regulations and guidance to address risks associated with the deployment of AI systems in the financial sector, including the provision of experimentation environments to foster innovation while securing consumer safeguards.

Building on the OECD.AI Policy Observatory’s database5 of national AI strategies and policies, this section provides an overview of how national AI strategies and policies seek to foster trustworthy AI in the financial sector.

National AI strategies and policies outline how countries plan to invest in AI to build or leverage their comparative advantage. Countries tend to prioritise a handful of economic sectors, including transportation, energy, health and agriculture (OECD, 2021[12]). Other service-oriented sectors, such as the financial sector, are also starting to be featured in national AI policies.

A few countries in which the financial sector accounts for a large share of GDP – including the United Kingdom, the United States, and Singapore – have articulated their ambition to promote the deployment and use of AI in the provision of financial services to maintain or increase their national competitiveness in this area. For example, the United Kingdom has invested in the use of AI in the financial services sector through the Next Generation Services Industrial Strategy Challenge. The challenge provides GBP 20 million (EUR 23 million) to create a network of collaborative Innovation Research Centres that develop AI and data-driven technologies in sectors such as accountancy, finance, insurance, and legal industries (UKRI, 2021[13]).

Singapore launched the Artificial Intelligence and Data Analytics (AIDA) Grant as part of the Financial Sector Technology and Innovation scheme under the Financial Sector Development Fund, to strengthen support for large-scale innovation projects and build a stronger pipeline of Singaporean talent in FinTech (MAS, 2019[14]). In August 2020, the Monetary Authority of Singapore (MAS) announced that it would commit SGD 250 million (EUR 153 million) until 2023 to accelerate technology- and innovation-driven growth in the financial sector (MAS, 2020[15]). MAS will raise the maximum funding quantum for all qualifying AI projects under the AIDA Grant from SGD 1 million to SGD 1.5 million (EUR 922 000), to give financial institutions greater impetus to implement ground-breaking and innovative AI solutions.

In the United States, the Department of the Treasury is pursuing policies that promote the adoption of innovative tools such as AI and machine learning to empower people to make more informed decisions about their short-term and long-term financial goals (U.S. Department of the Treasury, 2018[16]).

Regulatory agencies are increasingly seeking ways to address the risks associated with the deployment of AI systems in the financial sector. These include risks to consumers’ financial inclusion and stability. They also include risks relating to privacy; unlawful discrimination; unfair, deceptive or abusive acts or practices; and the security and reliability of financial institutions.

National and international regulatory approaches to address these risks are at an early stage. To date, financial regulators have responded to AI developments in various ways: i) mapping and gathering information on financial institutions’ use of AI; ii) responding to developments in the financial (FinTech) and insurance (InsurTech) technology ecosystems by providing supervisory clarity and guidance for financial institutions and businesses using AI; iii) establishing regulatory sandboxes and innovation hubs to spur innovation in the financial sector (OECD, 2020[17]); and iv) developing specific regulations for the use of high-risk AI systems in the financial sector (see Figure 1.10). Additionally, some financial regulators are starting to use AI technologies for regulatory oversight and supervision (e.g. SupTech, see Chapter 5).

Box 1.2 discusses a selection of national AI regulatory approaches seeking to address risks and challenges related to the use of AI systems in the financial services sector.

In April 2021, the European Commission (EC) published a legislative proposal for a co-ordinated European approach to address the human and ethical implications of AI. The draft legislation follows a horizontal, risk-based regulatory approach that differentiates between uses of AI that create i) minimal risk; ii) low risk; iii) high risk; and iv) unacceptable risk, for which the EC proposes a strict ban. With regard to the financial sector, the legislative proposal states that “AI systems [that are] used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services”. The proposal requires that high-risk AI systems – including credit-scoring algorithms – be subject to a risk management system, be continuously maintained and documented throughout their lifetime, and enable interpretability of their outcomes and human oversight. It also encourages European countries to establish AI regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight (European Commission, 2021[29]).

At a global level, national regulators take part in the Global Financial Innovation Network (GFIN), a “global sandbox” initiative led by the UK’s FCA to help firms that operate in more than one country co-ordinate with different regulators and enable cross-border testing among sandboxes. The GFIN, which includes more than 50 financial authorities, central banks and international organisations, reflects the widespread desire to provide FinTech firms with an environment to test new technologies, including AI. Despite these efforts, there is still a lack of harmonised criteria – for instance, on what constitutes innovativeness or “genuine innovation” – and further cohesion is needed in terms of a common set of legal standards (Muñoz Ferrandis, 2021[30]).

References

[10] Bank of England (2019), Managing Machines: the governance of artificial intelligence, https://www.bankofengland.co.uk/-/media/boe/files/speech/2019/managing-machines-the-governance-of-artificial-intelligence-speech-by-james-proudman.pdf?la=en&hash=8052013DC3D6849F91045212445955245003AD7D.

[18] Bank of England and FCA (2019), Machine learning in UK financial services, https://www.bankofengland.co.uk/-/media/boe/files/report/2019/machine-learning-in-uk-financial-services.pdf?la=en&hash=F8CA6EE7A5A9E0CB182F5D568E033F0EB2D21246.

[21] BIS (2020), Inside the regulatory sandbox: effects on fintech funding, https://www.bis.org/publ/work901.htm (accessed on 15 May 2021).

[25] CFPB (2021), Innovation at the Bureau, https://www.consumerfinance.gov/rules-policy/innovation/.

[27] Datatilsynet (2021), Sandbox for responsible artificial intelligence, https://www.datatilsynet.no/en/regulations-and-tools/sandbox-for-artificial-intelligence/.

[28] DFSA (2019), Recommendations when using supervised machine learning, https://www.dfsa.dk/Supervision/Fintech/Machine_learning_recommendations.

[33] European Banking Federation (2019), EBF position paper on AI in the banking industry, https://www.ebf.eu/wp-content/uploads/2020/03/EBF-AI-paper-_final-.pdf.

[29] European Commission (2021), “Proposal for a Regulation of the European Parliament and of the Council laying down Harmonised Rules on Artificial Intelligence and amending certain Union Legislative Acts (Artificial Intelligence Act)”, COM(2021) 206, https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence.

[19] FCA (2021), Artificial Intelligence Public-Private Forum - Second meeting (Minutes), https://www.bankofengland.co.uk/-/media/boe/files/minutes/2021/aippf-minutes-february-2021.pdf?la=en&hash=9D5EA2D09F4D3B8527768345D472D9253906ADA1 (accessed on 6 May 2021).

[20] FCA (2021), Regulatory sandbox, https://www.fca.org.uk/firms/innovation/regulatory-sandbox.

[8] Financial Conduct Authority (2020), “AI transparency in financial services - why, what, who, when?”, Insight, https://www.fca.org.uk/insight/ai-transparency-financial-services-why-what-who-and-when.

[34] Financial Stability Board (2017), Artificial intelligence and machine learning in financial services: Market developments and financial stability implications, https://www.fsb.org/wp-content/uploads/P011117.pdf.

[31] FSB (2020), BigTech Firms in Finance in Emerging Market and Developing Economies (accessed on 12 January 2021).

[24] FTC (2021), “Aiming for truth, fairness, and equity in your company’s use of AI”, Business Blog, Elisa Jillson, https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.

[23] FTC (2020), Using Artificial Intelligence and Algorithms, https://www.ftc.gov/news-events/blogs/business-blog/2020/04/using-artificial-intelligence-algorithms.

[36] MAS (2021), Veritas Initiative Addresses Implementation Challenges in the Responsible Use of Artificial Intelligence and Data Analytics, https://www.mas.gov.sg/news/media-releases/2021/veritas-initiative-addresses-implementation-challenges (accessed on 6 May 2021).

[15] MAS (2020), MAS Commits S$250 Million to Accelerate Innovation and Technology Adoption in Financial Sector, https://www.mas.gov.sg/news/media-releases/2020/mas-commits-s$250-million-to-accelerate-innovation-and-technology-adoption-in-financial-sector (accessed on 18 May 2021).

[14] MAS (2019), Artificial Intelligence and Data Analytics Grant, https://www.mas.gov.sg/schemes-and-initiatives/Artificial-Intelligence-and-Data-Analytics-AIDA-Grant (accessed on 6 May 2021).

[26] MAS (2018), Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector, https://www.mas.gov.sg/-/media/MAS/News-and-Publications/Monographs-and-Information-Papers/FEAT-Principles-Updated-7-Feb-19.pdf (accessed on 6 May 2021).

[30] Muñoz Ferrandis, C. (2021), Fintech Sandboxes and Regulatory Interoperability, https://law.stanford.edu/2021/04/14/fintech-sandboxes-and-regulatory-interoperability/ (accessed on 6 May 2021).

[12] OECD (2021), State of Implementation of the OECD AI Principles: Insights from National AI Policies, https://one.oecd.org/document/DSTI/CDEP(2020)15/REV1/en/pdf.

[17] OECD (2020), The Impact of Big Data and Artificial Intelligence (AI) in the Insurance Sector.

[2] OECD (2019), Artificial Intelligence in Society, OECD Publishing, Paris, https://dx.doi.org/10.1787/eedfee77-en.

[5] OECD (2019), Measuring the Digital Transformation: A Roadmap for the Future, OECD Publishing, https://doi.org/10.1787/9789264311992-en.

[3] OECD (2019), “Recommendation of the Council on Artificial Intelligence”, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.

[9] OECD (2019), Recommendation of the Council on Digital Security of Critical Activities, OECD, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0456.

[1] OECD (2019), Scoping the OECD AI Principles, OECD Publishing, https://doi.org/10.1787/d62f618a-en.

[6] OECD (2019), “Scoping the OECD AI Principles: Deliberations of the Expert Group on Artificial Intelligence at the OECD (AIGO)”, in OECD Digital Economy Papers, No. 291, https://doi.org/10.1787/d62f618a-en.

[35] OECD (2018), “AI: Intelligent machines, smart policies: Conference summary”, OECD Digital Economy Papers, No. 270, OECD Publishing, Paris, https://dx.doi.org/10.1787/f1a650d9-en.

[7] OECD (2017), How’s Life? 2017: Measuring Well-being, OECD Publishing, Paris, https://dx.doi.org/10.1787/how_life-2017-en.

[32] OECD (2017), The Next Production Revolution: Implications for Governments and Business, OECD Publishing, https://dx.doi.org/10.1787/9789264271036-en.

[11] OECD (2012), Systemic Financial Risk, OECD Reviews of Risk Management Policies, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264167711-en.

[4] OECD (forthcoming), OECD Framework for the Classification of AI systems - Preliminary findings, OECD Publishing, Paris.

[39] OECD (forthcoming), Venture Capital Investments in Artificial Intelligence: Analysing trends in VC in AI companies from 2012 through 2020, Digital Economy Paper Series, OECD Publishing, Paris.

[37] OECD.AI (2021), “Database of national AI strategies - Singapore”, Powered by EC/OECD (2021), STIP Compass database, https://www.oecd.ai/dashboards/policy-initiatives/2019-data-policyInitiatives-24572.

[38] Prudential Regulation Authority (2018), The Prudential Regulation Authority’s approach to banking supervision, https://www.bankofengland.co.uk/-/media/boe/files/prudential-regulation/approach/banking-approach-2018.pdf?la=en&hash=3445FD6B39A2576ACCE8B4F9692B05EE04D0CFE3.

[16] U.S. Department of the Treasury (2018), A Financial System that Creates Economic Opportunities, https://home.treasury.gov/sites/default/files/2018-07/A-Financial-System-that-Creates-Economic-Opportunities---Nonbank-Financi....pdf (accessed on 6 May 2021).

[13] UKRI (2021), Next generation services challenge, https://www.ukri.org/our-work/our-main-funds/industrial-strategy-challenge-fund/artificial-intelligence-and-data-economy/next-generation-services-challenge/ (accessed on 6 May 2021).

[22] US Federal Register (2021), Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, Including Machine Learning, https://www.federalregister.gov/documents/2021/03/31/2021-06607/request-for-information-and-comment-on-financial-institutions-use-of-artificial-intelligence.

Notes

← 1. Fuzzy set theory permits the gradual assessment of the membership of elements in a set, in contrast to classical set theory, where the membership of elements in a set is assessed in binary terms, i.e. an element either belongs or does not belong to the set. By allowing for intermediate possibilities – which is similar to how humans make decisions – fuzzy sets provide additional flexibility. Fuzzy sets are commonly used in AI applications, including natural language processing and expert systems.
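
For illustration, a fuzzy membership function for “large transaction amount” could look like the following sketch, where membership rises gradually between two cut-offs instead of jumping from 0 to 1 (the cut-off values are arbitrary assumptions):

```python
# Illustrative fuzzy membership: degree to which an amount is "large".
def membership_large_amount(amount: float, low: float = 1_000.0, high: float = 10_000.0) -> float:
    """0 below `low`, 1 above `high`, linear in between (a ramp function)."""
    if amount <= low:
        return 0.0
    if amount >= high:
        return 1.0
    return (amount - low) / (high - low)

for amount in (500, 4_000, 12_000):
    print(amount, membership_large_amount(amount))  # 0.0, 0.333..., 1.0
```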

← 2. Contrastingly, sectors like the media, business support and healthcare are particularly dynamic in terms of number of deals made (OECD, forthcoming[39]).

← 3. “Top start-ups per country and industry” visualisation, accessible at https://oecd.ai/data-from-partners.

← 4. For instance, recent patent data show that AI-related inventions have accelerated since 2010 and continue to grow at a much faster pace than is observed on average across all patent domains (OECD, 2019[5]).

← 5. For more information, please visit www.oecd.ai/dashboards.
