In Perspective, June 13, 2025

BPI Navigating AI in Banking


I. Introduction

Banking organizations[1] have a proven track record of successfully deploying new technologies while continuing to operate in a safe and sound manner and adhering to regulatory requirements.[2] Throughout the years, banking organizations and financial institutions have digitized, gone online, transitioned to mobile services, automated processes, moved infrastructure into the cloud and adopted many other technologies, including machine learning, a form of AI. Many of these new technologies have presented new risks or amplified pre-existing risks, yet banking organizations have been able to manage these risks effectively and evolve to better serve their customers.

Artificial intelligence (AI)—or the ability of a computer to learn or engage in tasks typically associated with human cognition—has received a great deal of attention recently from the public, businesses and government officials. In October 2023, the Biden Administration issued its “Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence” (the AI Executive Order),[3] outlining the Administration’s eight principles for governing the development and use of AI, which include, among other things, ensuring the safety and security of AI technology, promoting innovation and competition and protecting consumers and privacy. The AI Executive Order also directs various government agencies to take actions to promote those goals and affirms that “[h]arnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks.”[4] More recently, in January 2024, the House Financial Services Committee announced the formation of a bipartisan working group to “explore how [AI] is impacting the financial services and housing industries.”[5] AI has also received attention within the banking industry, with banking organizations and their regulatory agencies exploring the potential benefits and potential risks of AI and how the industry may continue to evolve in a safe and sound manner as the technology continues to advance.

Although attention to AI has increased markedly with the broad availability of relatively new technologies like large language models (LLMs), AI is not new. The conceptual foundations of AI were first articulated in scientific literature as early as the late 1940s,[6] and the term “artificial intelligence” was itself coined in 1955.[7] One of the challenges of any discussion of AI is determining the scope of what is meant by “AI.” In this paper, the terms “AI,” “AI model” and “generative AI” have the meanings used in the AI Executive Order[8] and can include a wide range of potential models, processes and use cases that incorporate AI.[9]

Banking organizations may use AI in connection with a variety of activities, including fraud detection, cybersecurity, customer service (such as chatbots) and automated digital investment advising. As with other new technologies, banking organizations have implemented and governed these and other uses of AI within existing risk management frameworks in accordance with applicable regulations, guidance and supervisory expectations. In fact, the integration of AI in the form of machine learning within the financial services sector traces its origins to the 1980s,[10] when it was primarily employed to identify and counteract fraudulent activities. It has expanded its application to a variety of use cases since.[11] This paper describes some of the guidance relevant to the use of AI, while recognizing that there is no “one-size-fits-all” approach to AI risk management. Risk management practices will vary depending on the AI technology, application, context, expected outputs and potential risks specific to the individual organization. In addition to the existing guidance, banking organizations also recognize that existing laws are applicable to the use of AI in the various contexts in which it may be employed and take those laws into account when considering particular use cases.[12]

II. Harnessing AI: Governance and Risk Management for Resilience and Innovation

AI is one of the latest of many technologies that have been, or are in the process of being, implemented by banking organizations. AI has a wide range of potential capabilities, is rapidly evolving and may be incorporated in numerous and highly diverse use cases, creating both opportunities and potential risks for banking organizations. This paper outlines the governance and risk management principles already established by the banking agencies that provide an overarching framework for banking organizations to implement AI in a safe, sound and “fair” manner. The comprehensive approach to risk management required by the banking agencies allows banking organizations to utilize their risk management practices to address evolving technologies and associated potential risks. This is particularly important in the AI context given the speed at which AI technologies are developing. Banking organizations must be able to act quickly to identify, evaluate, monitor and manage risks posed by emerging AI technologies, and use currently available risk management processes to do so.

This paper makes two points: (1) while AI’s applications will differ based on the nature of the AI and the applicable use case and business context, banking organizations’ existing governance and risk management principles provide a framework for consistency, coordination and adaptability in the face of the opportunities and potential risks posed by AI; and (2) given the dynamic nature of AI and its potential use cases, continued partnership with the banking and financial sector agencies is necessary to ensure that the sector’s approach to AI remains responsive and aligned with regulations, guidance and the broader objectives of financial-market safety and soundness and consumer protection.

Responsible implementation of AI benefits from a deliberate approach from regulators and other stakeholders as all parties continue to learn how best to address challenges and take advantage of opportunities in this space. That approach must balance the opportunities and potential risks presented by AI, as well as the need of banking organizations and regulators to consider evolving circumstances. It is in everyone’s best interests for AI tools to be implemented in a safe, sound and fair manner, enabling banking organizations and their customers to benefit from new AI capabilities while appropriately mitigating risks. Those goals are best served by banking organizations and regulators working together to share information and identify benefits and risks, as well as appropriate mitigation strategies. BPI[13] and its technology policy division, BITS,[14] look forward to continuing to work with its members, the federal banking agencies and other U.S. government offices to facilitate future collaboration and consultations as the AI landscape evolves.[15]

To lay a common groundwork for future conversations, this paper highlights some elements of enterprise risk management (ERM), including risk governance, model risk management, data risk management and third-party risk management, that provide a framework within which banking organizations can identify, assess, manage and monitor the potential risks that may be posed by emerging AI technologies. Through these frameworks, banking organizations have the tools to effectively manage risks posed by AI, even while AI, its use cases and the application of these frameworks to AI are evolving.

III. Embracing Emerging Benefits and Understanding Potential Risks

Integrating AI into the banking sector offers potential benefits, including processing information and detecting patterns with greater efficiency and effectiveness by augmenting human capabilities. AI’s ability to analyze vast, complex datasets can reveal trends and anomalies beyond human detection, enhance decision-making and potentially reduce bias. AI tools employing machine learning (ML) can continuously learn and adapt, improving their pattern recognition capabilities. Even so, AI also has the potential to exacerbate biases within a model or data set, which can produce inaccurate or misleading results. Further, the opacity of certain AI models’ methods can make it challenging for users to identify and correct inaccuracies or biases.

The adoption of any new technology requires consideration of its risks and rewards, and banking organizations rely on their robust governance and risk management practices to do so. As BPI has noted in connection with the implementation of other emerging technologies, managing risk is fundamental to the business of banking and it is imperative for banking organizations to assess and manage possible risks and benefits in all aspects of their businesses.[16] Responsible implementation of AI in the banking sector hinges on many factors, including integrating established risk management practices, such as model risk management, risk governance and third-party risk management. This approach to risk management can help to confirm that AI’s performance and outputs meet expectations and allow banking organizations to adapt to evolving risks.

Certain of these established risk management practices will play an important role in light of the unique characteristics of certain AI tools, including validation protocols, thorough testing of model outputs and ongoing monitoring of AI tools to continuously assess model quality, performance drift and robustness. For example, the validation process for an AI tool may benefit from additional or modified human input or intervention. “Human-in-the-loop” validation is useful for many AI tools and is especially important in the specific context of generative AI, given its inherent tendency to hallucinate, or produce false or misleading information presented as fact. AI performance can also be evaluated through metrics, including those that measure performance over time, precision, recall and accuracy, among other things. Such metrics may be assessed through automatic evaluation, human evaluation or a combination of both.

Explainability must also be considered in applying risk management principles, especially for generative AI technology. Fundamentally, explainability refers to the capacity to discern how outputs are generated in a consistent and understandable manner. Many AI models, especially those employing complex algorithms such as deep neural networks, generate outputs for which neither the user nor the developer can easily or comprehensively discern why a given output was produced. Practices around data inputs, decision-making criteria and the weighting of those criteria, assurance review and others are being developed to ensure that validation processes keep pace with the technology. Likewise, the field of explainable AI, which aims to demystify AI models and make their operations more transparent and understandable, is in its early stages and continuing to develop.[17] This includes developing methodologies to trace how AI models process inputs into outputs and to understand the states of the models before and after processing. Evaluation would include, but not be limited to, model evaluation, with a primary focus on overall LLM performance, and system evaluation, with a primary focus on the effectiveness of LLMs in specific use cases.
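To make the metrics discussion above concrete, the following is a minimal, illustrative Python sketch of two pieces of an ongoing monitoring process: the standard validation metrics named in the text (precision, recall and accuracy) and one common, though not mandated, drift measure, the population stability index (PSI), which compares a model’s recent score distribution against a reference window. The function names, the choice of PSI and the thresholds mentioned in the comments are assumptions for illustration, not practices prescribed by any banking agency or by the paper.

```python
import math

def precision_recall_accuracy(y_true, y_pred):
    """Standard binary-classification validation metrics."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    accuracy = (tp + tn) / len(y_true)
    return precision, recall, accuracy

def population_stability_index(reference, recent, bins=10):
    """Population stability index (PSI), one common way to quantify
    drift between a model's reference and recent score distributions.
    Readings above roughly 0.25 are often treated as material drift."""
    lo = min(min(reference), min(recent))
    hi = max(max(reference), max(recent))
    width = (hi - lo) / bins or 1.0

    def share(scores, b):
        # Fraction of scores falling in bin b; the top bin is closed so
        # the maximum score is counted. Floored to avoid log(0) below.
        n = sum(1 for s in scores
                if lo + b * width <= s < lo + (b + 1) * width
                or (b == bins - 1 and s == hi))
        return max(n / len(scores), 1e-6)

    return sum((share(recent, b) - share(reference, b))
               * math.log(share(recent, b) / share(reference, b))
               for b in range(bins))
```

A monitoring process might recompute such metrics on a set cadence and escalate when, say, recall degrades or PSI exceeds an internally chosen limit; the specific cadence, metrics and thresholds would be determined by each organization’s own model risk management framework.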
