Building a Cyber Risk Management Program: Evolving Security for the Digital Age

Cyber risk management is one of the most urgent issues facing enterprises today. This book presents a detailed framework for designing, developing, and implementing a cyber risk management program that addresses your company’s specific needs. Ideal for corporate directors, senior executives, security risk practitioners, and auditors at many levels, this guide offers both the strategic insight and tactical guidance you’re looking for.

You’ll learn how to define and establish a sustainable, defendable cyber risk management program, and the benefits associated with proper implementation. Cyber risk management experts Brian Allen and Brandon Bapst, working with writer Terry Allan Hicks, also provide advice that goes beyond risk management. You’ll discover ways to address your company’s oversight obligations as defined by international standards, case law, regulation, and board-level guidance.

This book helps you:

  • Understand the transformational changes digitalization is introducing, and the new cyber risks that come with them
  • Learn the key legal and regulatory drivers that make cyber risk management a mission-critical priority for enterprises
  • Gain a complete understanding of the four components that make up a formal cyber risk management program
  • Implement or provide guidance for a cyber risk management program within your enterprise

Navigating AI in Banking

I. Introduction

Banking organizations[1] have a proven track record of successfully deploying new technologies while continuing to operate in a safe and sound manner and adhering to regulatory requirements.[2] Throughout the years, banking organizations and financial institutions have digitized, gone online, transitioned to mobile services, automated processes, moved infrastructure into the cloud and adopted many other technologies, including machine learning, a form of AI. Many of these new technologies have presented new risks or amplified pre-existing risks, yet banking organizations have been able to manage these risks effectively and evolve to better serve their customers.

Artificial intelligence (AI)—or the ability of a computer to learn or engage in tasks typically associated with human cognition—has received a great deal of attention recently from the public, businesses and government officials. In October 2023, the Biden Administration issued its “Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence” (the AI Executive Order),[3] outlining the Administration’s eight principles for governing the development and use of AI, which include, among other things, ensuring the safety and security of AI technology, promoting innovation and competition and protecting consumers and privacy. The AI Executive Order also directs various government agencies to take actions to promote those goals and affirms that “[h]arnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks.”[4] More recently, in January 2024, the House Financial Services Committee announced the formation of a bipartisan working group to “explore how [AI] is impacting the financial services and housing industries.”[5] AI has also received attention within the banking industry, with banking organizations and their regulatory agencies exploring the potential benefits and potential risks of AI and how the industry may continue to evolve in a safe and sound manner as the technology continues to advance.

Although attention to AI has increased markedly with the broad availability of relatively new technologies like large language models (LLMs), AI is not new. The conceptual foundations of AI were first articulated in scientific literature as early as the late 1940s,[6] and the term “artificial intelligence” was itself coined in 1955.[7] One of the challenges of any discussion of AI is determining the scope of what is meant by “AI.” In this paper, the terms “AI,” “AI model” and “generative AI” have the meanings used in the AI Executive Order[8] and can include a wide range of potential models, processes and use cases that incorporate AI.[9]

Banking organizations may use AI in connection with a variety of activities, including fraud detection, cybersecurity, customer service (such as chatbots) and automated digital investment advising. As with other new technologies, banking organizations have implemented and governed these and other uses of AI within existing risk management frameworks in accordance with applicable regulations, guidance and supervisory expectations. In fact, the integration of AI in the form of machine learning within the financial services sector traces its origins to the 1980s,[10] when it was primarily employed to identify and counteract fraudulent activities. It has expanded its application to a variety of use cases since.[11] This paper describes some of the guidance relevant to the use of AI, while recognizing that there is no “one-size-fits-all” approach to AI risk management. Risk management practices will vary depending on the AI technology, application, context, expected outputs and potential risks specific to the individual organization. In addition to the existing guidance, banking organizations also recognize that existing laws are applicable to the use of AI in the various contexts in which it may be employed and take those laws into account when considering particular use cases.[12]

II. Harnessing AI: Governance and Risk Management for Resilience and Innovation

AI is one of the latest of many technologies that have been, or are in the process of being, implemented by banking organizations. AI has a wide range of potential capabilities, is rapidly evolving and may be incorporated in numerous and highly diverse use cases, creating both opportunities and potential risks for banking organizations. This paper outlines the governance and risk management principles already established by the banking agencies that provide an overarching framework for banking organizations to implement AI in a safe, sound and “fair” manner. The comprehensive approach to risk management required by the banking agencies allows banking organizations to utilize their risk management practices to address evolving technologies and associated potential risks. This is particularly important in the AI context given the speed at which AI technologies are developing. Banking organizations must be able to act quickly to identify, evaluate, monitor and manage risks posed by emerging AI technologies, and use currently available risk management processes to do so.

This paper discusses that (1) while AI’s applications will differ based on the nature of the AI and the applicable use case and business context, banking organizations’ existing governance and risk management principles provide a framework for consistency, coordination and adaptability in the face of the opportunities and potential risks posed by AI, and (2) given the dynamic nature of AI and the potential use cases, continued partnership with the banking and financial sector agencies is necessary to ensure that the sector’s approach to AI remains both responsive and aligned with regulations, guidance and the broader objectives of financial markets safety and soundness and consumer protections.

Responsible implementation of AI benefits from a deliberate approach from regulators and other stakeholders as all parties continue to learn how best to address challenges and take advantage of opportunities in this space. That approach must balance the opportunities and potential risks presented by AI, as well as the need of banking organizations and regulators to consider evolving circumstances. It is in everyone’s best interests for AI tools to be implemented in a safe, sound and fair manner, enabling banking organizations and their customers to benefit from new AI capabilities while appropriately mitigating risks. Those goals are best served by banking organizations and regulators working together to share information and identify benefits and risks, as well as appropriate mitigation strategies. BPI[13] and its technology policy division, BITS,[14] look forward to continuing to work with their members, the federal banking agencies and other U.S. government offices to facilitate future collaboration and consultations as the AI landscape evolves.[15]

To lay a common groundwork for future conversations, this paper highlights some elements of enterprise risk management (ERM), including risk governance, model risk management, data risk management and third-party risk management, that provide a framework within which banking organizations can identify, assess, manage and monitor the potential risks that may be posed by emerging AI technologies. Through these frameworks, banking organizations have the tools to effectively manage risks posed by AI, even while AI, its use cases and the application of these frameworks to AI are evolving.

III. Embracing Emerging Benefits and Understanding Potential Risks

Integrating AI into the banking sector offers potential benefits, including processing information and detecting patterns with greater efficiency and effectiveness by augmenting human capabilities. The ability of AI to analyze vast, complex datasets can reveal trends and anomalies beyond human detection, enhance decision-making and potentially reduce bias. AI tools employing machine learning (ML) have the ability to continuously learn and adapt, improving their pattern recognition capabilities. Even so, AI also has the potential to exacerbate biases within a model or data set, which can produce inaccurate or misleading results. Further, the opacity of certain AI models’ methods can make it difficult for users to identify and correct inaccuracies or biases.

The adoption of any new technology requires consideration of its risks and rewards, and banking organizations rely on their robust governance and risk management practices to do so. As BPI has noted in connection with the implementation of other emerging technologies, managing risk is fundamental to the business of banking and it is imperative for banking organizations to assess and manage possible risks and benefits in all aspects of their businesses.[16] Responsible implementation of AI in the banking sector hinges on many factors, including integrating established risk management practices, such as model risk management, risk governance and third-party risk management. This approach to risk management can help to confirm that AI’s performance and outputs meet expectations and allow banking organizations to adapt to evolving risks.

Certain of these established risk management practices, including validation protocols, thorough testing of model outputs and ongoing monitoring of AI tools to continuously assess model quality, performance drift and robustness, will play an important role in light of the unique characteristics of certain AI tools. For example, the validation process for an AI tool may benefit from additional or modified human input or intervention. “Human in the loop” validation is useful for many AI tools, and is especially important in the specific context of generative AI due to its tendency to hallucinate, that is, to produce false or misleading information presented as fact. AI performance can also be evaluated through metrics, including those that measure performance over time, precision, recall and accuracy, among other things. Such metrics can be assessed through automated evaluation, human evaluation or a combination of the two. Explainability must also be considered in applying risk management principles, especially for generative AI technology. Fundamentally, explainability refers to the capacity to discern, in a consistent and understandable manner, how outputs are generated. Many AI models, especially those employing complex algorithms such as deep neural networks, generate outputs whose basis neither the user nor the developer can easily or comprehensively discern. Practices around data inputs, decision-making criteria and the weighting of those criteria, assurance review and other areas are being developed to ensure that validation processes keep pace with the technology. Likewise, the field of explainable AI, which aims to demystify AI models and make their operations more transparent and understandable, is in its early stages and continuing to develop.[17] This includes developing methodologies to trace how AI models process inputs into outputs and to understand the states of the models before and after processing. Such evaluation would include, but not be limited to, model evaluation, focused primarily on overall LLM performance, and system evaluation, focused primarily on the effectiveness of LLMs in specific use cases.
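As a minimal illustration of the evaluation metrics mentioned above, precision, recall and accuracy for a binary classifier (for example, a fraud-detection model) can be computed from counts of true and false positives and negatives. The labels and data below are hypothetical, not drawn from any banking organization’s practice:

```python
# Illustrative sketch: precision, recall and accuracy for a binary
# classifier, computed from parallel lists of actual and predicted
# 0/1 labels (1 = flagged as fraud, 0 = legitimate).

def evaluate(actual, predicted):
    """Return (precision, recall, accuracy) for two 0/1 label lists."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # flagged items that were fraud
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # fraud items that were flagged
    accuracy = (tp + tn) / len(actual)                # all correct decisions
    return precision, recall, accuracy

# Hypothetical example data
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]
p, r, a = evaluate(actual, predicted)
print(f"precision={p:.2f} recall={r:.2f} accuracy={a:.2f}")
```

Tracking such metrics over successive evaluation runs is one simple way to surface the performance drift discussed above.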


Navigating AI in the Financial Sector: Practitioners Guide to Explainability

Artificial intelligence is rapidly reshaping the wealth management landscape—from automated trading and personalized portfolio management to sophisticated client analytics. For many firms, including smaller and privately held entities, AI has become a mission-critical component of daily operations. Yet with these opportunities come heightened legal, fiduciary, and reputational risks. This paper explores the legal and regulatory context, examines emerging liabilities, and outlines best practices for establishing robust AI governance frameworks.


Enterprise Security Risk Management: Concepts and Applications

As a security professional, have you found that you and others in your company do not always define “security” the same way? Perhaps security interests and business interests have become misaligned. Brian Allen and Rachelle Loyear offer a new approach: Enterprise Security Risk Management (ESRM). By viewing security through a risk management lens, ESRM can help make you and your security program successful.

In their long-awaited book, based on years of practical experience and research, Brian Allen and Rachelle Loyear show you step-by-step how Enterprise Security Risk Management (ESRM) applies fundamental risk principles to manage all security risks. Whether the risks are informational, cyber, physical security, asset management, or business continuity, all are included in the holistic, all-encompassing ESRM approach which will move you from task-based to risk-based security.

  • How is ESRM familiar? As a security professional, you may already practice some of the components of ESRM. Many of the concepts – such as risk identification, risk transfer and acceptance, crisis management, and incident response – will be well known to you.
  • How is ESRM new? While many of the principles are familiar, the authors have identified few organizations that apply them in the comprehensive, holistic way that ESRM represents – and even fewer that communicate these principles effectively to key decision-makers.
  • How is ESRM practical? ESRM offers you a straightforward, realistic, actionable approach to deal effectively with all the distinct types of security risks facing you as a security practitioner. ESRM is performed in a life cycle of risk management including:
    • Asset assessment and prioritization.
    • Risk assessment and prioritization.
    • Risk treatment (mitigation).
    • Continuous improvement.

Throughout Enterprise Security Risk Management: Concepts and Applications, the authors give you the tools and materials that will help you advance in the security field, whether you are a student, a newcomer, or a seasoned professional. Included are realistic case studies, questions to help you assess your own security program, thought-provoking discussion questions, useful figures and tables, and references for your further reading.

By redefining how everyone thinks about the role of security in the enterprise, your security organization can focus on working in partnership with business leaders and other key stakeholders to identify and mitigate security risks. As you begin to use ESRM, following the instructions in this book, you will experience greater personal and professional satisfaction as a security professional – and you’ll become a recognized and trusted partner in the business-critical effort of protecting your enterprise and all its assets.



Harvard AI Governance Response

The dialogue on artificial intelligence governance is crowded with false choices. The recent paper from Harvard Kennedy School, “Governance at a Crossroads,” provides a pivotal contribution by rightly reframing the debate. It moves us beyond the sterile argument of whether to regulate and toward the far more critical question of how to govern a technology that evolves at an exponential pace.


AI Governance – The Cornerstone of Communal Responsibility

The adoption of generative artificial intelligence (Gen AI) in the financial sector is unlocking significant opportunities for innovation, operational efficiency, stronger resilience and enhanced customer experience. As financial institutions embrace these innovative technologies, they are also proactively addressing the complexities associated with explainability, transparency, interpretability, and trust. By leveraging their existing strengths in risk management and governance, institutions are setting a foundation for responsible and transformative Gen AI implementation.


CRMP Article on CSO Online

The authors of the new Cyber Risk Management Program framework explain how it can set an organization up to better comply with SEC and other disclosure and reporting regulations.
In a landmark enforcement action that has become a transformational moment for CISOs and corporate cybersecurity practices, the US Securities and Exchange Commission (SEC) charged the SolarWinds Corporation and its CISO, Timothy Brown, with fraud and financial disclosure failures related to their cyber risk management practices. This case, stemming from the infamous SUNBURST cyberattack, highlights the grave consequences of inadequate cybersecurity risk management and disclosure practices. The development and implementation of a defined cyber risk management program will be necessary to protect against this new liability.

The SUNBURST attack, attributed to Russian state-sponsored hackers, exploited vulnerabilities in SolarWinds’ network to insert malicious code into the company’s Orion software, affecting over 18,000 global customers. Internal communications revealed that Brown and SolarWinds employees were aware of significant cybersecurity deficiencies, including issues in developing secure products and access control failures. Despite this knowledge, SolarWinds posted what the SEC said were misleading statements about its cybersecurity practices, suggesting a more secure environment than what existed internally.

The SEC’s complaint alleges that from at least October 2018 through January 2021, SolarWinds and Brown engaged in a series of misstatements and omissions, painting a false picture of the company’s cybersecurity controls, and exposing investors to undisclosed risks. The SEC’s action against Brown marks a significant shift, holding individuals personally liable for cybersecurity-related disclosure deficiencies. Unlike other cases based on claims of negligence and bad security hygiene, the fundamentals of this case revolve around risk management – in particular the ability to properly identify risks, escalate those risks, and meet mandated disclosure obligations. This case underscores the critical need for CISOs to move beyond ad-hoc risk practices and implement clearly defined cyber risk management programs to navigate these heightened regulatory expectations effectively.

Current cyber risk management practices often lack a systematic approach and instead rely on ad-hoc risk tools and processes. These are supported by governance structures that function merely as informed bodies, failing to fulfill their intended purpose of providing effective oversight for a cyber risk management program. This absence of a standalone and clearly defined cyber risk program exposes executives, board members, and now CISOs to emerging obligations.



ESRM and ERM…Clarifying the Differences

I used to write “ESRM vs ERM”, but as this ESRM conversation continues to mature, I see I was wrong.  It’s faulty logic to think that there is a binary choice.  It is also not accurate to say ESRM and ERM are the same thing and that you only need one or the other.  They are not the same concept, and you don’t need to choose between them.  In every way they’re complementary to each other.  They share similar concepts.  They are built from the same risk principles that any risk program is inherently built on.  There is still confusion regarding the two, so let’s clear up the faulty logic that they are the same.

First, let’s start with the similarities.  ESRM, like ERM, is a risk-based practice.  There are many risk practices: safety, insurance, financial risk management, and business continuity, to name a few.  They all use risk principles in their discipline, which are then applied to topics, or a scope of responsibility, to be implemented.  The risk principles used are fairly static.  Risk principles come in various forms and use different terms, but the risk paradigm is static.  So, in a way, they’re very similar; definitely of the same family, yet more like cousins.  Either way, you know what you can’t do…you can’t marry the two.

What makes them different is their focus and implementation.  I’ll start with ERM.  ERM, using risk principles, applies the practice to any risk across the enterprise: capitalization, human capital, regulatory, all security risks, etc.  The ERM practitioner will use the risk principles to guide their practice.  The risk principles don’t describe the scope to be focused on; they are simply the guiding principles the ERM practitioner will work within.  The focus on enterprise risks defines the ERM designation.

The difference: ESRM defines the scope of focus on security risks and uses risk principles to define and guide the security practitioner in managing that scope of risks.  Let’s break this down into two parts.  First, ESRM is narrowly scoped and focused on security risks.  It doesn’t matter if you’re focused on physical security, cyber, information, terror, or workplace violence.  It also doesn’t matter what discipline you’re practicing or at what level: a CSO, or a line employee installing cameras or assigning online credentials.  ESRM defines scope in broad terms, and the scope can be narrowed as needed so long as it remains within the realm of security risks.

Secondly, ESRM defines the security practice through globally accepted risk principles, not unlike ERM.  Similar to how the risk principles would define the role of the ERM practitioner, the risk principles define the role of the security practitioner.  There are many different tasks an ERM practitioner engages in that a security practitioner wouldn’t, and vice versa.  For example, the actual implementation of any security program or the nuances of a program: conducting an investigation, implementing an identity management system, or assessing a workplace violence threat.  It would be silly to compare the two by task, so we’re not going there.  Clearly the role of an ERM practitioner and that of a security practitioner are different.  You can see, though, that a large part of ESRM is about how any security practitioner would be guided through their practice using risk principles.

As a recap: ERM defines the scope and how to practice.  ESRM defines the scope and how to practice.  The scope and practice in each are different in many ways.  The differences make them…well, different.

They are absolutely complementary to each other.  Enterprises need both.

In my experience as a CSO, I had a point of clarity around the similarities and differences of ESRM and ERM that I found very satisfying.  I was working with our ERM lead, and we were working through all the main risks and risk owners: legal, finance, technology, security, operations, etc.  He kept trying to find a placeholder for us to own the security risks, because of course we managed security risks, amongst other risks, all over the enterprise.  At the point of this discussion, our security group was thoroughly practicing ESRM, and it was clear we didn’t own any of the risks.  Yet it seemed to the ERM group that we needed to own something, like security risks, because we were managing the entire security program: all the physical security, cybersecurity governance risks, business continuity, investigations, fraud management, etc.  Our role wasn’t to own the risks.  We were, however, managing the security risks, using ESRM principles that defined our scope and discipline – the asset owners and stakeholders owned the risks.

At the end of the day, the ERM group and our security group had distinctly different roles and along the way they learned what our role actually was.  It was to manage the asset owners and stakeholders through their security risks, being the subject matter experts along the way.  We practiced our disciplines in many of the same ways, but we were very different in our scope and actual day to day practice.  I’d say we were cousins.

Applying risk management to security is certainly not new or novel; in fact, it’s kind of old hat to think it’s new.  We’ve just been stuck in a rut.  Some of that rut is not finding the distinctions between ESRM and ERM.  I make no excuses – I used to write ESRM vs ERM.  I was wrong.  Happily, this conversation has matured and we’re making needed progress developing our security discipline.  As discussed in prior blogs, it’s going to be necessary that we continually challenge our thoughts on how we’ve perceived ourselves and our role in the past.  One final thought worth repeating…ESRM and ERM are similar in name, and of the same family, but there are significant differences.  So, let’s keep some separation between the two.

I look forward to your comments and continuing the conversation!

Introducing Enterprise Security Risk Management (ESRM)

Written by Brian J. Allen
In the course of a security career that now stretches back decades, I’ve spoken with hundreds and hundreds of security practitioners. They were people in very different roles, with very different backgrounds, and at very different stages in their careers — everyone from chief security officers (CSOs) at Fortune 500 companies, to cybersecurity experts, to retired police officers managing physical security at manufacturing plants and warehouses. I’ve heard them talk about their experiences, their best practices, their satisfactions and their frustrations. I’ve learned something valuable from my conversations with every single one of those people, and I’ve distilled those lessons into a new, comprehensive approach to the theory and practice of security, called Enterprise Security Risk Management (ESRM).
I believe ESRM has the potential to completely transform the practice of security. ESRM principles can change the way we do our jobs, the way we see our roles and the way others see them, and the ways we protect our enterprises, their assets, and their employees. And ESRM can help us in our careers, by increasing our personal and professional satisfaction and by ensuring that security is seen — as it deserves to be — as a professional discipline.
I believe so deeply in ESRM that along with my longtime colleague Rachelle Loyear, I’ve written a book about it: Enterprise Security Risk Management: Concepts and Application, to be published by Rothstein Publishing in October. It’s why I speak about ESRM at industry conferences, offer presentations about it to boards of directors and senior executives, and write about it in industry publications. And it’s why I’ve created this blog, to act as a resource for security practitioners who want to advance the practice of security, to advance the way security is perceived and — of course — to advance their careers.
So what is ESRM, exactly?
ESRM is the practice of managing a security program through the use of risk principles. It’s a philosophy of management that can be applied to any area of security and any task that is performed by security, such as physical security, cybersecurity, information security, business continuity management and investigations.
Now, there’s nothing exactly new about any of the specific components that make up that definition. ESRM is based on long-established, internationally recognized risk management concepts and principles. But in the real world, those concepts and principles are almost never applied across the entire enterprise, comprehensively and holistically, to every aspect of the enterprise that’s impacted by security — which, as we all know, means every aspect of the enterprise. That’s what ESRM is designed to do.
ESRM changes the security function completely – transforming it from a set of tasks to a role.
When ESRM principles are applied, the security function changes completely — from a set of tasks, performed discretely, to a role. It’s no longer about checking IDs at entrance gates, or installing antivirus software, or trying to keep employees from stealing from retail stores. That doesn’t mean those functions aren’t important anymore. But it does mean that when they’re performed, they’re performed for a reason. ESRM means security decisions are made by the right person, with the right authority and accountability, and for the right reasons — reasons based on defined risk principles.
What does this mean in practice? In its simplest terms, it means that instead of just “doing security” the way we always have, we first ask ourselves some fundamental, and fundamentally important, questions. Here are a few of the most basic:
“What’s the asset we need to protect?”
“What’s the risk associated with that asset?”
“Who’s responsible for that risk?”
“How should we mitigate the risk, and how should we respond if the risk becomes a reality?”
Once we start asking ourselves, and others, those questions, the discrete security tasks we’ve been performing begin to make sense as part of a comprehensive security and risk management framework. We’re no longer just making sure the gates of the assembly plant are secure. We’re working toward an understanding of why they need to be kept secure, what’s inside the plant that needs to be protected, who will be impacted if our security measures fail, and what additional or different measures we might need to take. In other words, we know why we’re doing what we do, and that means we can do it better — a lot better.
Whatever your current role, whatever kind of enterprise you work for, wherever you want your career to take you, there are certain things I’m sure you want. You want to be able to do your job to the best of your abilities. You want to be seen as a problem-solver, not somebody who keeps other people from doing their jobs. You want to be seen as a partner by your peers in the business. And, of course, you want to be taken seriously as a professional, and you want security to be taken seriously as a profession.
ESRM is the key to achieving all these goals. In upcoming blog posts, I’ll be talking in far more detail about exactly who can benefit from ESRM principles, and how. But for now, I’ll leave you with a very simple, very important message: It’s not just the security practitioner. Yes, ESRM offers a path to personal and professional satisfaction to security professionals of all kinds. But it can also help your business partners in the enterprise. Just a few examples: the plant manager working to keep the supply chain up and running, the HR personnel trying to make sure the work environment is safe, and the corporate communications professional worrying about the enterprise’s reputation in the community.
Who can benefit from ESRM? Everyone.
The reality is, ESRM can benefit everyone, in every role, in every industry. And that’s why I’ve started this blog, to serve as an ESRM resource, and to maintain an ongoing dialogue about ESRM principles and practices. I hope to hear from you, and learn from you, soon.