As AI adoption continues to increase, businesses are looking to familiar risk management frameworks to guide AI governance.  One obvious model is cybersecurity, another area where rapid technological change has required businesses to adapt quickly to complex challenges.

Because of the similarities between cybersecurity and AI risk (e.g., both are relatively new to many businesses, both are technology-driven, and both require significant investment in software, policies, and human resources to address effectively), the last 10 years of cybersecurity risk management offer many valuable lessons that businesses can use when building out their AI governance and compliance programs.  There are, however, several aspects of AI risk that are not analogous to cybersecurity.  In this Debevoise Data Blog post, we discuss where lessons learned from cybersecurity governance are readily applicable to AI risk management, and where they are not.

Lessons from Cyber that Apply to AI Risk Management

Pursue Risk Mitigation, Not Risk Elimination. 

Both AI and cyber present risks to businesses that cannot be fully eliminated.  To operate a business in 2025 means being connected to the Internet at thousands—if not millions—of endpoints, which in turn means accepting some significant level of cyber risk.  The only way to reduce that risk to zero is to shut down the business entirely or run it using only pencils and paper, which is not a viable option.  Soon, the same will be true for AI: it will not be optional for most businesses, and some level of AI risk will therefore be unavoidable.  The challenge in both cases is to balance the risks and benefits to the business, ensuring that the benefits are worth the risks and that the business takes on only necessary or acceptable risks while avoiding unnecessary or unacceptable ones.

Don’t Get Over-Teched. 

For both AI and cyber, executives may be unfamiliar with the underlying technology and have little experience with the downside risks.  These executives may therefore over-rely on technical staff or outside technical experts to manage these risks.  But these experts alone often do not have sufficient context and visibility into corporate operations and priorities to effectively manage these risks.  Accordingly, for both cyber and AI, a broader enterprise approach to risk management is needed, with non-technical leaders directly involved in governance and compliance, working alongside technical staff and vendors.

Address Operational Risks First, Then Regulatory Risks. 

Cyber and AI share overlapping operational and regulatory risks.  In the early days of enterprise cybersecurity, businesses faced vague regulatory requirements and an uncertain enforcement landscape.  They found that the most effective approach was to prioritize operational cybersecurity risks (i.e., protecting the network and confidential data from unauthorized access) and, once that goal was largely achieved, to gap-assess their programs against any applicable regulatory compliance standards.  This approach provides a good lesson for AI governance.  Given the still-evolving AI regulatory landscape, businesses should prioritize managing operational risks by ensuring that AI tools and use cases deliver value, perform as intended, and do not cause unexpected harm.  Once a business has effectively mitigated its operational risks, it can then turn its attention to conducting a gap assessment against applicable regulatory requirements.

Effort Can Matter as Much as Results. 

Working to reduce operational risk matters, even when those efforts do not succeed in preventing harm.  Because of the complexity and novelty of both AI and cybersecurity, effort and outcome are not always correlated.  One business can have poor cybersecurity and never experience a cyberattack (or perhaps never know that it has), while another business could have a world-class cyber program and still fall victim to a successful hack.

Many regulators—who have themselves experienced significant cybersecurity incidents—understand this dynamic and generally judge businesses that have experienced breaches not by the outcomes, but by their efforts.  For example, a business that does not require strong passwords or multifactor authentication can run into regulatory trouble with the NYDFS even if it is never hacked, while a business that experiences a major cyber incident may face little regulatory scrutiny if it can show that it did everything reasonably possible to prepare for and respond to the attack.  The same will likely soon be true for AI: businesses that have strong, documented AI governance and compliance programs will face less regulatory scrutiny than those that do not, regardless of actual AI failures.

Where AI Is Different from Cyber

AI Presents More Upside. 

Cybersecurity sits squarely on the cost side of the business ledger.  Important as it is, the primary benefit of good cybersecurity is avoiding harm, not generating revenue.  By contrast, AI has substantial upside potential for businesses through enhanced efficiencies and new business opportunities, which makes balancing risk and reward more complicated than it is with cyber.  Good cybersecurity will look very similar among businesses that are roughly the same size in the same sector.  Good AI governance, however, may vary dramatically among peer firms depending on the level of AI adoption and its potential upside for each particular business.

AI Risks Are More Varied (And AI Vendor Risk Management Is Complicated).  

Cybersecurity risks and technologies are broadly similar from business to business, whereas AI tools and use cases can vary considerably between (and often within) businesses.  Addressing the risks of an AI-enabled customer service chatbot is a very different exercise from addressing those of an AI resume-screening tool, which in turn differs from managing the risks of using an AI image generator for marketing materials.  Cybersecurity is one of the primary risks associated with AI adoption, but AI also presents risks relating to matters such as privacy, IP, bias, transparency, conflicts, quality control, and loss of skills.  That breadth makes AI risk much harder to manage, because it requires engagement from people with a variety of skill sets and experiences working together.

The greater breadth of AI risk compared to cyber risk also has implications for third-party/vendor risk management.  Although it is a very difficult task, managing the cybersecurity risk of vendors who have access to a business’s systems or data is a process that can, to some extent, be standardized.  Because the risks associated with AI tools, use cases, data, and users vary so much more, assessing and mitigating the risks posed by vendors who supply a business with AI solutions (or who use AI on their own platforms with the business’s confidential data) requires a much more complex program for identifying high-risk AI vendors and effectively addressing the associated risks.

Accountability for AI Risk Management Is Often Harder to Assign.

Most businesses have a group that was hired and trained specifically to focus on managing cyber risk, with a single designated person in charge, usually the CISO, who is accountable to senior management.  By contrast, very few businesses have a single individual who is responsible for managing AI risk.  The cyber component of AI risk management naturally rests with the CISO, but other aspects may reside with the general counsel, head of risk, CCO, COO, CFO, and/or Head of HR.  That is why many businesses establish a cross-functional AI Governance Committee that is collectively accountable for managing AI risk.

“Bad” AI Can Go Undetected for Long Periods.  

Almost every major business is under constant cyberattack.  As a result, their cybersecurity practices are tested every day, and very poor controls will likely result in short-term harm (i.e., cybersecurity incidents), leading to the deployment of additional resources to enhance cybersecurity and limit successful attacks.

By contrast, poor use of AI can persist for a long time before it is discovered and addressed.  Take, for example, a poorly designed AI resume-screening tool that devalues candidates with gaps on their resumes in a way that unfairly screens out women who took time off for childcare and veterans who were injured and needed time to recover.  Such a deficiency can persist for months or years, accumulating risk and causing harm without detection.  The possibility of delayed awareness of AI risks underscores the importance of implementing robust AI risk management controls, including risk assessments, pilot programs, and ongoing monitoring of AI use cases in production.

The authors would like to thank Debevoise Law Clerk Achutha Raman for his contribution to this blog post.

***

To subscribe to the Data Blog, please click here.

The cover art used in this blog post was generated by DALL-E.

Author

Charu A. Chandrasekhar is a litigation partner based in the New York office and a member of the firm’s White Collar & Regulatory Defense and Data Strategy & Security Groups. Her practice focuses on securities enforcement, government investigations defense, and cybersecurity regulatory counseling and defense. Charu can be reached at cchandra@debevoise.com.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy, and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance, and risk management, and on data privacy. He can be reached at makelly@debevoise.com.

Author

Andreas Constantine Pavlou is an associate in the Litigation Department.