
There’s been dramatic growth in the role lawyers play in cybersecurity. When we started practicing in the area of artificial intelligence, we heard many of the same questions that we faced about cybersecurity years ago: What do the lawyers do, and why wouldn’t companies just hire technical specialists?

The recent explosion of cybersecurity incidents, enforcement actions, and civil suits provides easy answers as to why companies need cybersecurity lawyers. In addition to representing companies in data breaches, lawyers play a critical role in the non-technical aspects of cybersecurity: helping companies develop and implement their policies, procedures, and governance to meet regulatory requirements—both existing and anticipated.

Over the next few years, there likely will be a similar explosion of AI incidents, regulations, investigations, and civil suits. In anticipation, we have identified seven ways the role of lawyers in AI will come to resemble cybersecurity legal work.

A Risk-Based Approach for Compliance

As in the early days of cybersecurity, AI regulatory requirements are emerging in the form of vague principles.

Just as companies have struggled to determine what is and is not “reasonable cybersecurity,” they will likely have a difficult time determining whether their AI programs meet nebulous obligations such as bias prevention, model testing and validation, governance, oversight, and explainability.

Companies will need lawyers with experience in technology and emerging regulatory frameworks to guide them on where their AI programs face the most significant legal risks and how to mitigate them.

Policies, Procedures, and Training

Uncertainty about how AI law will be applied, however, will likely not prevent regulators from bringing enforcement actions when AI programs behave unexpectedly and cause damage.

Companies will be well served to develop the kinds of policies, procedures, and training for AI that regulators expect for other high-risk areas of their business (e.g., anti-money laundering, anti-corruption, and cybersecurity).

Governance

Many large companies have difficulty identifying all of the AI programs they currently have in operation or development because those programs are housed in separate business units, and many companies lack an overarching governance framework.

AI regulations will likely require that companies create a governance structure for AI that includes management and board oversight, similar to what they have done for cybersecurity and other risks viewed as substantial.

Tabletop Exercises

Regulators understand that even companies that are in regulatory compliance can still fall victim to a cyber attack. They therefore often focus on a company’s response to a cyber attack—looking for hallmarks of resilience, such as how quickly the company was able to detect the incident, assess its scope, and remediate the vulnerability. To improve resilience, companies have found it helpful to rehearse difficult incident response decisions, such as when to contact law enforcement.

Mock drills expose weaknesses in a company’s incident response planning that can be remedied before a real event occurs. The same is likely to be true for AI. Following an AI failure, a company will face several difficult decisions, including how to assess the scope of the problem, who needs to be notified, and how best to minimize the likely reputational, regulatory, and civil losses—all of which get easier with practice.

M&A and Vendor Diligence

Cybersecurity diligence is an increasingly important part of both M&A transactions and vendor risk management, and we anticipate that AI diligence will follow a similar path.

Many companies’ AI tools were developed, or are being operated, by vendors. If those tools are found to be faulty, focus will turn to the company’s diligence of the vendor, and regulators will likely expect more than a passive receipt of general representations and warranties for high-risk vendor operations.

Accurate Public Statements and Securities Filings

As companies learned with cybersecurity and privacy, the desire to market the strength of AI compliance practices to the public can create significant legal and reputational risks if overstated.

Lawyers will need to help companies ensure the accuracy of public statements and regulatory filings about their AI programs, their associated risks, and the steps being taken to mitigate those risks.

Managing Incident Communications

In responding to a cyber event, in addition to preserving privilege, counsel plays a key role in managing internal and external lines of communication. Inaccurate, inconsistent, or simply ill-considered communications can be sources of significant liability.

In an AI-related crisis, lawyers will help ensure that internal statements are informed and professional. Externally, counsel will likely review communications with insurers, auditors, customers, regulators, and the public.

Lawyers will also be involved in the interactions with third parties, such as vendors and former employees, as well as federal or state law enforcement authorities.

The Future

In recent years, the role of lawyers in cybersecurity has greatly expanded. In the near future, counsel will play a similar role in helping companies address the risks associated with AI. Lawyers will advise companies on AI regulatory obligations, their riskiest AI applications, how to reduce those risks, and what to do when risks materialize.

There will be tough calls with little guidance. Lawyers will need to opine on whether a particular AI project requires additional compliance measures. And when an AI model is found to perform unexpectedly, lawyers will be consulted on whether it needs to be shut down, how much investigation is warranted, and whether to inform regulators. When advising on these difficult decisions, the lessons learned from cybersecurity will be a good place to start.


The authors would like to thank Debevoise summer associate Eric Halliday for his contribution to this article.

Author

Luke Dembosky is a Debevoise litigation partner based in the firm’s Washington, D.C. office. He is Co-Chair of the firm’s Data Strategy & Security practice and a member of the White Collar & Regulatory Defense Group. His practice focuses on cybersecurity incident preparation and response, internal investigations, civil litigation and regulatory defense, as well as national security issues. He can be reached at ldembosky@debevoise.com.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Jim Pastore is a Debevoise litigation partner and a member of the firm’s Data Strategy & Security practice and Intellectual Property Litigation Group. He can be reached at jjpastore@debevoise.com.

Author

Anna R. Gressel is an associate and a member of the firm’s Data Strategy & Security Group and its FinTech and Technology practices. Her practice focuses on representing clients in regulatory investigations, supervisory examinations, and civil litigation related to artificial intelligence and other emerging technologies. Ms. Gressel has a deep knowledge of regulations, supervisory expectations, and industry best practices with respect to AI governance and compliance. She regularly advises boards and senior legal executives on governance, risk, and liability issues relating to AI, privacy, and data governance. She can be reached at argressel@debevoise.com.