As companies slowly ramp up the depth and breadth of their AI adoption, one of the most difficult challenges they face is managing third-party risk. Most companies contemplating AI adoption will look to third-party vendors to provide AI-enabled products or services for their businesses. Companies often struggle when deciding what diligence to perform for these vendors and how to mitigate – through contractual conditions or other means – the risks identified in the diligence process.

This Debevoise Data Blog post surveys key challenges associated with AI vendor risk management and provides tips for designing an effective AI vendor risk management program.

Challenges of AI Vendor Risk Management

An effective, risk-based third-party AI risk management program allows companies to identify, assess, and mitigate AI vendor risks more efficiently and to determine which vendors present risks significant enough to warrant enhanced scrutiny. Based on our experience helping clients in this area, the most common challenges firms encounter when implementing an AI vendor risk management program stem from the following questions:

  1. What Kinds of Vendors Are Covered? – Will the program apply only to vendors who provide AI models for direct use by the company? Or will it also cover providers of software products that incorporate AI-enabled features but do not allow users any control of the underlying model(s)? Will it cover vendors who may leverage AI on their own systems to provide goods and services to the company, even if the company has no interaction with those systems?
  2. What Counts as AI? – A related question is how the firm wants to define “AI” for purposes of its AI vendor risk management program. Will the program apply only to circumstances involving generative AI, or will it cover a broader range of machine-learning technologies? Are there any models that – even if they do not technically leverage AI – present similar enough risks to deserve comparable treatment (as may be the case with complex algorithms that do not involve machine learning but do make important decisions or otherwise present significant reputational or regulatory risk)?
  3. How Will the Program Interact with Cybersecurity and Data Privacy Diligence? – Many of the risks associated with AI vendors overlap with risks addressed through cybersecurity and data privacy diligence (e.g., maintaining confidentiality, access to data, sharing sensitive data with third parties, and deletion of data when it is no longer needed). Will AI and cyber diligence remain separate, with the goal of eliminating overlap, or can they be harmonized as part of a comprehensive technology diligence process?
  4. How Much Can the Program Be Standardized? – Many of the risks associated with AI vendors (for example, risks associated with intellectual property, confidentiality, cybersecurity, and quality control) may arise for a wide range of AI vendors. But other risks (such as the risk of unfair discrimination) may arise only for certain tools in the context of certain use cases. This may make it difficult to have off-the-shelf, universal templates for managing AI vendor risk – such as a single set of diligence questions or model contract provisions. As a result, it can be a challenge to design a program that reliably addresses all significant AI vendor risks but that is also scalable, reasonable in its resource demands, and fast enough that onboarding of low-risk AI tools or services is not significantly delayed (delays that can result in circumvention of controls or so-called “shadow IT” risks).
  5. How to Manage the Risks of New Features for Existing Services? – Many vendor AI services are part of existing software packages with existing contractual agreements. It can be a challenge to determine when the release of new AI features should be treated as a new procurement or engagement that requires renewing or revisiting the diligence and risk management analysis. This is made particularly complicated by the fact that vendor updates are not timed to coincide with contract cycles. As a result, even if a company does choose to revisit the risk analysis for a particular vendor, it can be difficult or even impossible to act on the results of the analysis in the middle of an ongoing engagement with a pre-defined term of service.
  6. How to Distinguish between Tool Risk and Use Case Risk? – Some AI tools are purpose-built for specific use cases that involve specific risks that should be addressed in the vendor onboarding process (e.g., regulatory compliance for a resume-screening tool). But other tools are more general purpose, and the associated risks depend on the particular use cases, such that they may only be identified and addressed after the tool has been onboarded (e.g., addressing regulatory requirements associated with using ChatGPT Enterprise to extract data for generating financial research reports). Likewise, some risks (such as risks related to accuracy and reliability) require longer study and use to fully understand and mitigate, making them hard to address prior to engagement. It is therefore important (and tricky) to sort out which risks are better addressed through the vendor risk management process and which are better left to be mitigated through ongoing internal AI governance.

Tips for Managing Identified AI Vendor Risks

Depending on the answers to the questions above, companies should consider whether their existing third-party risk management structures are sufficient (in terms of resources, expertise, scope, and mandate) to assess the unique risks presented by AI vendors. They should also consider the range of procedural, contractual, technical, and other mitigants that may be deployed to address the risks identified during their vendor risk management process.

Some tips and tools to consider when trying to identify effective mitigants include:

  1. Internal Diligence – As part of performing diligence on the AI vendor itself, consider conducting internal diligence as to why the vendor’s services – and, specifically, its AI-enabled products or services – are necessary. What are the intended use cases? What data will be used? Will there be a pilot program? What are the best-case and worst-case scenarios? Most importantly, how will the company measure the success of the engagement?
  2. List of Risks, Diligence, and Terms – Consider creating a checklist of potential risks that the company will contemplate when engaging an AI vendor. For each risk that can be addressed through contract, consider whether it is possible to have a playbook with model diligence questions, ideal contract terms, and acceptable fallback terms. Consider also organizing these risks into standard risks (i.e., those that will be addressed for all AI vendor engagements) and nonstandard risks (i.e., those that will only need to be addressed in specific contexts), and identifying which risks are covered by other diligence efforts (cyber, privacy, etc.) as opposed to those that are addressed only through AI-specific diligence. Finally, consider whether there are any risks (e.g., regulatory compliance with hiring, lending, or biometric laws) that will require review and sign-off from specific subject-matter experts, such as the legal team, compliance staff, or HR.
  3. Identifying Non-Contractual Mitigants – In certain circumstances, companies may decide to move forward with an AI vendor, even if there are identified risks that have not been (or cannot be) fully mitigated via contract or through diligence. For these residual risks, companies should consider whether there are non-contractual measures (including technical or operational measures) that can be implemented at the use-case stage as further mitigants. For example, to minimize risk associated with allowing an AI vendor to process sensitive data using an AI system, companies may want to consider functional means of preventing such data from being exposed to the vendor in the first instance. Or, to address business continuity risks associated with key vendor-supported AI systems, companies may want to consider creating business continuity plans featuring backups or workarounds that will allow them to meet obligations in the event of a vendor disruption.
  4. Periodic Review – Where third-party tools are being used for low-risk use cases with limited oversight, consider periodic spot checking to confirm that the use cases remain low risk.

***


The Debevoise AI Regulatory Tracker (DART) is now available for clients to help them quickly assess and comply with their current and anticipated AI-related legal obligations, including municipal, state, federal, and international requirements.

The cover art used in this blog post was generated by DALL-E.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance and risk management, and on data privacy. He can be reached at makelly@debevoise.com.

Author

Johanna Skrzypczyk (pronounced “Scrip-zik”) is a counsel in the Data Strategy and Security practice of Debevoise & Plimpton LLP. Her practice focuses on advising clients on AI matters and privacy-oriented work, particularly related to the California Consumer Privacy Act. She can be reached at jnskrzypczyk@debevoise.com.

Author

Tigist Kassahun is a corporate counsel in the Intellectual Property and Technology Transactions Group, as well as a frequent collaborator with the firm’s Data Strategy & Security practice. She can be reached at tkassahu@debevoise.com.

Author

Jarrett Lewis is an associate and a member of the Data Strategy and Security Group. He can be reached at jxlewis@debevoise.com.

Author

Josh Goland is an associate in the Litigation Department.