On 26 October 2023, the Bank of England, Prudential Regulation Authority (“PRA”) and Financial Conduct Authority (“FCA”, collectively the “UK Financial Authorities”) published FS2/23 on Artificial Intelligence and Machine Learning (the “Response Paper”). It summarises participants’ responses to the October 2022 AI discussion paper (DP5/22, the “Discussion Paper”), which outlined the UK Financial Authorities’ proposed approach to AI regulation.

The UK’s Approach to AI Regulation

As set out in the government’s white paper on AI, the UK, unlike the EU, does not intend to implement AI-specific laws or regulations. Rather, the government plans to issue non-statutory guiding principles that existing UK regulators can adapt to, and implement within, their respective sectors. The UK Financial Authorities are, therefore, amongst the forerunners in establishing what their AI regulatory approach may look like.

The Response Paper does not represent the UK Financial Authorities’ views, nor does it include any specific policy proposals; it is a summary of industry feedback on their proposals. However, it does give an indication of how the UK Financial Authorities may approach AI regulation in the future.

Seven Takeaways from the Response Paper

  1. The definition of AI is a key gating question. The definition of “artificial intelligence” has been a contentious point in multiple AI legislative processes – including the draft EU AI Act – and it appears that the UK Financial Authorities could face similar challenges. The Discussion Paper gave a potential AI definition of “the theory and development of computer systems able to perform tasks which previously required human intelligence”. Firms were asked to comment on how the UK Financial Authorities should approach an AI definition, including whether they should pursue a financial services sector-specific definition. In the Response Paper, most respondents were not in favour of a sector-specific definition, and some even suggested that the UK Financial Authorities could forgo AI-specific regulation altogether. Respondents gave a range of reasons, including concerns that AI-specific regulation could quickly become outdated due to the pace of technological development, could easily be both too broad and too narrow in terms of the technology it intends to capture, and could create incentives for regulatory arbitrage. Instead, most respondents advocated for technology-neutral frameworks that adopt outcomes- and principles-based approaches, consistent with the UK Financial Authorities’ technology-neutral approach to other areas of regulation. It remains to be seen how this would operate in practice but, if it is adopted, certain AI tools would (presumably) still be subject to regulation.
  2. Any regulation should be risk-based – but with potential divergences from the EU AI Act’s criteria for assessing risk. In the Discussion Paper, the UK Financial Authorities identified several non-exhaustive AI-related risks that could affect their areas of responsibility, including consumer protection, competition, financial safety and soundness, insurance policyholder protection, financial stability and market integrity. They solicited comments on which risks should be prioritised and how they should be evaluated. In the Response Paper, respondents generally agreed that AI regulation should be risk-focused, with a particular focus on consumer and financial market risks. However, some respondents suggested that, unlike the EU AI Act, which focuses on impacts to individuals, the UK Financial Authorities may want to include different or additional risks, such as financial stability. This could affect financial firms that are considering using the EU AI Act as their ‘high watermark’ for AI regulatory and governance compliance, as they will have to accommodate any UK-specific requirements in their compliance programmes.
  3. Cross-functional oversight of AI tools and use-cases – by a team with sufficient expertise to identify and mitigate risks – is an important aspect of effective AI governance. The Discussion Paper highlighted the importance of good governance in effectively identifying and managing risks stemming from AI tools and use cases. The Response Paper shows a divergence in views on how this should be achieved in practice. Some respondents thought that existing governance structures are sufficient to cover AI, while others advocated for the adoption of specific AI oversight committees (either at a central or local business area level). Most respondents did not favour creating an AI-specific prescribed responsibility for a Senior Management Function, but acknowledged that some form of board or senior management-level oversight of AI is necessary. Nonetheless, respondents generally agreed that the team responsible for AI oversight needs sufficient expertise to spot and address new forms of systemic AI risk – for example, if an AI tool ceases to function, the ability to quickly assess whether that is due to an (active) cybersecurity incident.
  4. AI regulation should include oversight of third-party providers. The Discussion Paper noted that a key challenge for firms is their ability to monitor the AI-related operations and associated risks of their third parties. This challenge is particularly acute given that many financial firms are either relying entirely on, or developing their own products on the back of, existing AI tools from external providers. The Response Paper explores, at a high level, different options for managing these risks. Some respondents suggested that third-party providers should be required to provide certain information to firms regarding their AI tools – including evidence of responsible development and risk information – so firms can better understand and mitigate the associated risks. It is unclear how this would be achieved in practice, especially given that many AI tool developers likely fall outside the UK Financial Authorities’ regulatory purview. One possibility is for the UK Financial Authorities to introduce standardised AI due diligence requirements that firms must satisfy before they can adopt third-party tools.
  5. There is strong appetite for any future regulations to align with existing domestic and international laws and regulations. Respondents strongly advocated that any future AI regulation be consistent, and not unnecessarily overlap, with existing domestic laws (including the Equality Act 2010) and financial services regulations (including operational resilience and third-party risk management requirements). There was also considerable support in the Response Paper for consistency with international AI laws and frameworks (such as the EU AI Act and the NIST AI Risk Management Framework), in particular as any divergences could undermine the UK’s competitiveness. This point will likely be of interest to the FCA and PRA, given their new secondary statutory objective to support the UK’s growth and international competitiveness.
  6. Some view the (UK) GDPR as creating particular challenges for AI adoption. The Discussion Paper requested feedback on whether there are “any regulatory barriers to the safe and responsible adoption of AI in UK financial services”. In the Response Paper, a number of respondents flagged the UK GDPR as, in their opinion, creating particular challenges. The complexities of achieving GDPR compliance when adopting AI tools are not a new topic – see our blog post. For example, the French data protection authority, the CNIL, recently published a series of fact sheets aimed at helping companies achieve GDPR compliance when developing and adopting AI tools. The UK Financial Authorities could look to such existing resources when developing future guidance.
  7. Regulations should not just focus on the impact of financial firms using AI, but also on how AI use can impact financial firms. In the Response Paper, several respondents also flagged that the UK Financial Authorities could consider how the use of AI tools by malicious actors could impact financial firms. This includes AI’s potential use as a tool for fraud and money laundering (e.g., through deepfakes) and for cyber-attacks (e.g., using generative AI to craft more convincing phishing emails). While these are not new risks for financial firms – banks have long been required to employ robust anti-fraud measures, for example – widespread access to AI tools could result in a proliferation of such attacks, which financial firms need to be equipped to deal with.

What Firms Can Do Now

While there is still uncertainty over the content of future AI regulation from the UK Financial Authorities, it is clear that further regulation in this area is coming. Given the potential difficulties of retrofitting AI governance requirements into organisations and their existing AI tools, there are several hallmarks of good AI governance that firms can start implementing now.

For example, firms should consider how they can create ongoing, cross-functional oversight of AI. Typically, companies start by creating a committee (which could be AI-specific) to oversee and guide the company’s use of AI tools and use cases, including those accessed through vendors.

Firms may then consider developing policies and procedures relating to that committee’s oversight of the firm’s AI framework. These may include: creating an inventory of the AI tools the firm has access to and the use cases it has in production; developing a risk-rating framework for those AI tools and use cases; and determining how the firm will identify and mitigate high-risk AI use cases.
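By way of illustration only, the short sketch below shows one hypothetical way a firm might record entries in such an inventory and apply a simple risk-rating rule. The field names, risk factors and tiers are assumptions made for the purposes of the example and are not drawn from the Discussion Paper or the Response Paper; any real framework would need to reflect the firm’s own risk taxonomy and the regulators’ eventual expectations.

```python
# Illustrative sketch only: hypothetical field names, risk factors and tiers.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class AIUseCase:
    """One entry in a firm's inventory of AI tools and use cases."""
    name: str                                  # e.g. "Customer service chatbot"
    business_area: str                         # owning business unit
    vendor: Optional[str] = None               # third-party provider, if any
    in_production: bool = False
    processes_personal_data: bool = False
    customer_facing: bool = False
    affects_financial_decisions: bool = False  # e.g. credit or underwriting


def risk_rating(use_case: AIUseCase) -> str:
    """Assign a simple high/medium/low rating for committee triage."""
    risk_factors = [
        use_case.customer_facing,
        use_case.affects_financial_decisions,
        use_case.processes_personal_data,
    ]
    if sum(risk_factors) >= 2:
        return "high"
    if any(risk_factors) or use_case.vendor is not None:
        return "medium"
    return "low"


# Example inventory the oversight committee could review periodically.
inventory: List[AIUseCase] = [
    AIUseCase(
        name="Customer service chatbot",
        business_area="Retail banking",
        vendor="ExampleAI Ltd",                # hypothetical vendor
        in_production=True,
        processes_personal_data=True,
        customer_facing=True,
    ),
]

for use_case in inventory:
    print(f"{use_case.name}: {risk_rating(use_case)}")
```

Whatever form it takes, the point of such a record is to give the oversight committee a consistent basis for prioritising which tools and use cases warrant closer review.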

In addition, many components of the coming AI governance obligations could require firms to significantly increase their compliance budgets and secure additional resources, which some firms may want to address now as 2024 budgets are being considered.

Finally, firms should also consider how AI may impact their operational resilience, and update their business continuity plans to account for any novel or increased disruptions that could be caused by a firm’s increased reliance on AI for core business functions. For firms also regulated in the EU, this may be particularly important given the requirements of the EU Digital Operational Resilience Act, which applies from January 2025.

*****

To subscribe to the Data Blog, please click here.

The cover art used in this blog post was generated by DALL-E.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Robert Maddox is International Counsel and a member of Debevoise & Plimpton LLP’s Data Strategy & Security practice and White Collar & Regulatory Defense Group in London. His work focuses on cybersecurity incident preparation and response, data protection and strategy, internal investigations, compliance reviews, and regulatory defense. In 2021, Robert was named to Global Data Review’s “40 Under 40”. He is described as “a rising star” in cyber law by The Legal 500 US (2022). He can be reached at rmaddox@debevoise.com.

Author

Benjamin Lyon is international counsel in the Corporate Department of the firm’s London office and a member of the firm’s Financial Institutions Group. He focuses on corporate transactions in the insurance and asset management industries, including mergers & acquisitions, regulatory matters and corporate governance, as well as all other aspects of corporate law.

Author

Clare Swirski is an international consultant in the firm’s London office. Her practice focuses on advising insurers and other financial institutions on a range of transactional and regulatory matters, including share and business acquisitions, joint ventures, reinsurance, longevity transactions, distribution agreements and group reorganisations.

Author

Martha Hirst is an associate in Debevoise's Litigation Department based in the London office. She is a member of the firm’s White Collar & Regulatory Defense Group, and the Data Strategy & Security practice. She can be reached at mhirst@debevoise.com.

Author

Ryan Fincham is a trainee associate in the Debevoise London office.