Almost a year after it initially published its White Paper on AI Regulation (the “AI White Paper”) and launched its associated consultation, the UK government has published its consultation response (the “AI Response Paper”).

The AI Response Paper confirms the current government’s intention to take a “pro-innovation” approach to AI regulation, leaving individual regulators to supervise the use of AI within their respective areas using their existing regulatory toolkits, in accordance with five key AI principles. The government and regulators will only supplement existing frameworks with new AI-specific requirements if there is a clear lacuna that can be effectively filled by a new measure. The UK’s approach is therefore an important reminder that while there is often significant public attention on new AI laws, such as the EU AI Act, the use of AI is already subject to a wide variety of existing legal requirements under other technology-neutral laws, including data protection and equalities legislation.

Businesses with UK operations should be mindful of, and monitor for further information on, the approaches of relevant UK regulators as they look to design robust cross-border, multi-jurisdictional AI governance programmes.

The UK’s AI Strategy So Far

AI development and innovation are a focal point for the UK, and have been a prominent feature in the government’s post-Brexit political rhetoric. Following the publication of the UK’s National AI Strategy in September 2021, in March 2023 the UK government published its AI White Paper setting out the UK’s proposed “pragmatic, proportionate regulatory approach” to AI. It has since been soliciting comments from a range of stakeholders.

Amid concerns over the UK government’s seemingly slow start, in November 2023 a Member of Parliament introduced a Private Member’s Bill – the Artificial Intelligence (Regulation) Bill – in an attempt to jump-start the regulatory discussions. The five-page legislative proposal seeks to create a single UK AI regulatory body and legally require UK businesses to comply with various key principles when using AI. Private Member’s Bills are very rarely passed, however, and it is unlikely that this bill will command sufficient support to become law.

UK AI White Paper Response: A Regulator-Driven Approach

The AI Response Paper reconfirms the current UK government’s regulator- and principles-driven approach to AI regulation, while noting that the UK’s strategy remains a work in progress that will evolve as the UK “take[s its] time to get this right”. Notably, the next UK General Election will take place before the end of January 2025 and, depending on the outcome, a new government could adopt an entirely different AI strategy.

For the time being though, five key takeaways are:

  1. Individual UK regulators will oversee AI within their respective domains. The current UK government believes that it is more effective to focus on how AI is used within a specific context, rather than to regulate the underlying technologies, as the level of AI-associated risk will be determined by where and how it is used. Consequently, there are (currently) no plans for a UK equivalent of the EU AI Act. Instead, the UK will place a duty on existing regulators to oversee AI within their domains in accordance with five guiding principles (see below). UK regulators will be encouraged to rely initially on their existing powers, and only look to introduce new requirements if there is a clear lacuna in their current regulatory toolkit and if the regulator is confident it has identified an effective and proportionate measure to plug the gap.

To provide increased visibility over how regulators will achieve this in practice, the government has asked 13 regulators – including the Information Commissioner’s Office, Bank of England, Financial Conduct Authority, Competition & Markets Authority, and Medicines & Healthcare Products Regulatory Agency – to outline their strategic approaches to AI regulation by 30 April 2024.

  2. A central government AI function will support all UK regulators. The new AI function will coordinate, monitor and adapt AI governance programmes across all regulators to ensure cohesion and consistency. Further, the UK will establish a steering committee by spring 2024 that will support knowledge exchange and AI governance coordination between regulators. The government also announced a £10 million technical-upskilling package for regulators to develop their tools and capabilities so they can understand, adapt and respond to AI risk. Together, these measures are intended to ensure that UK regulators have the necessary knowledge and support to effectively regulate AI in context.
  3. Five key AI principles will underpin UK regulators’ approaches. UK regulators will adhere to five key cross-sectoral principles: (1) safety, security and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress. These broadly align with existing frameworks, such as the NIST AI Risk Management Framework, the OECD’s AI Principles, and the U.S. AI Executive Order. These principles will initially be applied on a voluntary basis, with a view to regulators ultimately having a statutory duty “to have due regard to the principles”.

To facilitate the consistent application of the key principles across regulators, the UK government has also released initial guidance on how the principles could be interpreted and applied. Currently, the guidance only contains very high-level descriptions and considerations for each of the principles, with links to any associated ISO technical standards and existing guidance from other UK regulators. However, a more detailed version is expected to be published this summer following a consultation period.

  4. Specific General Purpose AI regulation appears to be incoming. The UK will likely introduce specific laws for highly capable general purpose AI (“GPAI”) – that is, models that can perform a wide variety of tasks, are developed with a certain magnitude of computing power, and have high capabilities in certain high-risk areas. The UK is not the first country to grapple with GPAI regulation; the EU AI Act and the U.S. AI Executive Order both contain additional requirements for these systems.

The UK is considering how best to address these risks proportionately without unduly hindering AI innovation, including assessing which level of the supply chain regulatory measures should most appropriately attach to. The UK is working with a small number of large GPAI providers on a voluntary basis to better understand the risks GPAI presents and to fine-tune its regulatory approach. The UK will look to codify its approach via legally binding measures that align with those of other jurisdictions. This may include transparency obligations, specific risk-management and corporate governance obligations, and requirements to address certain potential harms such as unfair bias and AI system misuse.

  5. International alignment is top of mind. While seeking to be a key player in the international development of AI regulation, the UK government notes that it will collaborate closely with various international partners, including the U.S. and Singapore, to help ensure it regulates AI effectively and consistently while enabling UK developers to remain competitive. Given that various countries have already taken differing approaches to AI regulation, it remains to be seen how the UK will achieve international alignment in practice.

How to Prepare

Businesses with operations in the UK should be mindful of any differences in the UK’s proposed regulatory approach as they look to design robust cross-border, multi-jurisdictional AI governance programmes. However, it is unlikely that any immediate next steps are required at this stage. Businesses should ensure that their AI governance programmes address the five key principles that will underpin UK regulators’ approaches. That said, given that these principles overlap with those in several other key AI frameworks and reflect “good AI governance” considerations more broadly, it is likely that many international businesses’ programmes will already address them.

Nonetheless, businesses should continue to monitor for relevant UK regulators publishing their strategic approaches to AI, as well as any updated UK AI guidance, and modify their governance programmes accordingly in light of any additional details on how these principles will be applied in practice.

*****


The cover art used in this blog post was generated by DALL-E.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Matthew Kelly is a litigation counsel based in the firm’s New York office and a member of the Data Strategy & Security Group. His practice focuses on advising the firm’s growing number of clients on matters related to AI governance, compliance and risk management, and on data privacy. He can be reached at makelly@debevoise.com.

Author

Robert Maddox is International Counsel and a member of Debevoise & Plimpton LLP’s Data Strategy & Security practice and White Collar & Regulatory Defense Group in London. His work focuses on cybersecurity incident preparation and response, data protection and strategy, internal investigations, compliance reviews, and regulatory defense. In 2021, Robert was named to Global Data Review’s “40 Under 40”. He is described as “a rising star” in cyber law by The Legal 500 US (2022). He can be reached at rmaddox@debevoise.com.

Author

Martha Hirst is an associate in Debevoise's Litigation Department based in the London office. She is a member of the firm’s White Collar & Regulatory Defense Group, and the Data Strategy & Security practice. She can be reached at mhirst@debevoise.com.

Author

Scott Morrison is a litigation associate (solicitor and barrister) in the Litigation Department and International Dispute Resolution Group. Dr. Morrison represents individual and corporate clients across the firm’s litigation and international arbitration practices. Before joining Debevoise, Dr. Morrison was a professor of international politics and an academic lawyer.