With last week’s political deal in the European Parliament to advance the European Union’s groundbreaking AI Act (the “EU AI Act”), Europe is one step closer to enacting the world’s first comprehensive AI regulatory framework. Yet while the EU is poised to become the first jurisdiction to take this step, other countries are not far behind. In recent months, the U.S., Canada, Brazil, and China have all introduced measures that illustrate their respective goals and approaches to regulating AI, with the AI regimes in Canada and Brazil appearing to be modeled substantially on the EU AI Act.
In this blog post, we provide an overview of these legislative developments, highlighting key similarities, differences, and trends across each country’s approach, and offering a few considerations for companies deploying significant AI systems.
NIST and Other U.S.-Based Regulatory Schemes
Federal Level
Nationally, the United States has a fragmented approach to AI regulation, in contrast to the more comprehensive EU AI Act. The regulation of AI in the United States is largely sectoral, carried out by various agencies with specific authority over banking, insurance, securities markets, criminal justice, employment, etc. Recently, the Consumer Financial Protection Bureau, the Justice Department’s Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission (“FTC”) issued a joint statement that “[e]xisting legal authorities apply to the use of automated systems” and that the respective agencies would “monitor the development and use of automated systems.” As an example of U.S. agencies leveraging existing regulatory authority, the FTC has recently reminded companies that misleading or deceptive marketing about AI products, and the use of generative AI for deceptive or manipulative purposes, can violate the FTC Act. Other U.S. regulators have begun to issue guidance on their expectations for AI governance, including with respect to risk management, model testing, and disclosures.
In an effort to provide more general guidance, the U.S. Department of Commerce’s National Institute of Standards and Technology (“NIST”) recently released its Artificial Intelligence Risk Management Framework 1.0 (“AI RMF”), a voluntary, flexible framework designed to guide entities of all sizes and sectors in their development and use of AI systems. NIST’s release comes in the wake of the White House’s Blueprint for an AI Bill of Rights, another nonbinding framework meant to guide the public and private sectors on the use of AI systems. The AI RMF outlines four core functions that NIST deems key to developing and maintaining responsible AI systems: Govern, Map, Measure, and Manage.
State and Local Level
On the state level, we’ve previously discussed how the Colorado Division of Insurance’s (“CO DOI”) risk-based draft Algorithm and Predictive Model Governance Regulation imposes significant operational obligations on regulated entities, such as requiring organizations to identify governance principles for AI, create oversight by senior management and the Board, and form a cross-functional AI governance committee. New York City’s Automated Employment Decision Tool Law (“NYC AEDT”) requires covered employers to perform annual independent bias audits and to post public summaries of those results. While these regulations provide greater clarity about the expectations for and requirements of covered businesses, they have received substantial criticism from stakeholders for being overly prescriptive, a criticism NIST’s AI RMF avoids by largely providing principles rather than concrete requirements.
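For readers curious about what a bias audit involves mechanically, below is a minimal, hypothetical Python sketch of an impact-ratio calculation of the kind commonly used in adverse-impact analyses of automated hiring tools. The methodology shown (comparing each category’s selection rate to the highest observed selection rate) is a general illustration, not a statement of the NYC AEDT rules, and all names and figures are invented.

```python
# Hypothetical illustration of an adverse-impact (impact ratio) calculation
# of the kind typically performed in bias audits of automated hiring tools.
# Category names, data, and methodology details are invented for demonstration.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps each demographic category to (selected, total_screened)."""
    return {cat: selected / total for cat, (selected, total) in outcomes.items()}

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Impact ratio = a category's selection rate divided by the highest
    selection rate observed across all categories."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

if __name__ == "__main__":
    # Invented example: (candidates advanced by the tool, candidates screened)
    data = {"group_a": (120, 400), "group_b": (80, 350), "group_c": (45, 300)}
    for category, ratio in impact_ratios(data).items():
        print(f"{category}: impact ratio = {ratio:.2f}")
```

A low impact ratio for a category relative to the most-selected category is the kind of result a public audit summary would surface for further review.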
International Regulatory Regimes
Europe
- On April 21, 2021, the European Commission published the draft EU AI Act, which quickly became the world’s leading example of a comprehensive AI regulatory framework. For over two years, the draft EU AI Act has undergone debate and modification as it made its way through the Council of the EU and the European Parliament. Currently, the draft is scheduled for a key committee vote on May 11, 2023, with a plenary vote likely to occur sometime in June 2023. After the plenary vote, the Act will be subject to further amendment through the trilogue process.
- While the latest draft EU AI Act is not yet publicly available, early reports indicate it will feature new compliance obligations specific to generative AI and foundation models, including several unique components such as required disclosures of copyrighted works included in the models’ training data. We will provide more details once the proposal is released.
- Separately from the EU AI Act, on September 28, 2022, the European Commission issued its proposed AI Liability Directive, which would lower evidentiary hurdles for individuals harmed by AI-related products and services and make it easier to bring civil liability claims. The timing of the AI Liability Directive is likely to be pushed back as legislators focus on finalizing the EU AI Act and related amendments to the EU Product Liability Directive.
Canada
- In June 2022, the Government of Canada put forward the Artificial Intelligence and Data Act (“AIDA”) as part of Bill C-27, the Digital Charter Implementation Act. The AIDA is still in development, with implementation not expected to occur before 2025, and enforcement sometime thereafter, but the approach lawmakers plan to adopt in the AIDA is already clear.
- As with the EU AI Act, the AIDA would impose a set of risk-based controls for users and developers of AI systems, with the majority of these obligations imposed on the highest-risk (or “high-impact”) AI. These include transparency requirements and risk-management systems that identify, assess, and mitigate the risks of harm from high-impact AI systems.
- In contrast to the EU AI Act, however, the AIDA still contains a number of gaps that lawmakers intend to fill through future regulation. These include the definition of “high-impact” AI and the list of AI use cases that would be banned outright.
Brazil
- In December 2022, the Brazilian Senate’s Commission of Jurists circulated the first draft of its Proposed Legal Framework for Artificial Intelligence (“Framework for AI” or the “Framework”) and submitted it to Congress for consideration.
- According to the Commission, Brazil’s Framework for AI would have a dual purpose. First, similar to the EU AI Act and the AIDA, it would set out risk-based regulatory controls imposing extensive requirements on providers of designated high-risk AI systems (a category that overlaps considerably with the EU AI Act’s designations), including quality and risk management procedures, testing and documentation requirements, and transparency obligations.
- Brazil’s Framework for AI would also create a set of individual rights for Brazilians impacted by AI systems. For example, under Article 5 of Brazil’s Framework, individuals would be granted the right to an explanation of any decision or recommendation generated by an AI system; the right to challenge those decisions or recommendations; and the right to non-discrimination and the correction of biases in such systems.
China
- China has issued a series of regulatory pronouncements over the last year. Most recently, on April 11, 2023, the Cyberspace Administration of China (“CAC”) released its draft “Administrative Measures for Generative Artificial Intelligence Services” (“China’s Draft Measures”), which aim to govern the research, development, and use of generative AI products that provide services within the People’s Republic of China. Of particular note, China’s Draft Measures would require security assessments of generative AI products before those products are made available to the public.
- China’s Draft Measures would impose requirements on providers of generative AI that are primarily aimed at individual protection and individual rights. This includes requirements that providers (i) adopt measures to ensure these systems do not generate discriminatory content, including specific requirements for their training data; (ii) obtain consent when using personal information in such systems; (iii) establish a mechanism for the receipt and processing of user complaints regarding the use of their information; and (iv) conduct a security assessment in accordance with the “Regulations on the Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilization Capabilities.”
- While China’s Draft Measures apply specifically to generative AI, their themes of data protection, non-discrimination, and required risk assessments are consistent with the broader AI frameworks currently being considered in other jurisdictions. China’s Draft Measures also highlight issues of particular concern to the Chinese government, including a requirement that content generated by AI reflect the “core values of socialism” and not subvert state power, overthrow the socialist system, incite a split in the country, or undermine national unity.
What Companies Can Do to Prepare
Companies with a global presence that are adopting AI for core business operations should consider the following steps to prepare for the emerging AI regulatory landscape:
- Identify and Assess Highest-Risk AI Applications. At their core, the emerging AI regulations generally require companies to identify their high-risk AI applications, assess whether the risk is too high, and, if so, either discontinue the application or implement sufficient mitigation measures to lower the risk to an acceptable level. Companies that can effectively assess and mitigate the risks of their AI applications are well on their way to basic compliance with many of the emerging AI regulations.
- Risk Factors and Assessments. Create a list of risk factors to classify AI applications as low or high risk, determine how AI applications will be assessed for risk, and compile a list of possible mitigation measures. This allows an organization to prioritize its highest-risk AI applications for review.
- Inventory. To identify high-risk AI applications, most companies need an inventory that includes sufficient detail about each AI application to assess its risk (see the illustrative sketch following this list).
- Responsible Executive or Cross-Functional Committee. Identify an executive or establish a cross-functional committee that reviews high-risk AI applications and, if necessary, implements mitigation measures that will allow for their continued use. This person or committee may also oversee the implementation of AI policies and report to senior management or the board on AI compliance and governance issues.
- Track Regulatory Developments. Tracking AI regulatory developments in key jurisdictions allows a company to identify future obligations that would be particularly onerous to comply with, so that it can start gathering the resources, and allowing the time, needed to achieve compliance before the effective date.
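To make the inventory and risk-triage steps above concrete, here is a minimal, hypothetical Python sketch of an AI application inventory with a simple risk-scoring scheme. Every field name, risk factor, weight, and threshold below is an assumption for illustration only and is not drawn from any of the regulations discussed above.

```python
# Hypothetical sketch of a minimal AI application inventory with risk triage.
# Risk factors, weights, and the high-risk threshold are illustrative only.
from dataclasses import dataclass, field

RISK_FACTORS = {
    "affects_individuals": 3,  # e.g., hiring, lending, insurance decisions
    "uses_personal_data": 2,
    "generative_output": 1,
    "customer_facing": 1,
}
HIGH_RISK_THRESHOLD = 4  # assumed cutoff for escalation to committee review

@dataclass
class AIApplication:
    name: str
    owner: str  # accountable business or executive owner
    purpose: str
    factors: set[str] = field(default_factory=set)
    mitigations: list[str] = field(default_factory=list)

    @property
    def risk_score(self) -> int:
        return sum(RISK_FACTORS.get(f, 0) for f in self.factors)

    @property
    def high_risk(self) -> bool:
        return self.risk_score >= HIGH_RISK_THRESHOLD

inventory = [
    AIApplication("resume-screener", "HR", "rank job applicants",
                  factors={"affects_individuals", "uses_personal_data"}),
    AIApplication("marketing-copy-bot", "Marketing", "draft ad copy",
                  factors={"generative_output", "customer_facing"}),
]

for app in inventory:
    tier = "HIGH (escalate to committee)" if app.high_risk else "low"
    print(f"{app.name}: score={app.risk_score}, tier={tier}")
```

In practice, the flagged entries would feed the review queue of the responsible executive or cross-functional committee described above, with mitigation measures recorded against each application.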
The Debevoise Artificial Intelligence Regulatory Tracker (“DART”) is now available for clients to help them quickly assess and comply with their current and anticipated AI-related legal obligations, including municipal, state, federal, and international requirements.
The cover art used in this blog post was generated by DALL-E.