South Korea has become the latest country to pass a national AI law. The “Basic Act on the Development of Artificial Intelligence and Establishment of Foundation for Trust” (the “Basic Act” or the “Act”) has several similarities to – and differences from – the EU AI Act, and comes into force on January 22, 2026.
Like its EU counterpart, the Basic Act adopts a risk-based approach to regulating the deployment, operation and development of AI systems, with the more onerous requirements applying to specific “high-risk” use cases only. It also imposes new duties on South Korea’s executive branch to oversee and set standards for AI deployment and development. Currently, there is no official English translation of the Act, so our analysis is based on an unofficial translation.
In this article, we provide an overview of the Basic Act’s requirements that are most relevant to businesses and compare them to the obligations of other AI-specific laws.
Basic Act’s Scope: Who Has to Comply?
The Basic Act has very broad extraterritorial effect; it applies to all AI-related actions performed abroad that affect South Korea’s domestic market or users. Currently, there is no further guidance on how this requirement will be interpreted or applied in practice. However, the Act is likely to be more relevant for businesses that have entities or operations established in, or that target business in, South Korea.
The law covers both businesses that develop or provide AI systems themselves (“Developers”) and those that use AI systems developed by third parties, where that use falls within the jurisdictional scope of the Act (“Deployers”, or “Operators” in the Act’s terminology).
A Risk-Based Approach: Four Categories of Requirements
The Basic Act defines AI as “the electronic implementation of human intellectual abilities, such as learning, reasoning, perception, judgement, and understanding of language.” This is thematically aligned with the OECD’s AI definition, which has been adopted by most other AI laws.
The Act’s requirements and restrictions do not apply to AI developed or deployed for South Korean national defense or security purposes.
The Act takes a tiered risk-based approach to regulating AI, imposing different requirements on four main categories of AI systems: (1) all AI systems covered by the Act’s jurisdictional scope; (2) “high-risk” AI systems (called “high-impact” systems under the Act’s terminology); (3) generative AI systems; and (4) AI systems with underlying models that exceed a certain level of compute power.
1. All Covered AI Systems
The Basic Act sets in place general regulations that apply to Developers and Deployers of all types of AI systems:
- AI risk assessments. Developers and Deployers must examine whether their relevant AI products and services qualify as high-risk under the Act. They may request that the Minister of Science and Information and Communication Technology (the “Minister of Science and ICT”) run an assessment verifying the status of their AI systems.
- Registration of domestic agents. Businesses that do not have physical offices in South Korea and that meet certain criteria yet to be determined by Presidential decree (such as revenue or number of users) will have to appoint a local domestic agent. The domestic agent will report required AI risk assessments to the government and will support the implementation of risk management protocols for high-risk AI. Failure to comply with requirements to establish a registered agent may result in a maximum administrative fine of 30 million won (approximately $21,000).
- Internal ethics & compliance. Developers and Deployers may choose to establish an internal committee to verify compliance with their ethical principles and participate in research and development goals related to AI safety and ethics.
2. High-Risk AI Systems
The most onerous obligations under the Act apply to “high-risk” AI systems. These are defined as any AI system that has a serious impact on, or is likely to cause danger to, human life, physical safety, and basic rights, and that is utilized in any of the following areas:
- A judgment or evaluation that has a significant impact on an individual’s rights or obligations, such as hiring decisions and loan reviews;
- The development and use of certain medical devices;
- The supply of energy and drinking water;
- The management and operation of nuclear materials, and transit and traffic systems;
- Analysis and use of biometric information in criminal investigations or arrests;
- Student evaluation in early education; and
- Any other areas to be determined in the future by Presidential decree.
The Basic Act sets out a variety of obligations related to high-risk AI systems, most of which apply to Deployers, with only a small subset applying to Developers. These requirements include:
Requirements for high-risk AI system Deployers:
- Government certification and reporting. The Minister of Science and ICT is required to inspect high-risk AI systems, and certify that they comply with the Basic Act’s requirements, in advance of their deployment. In particular, the Minister’s focus will be on ensuring that these high-risk AI systems meet safety and reliability standards.
- User disclosures. Deployers of high-risk AI must notify users in advance that their product or service involves the use of high-risk AI.
Requirements for both high-risk AI system Deployers and Developers:
- Risk management. Both Developers and Deployers providing high-risk AI must implement certain measures to further ensure the safety and reliability of their high-risk AI systems. Specifically, they must:
◦ Create and implement risk management plans;
◦ Generate AI model cards that explain the final results derived by the AI, the main criteria used to derive those results, and the training data used;
◦ Establish guardrails to protect users;
◦ Maintain adequate human oversight; and
◦ Keep records regarding measures implemented to ensure safety and reliability.
- Investigations. Finally, both Developers and Deployers should be prepared to submit relevant data to authorities or undergo investigations related to any failure to provide the required high-risk notice or to meet obligations to identify risks and respond to AI safety incidents.
3. Generative AI Systems
The Basic Act sets out new transparency and labeling requirements for Deployers of Generative AI (“GenAI”). Deployers must:
- Notify individual users in advance that their product or service uses GenAI.
- For any output created by GenAI, indicate that it is AI-generated, in particular when that output is used as part of a product. It is unclear what steps are required to “indicate” that output is AI-generated, but presumably this is a lesser standard than the “obvious” labeling required for deepfakes (see below).
- For GenAI systems that create deepfakes and other content that is “difficult to distinguish from reality,” label the output in a way that makes it obvious to users that it is AI-generated, e.g., via watermarking.
4. AI Systems with Models Exceeding Certain Compute Power
The final category of requirements applies to AI systems that are underpinned by models meeting certain computational thresholds, to be established in a future Presidential decree. It is currently unclear how those models will be scoped, but based on similar requirements in other AI laws, the Act will likely cover only the most advanced frontier foundation models, at least when the Act first goes into effect (e.g., GPT 5 or Llama 4).
The Basic Act requires that Developers and Deployers of these powerful models conduct risk assessments and establish risk management plans. Based on the comparable requirements for high-risk AI systems and on other industry standards, these plans will likely include provisions for incident response, human oversight, guardrails to protect users, model cards explaining how AI results were produced (e.g., the training data used), and records of measures taken to ensure the safety and reliability of the AI systems. The implementation and results of the risk assessments and risk management plans will have to be reported to the Minister of Science and ICT by the registered domestic agent.
Fines & Penalties
If the Minister of Science and ICT determines that a business has violated any of the Act’s requirements, it can order the business to take the measures necessary to suspend or correct the relevant breach. However, the Minister is permitted to impose administrative fines (of up to 30 million won, or approximately $21,000) only for breaches of three of the Act’s requirements, specifically for:
1. Failing to inform individuals where services are being provided using high-risk AI or GenAI;
2. Failing to designate a local domestic agent where required; and
3. Failing to comply with any order from the Minister for suspension or correction.
Comparison to Other AI Regulations Globally
The global AI regulatory landscape is in a highly fluid state. While many jurisdictions and regulators will likely introduce AI-specific regulation this year, so far only a small number of such laws have been passed. The Basic Act is the latest addition to this growing body of law.
While the Act claims to be based on the EU AI Act, and adopts a risk-based approach to regulation similar to that of the Colorado AI Act (the “CAIA”), there are notable substantive differences among the three.
- The Basic Act, like the CAIA, does not contain any AI bans or prohibitions. By contrast, the EU AI Act contains eight AI use cases that are prohibited.
- While all three Acts predominantly regulate “high-risk” use cases, each contains a different definition of, and different requirements for, such use cases. Even where there is overlap in what each law considers a high-risk use case, the definitions vary slightly. For example, all of the Acts classify employment-related AI use as high-risk. However, while the EU AI Act covers AI systems intended to be used for any recruitment or employment-related purpose, the Basic Act is narrower and is limited to AI uses involving a judgment or evaluation that has a significant impact on hiring or employment. Clients should be aware of these nuances and – to the extent they are covered by a given law – ensure that the relevant use cases are captured within the scope of their AI governance programs.
- The Basic Act includes some (limited) compliance obligations for all AI systems covered by its scope, irrespective of their risk profile. Whereas the CAIA and the EU AI Act impose requirements only on AI systems that fall into the (typically narrow) prohibited risk, high-risk, or transparency obligation categories, as applicable, the Basic Act contains baseline requirements for all AI systems within its scope, regardless of their “risk” classification. Consequently, in some respects the breadth of South Korea’s new law is wider.
- The Basic Act has (slightly) narrower transparency requirements. While both the EU AI Act and the Basic Act have specific transparency requirements, the EU AI Act’s requirements are broader and more prescriptive, and – unlike the Basic Act – are not limited to generative AI.
Four Practical Takeaways
1. Prioritize reducing operational risk with your AI governance program. The AI regulatory landscape is highly fluid and rapidly evolving. Building an AI governance program to meet the requirements of all the pending AI laws that are not yet in effect is extremely difficult, both because of the inconsistencies among them and because many of the necessary details and guidance have yet to be released. Moreover, these laws may be amended before they come into effect, and their effective dates may be delayed. It is therefore better to start by focusing on reducing operational AI risks by ensuring that the organization’s AI tools and use cases deliver value, work as intended, and do not cause unexpected or unnecessary problems. Once these goals are largely achieved, organizations can look at the pending AI regulatory requirements that are likely to come into effect soon and conduct a gap assessment.
2. Determine whether the Basic Act will likely apply to your business. Organizations should also assess whether they are likely to be covered by the Basic Act and, if so, identify which of their AI systems are likely to be in scope. In the absence of further guidance, on a plain reading, the Act appears to have broad extraterritorial effect. Again, however, in practice, it likely will be of greater relevance for businesses with entities or operations established in South Korea, or that target business within the country.
3. Conduct a gap assessment. Once AI-related operational risks have been addressed and the organization has determined that the Basic Act is likely to apply to its AI program, a gap assessment should be conducted to determine where significant gaps might exist, and the time and resources that will be needed to close those gaps, which may include additional personnel, software, and budget. In particular, clients should consider whether any AI risk assessments being done for other purposes can also be used for compliance with the Basic Act.
4. Continue to monitor developments in Korea. While it currently appears that many of the governance procedures required by the EU AI Act and the CAIA will also satisfy the obligations under the Basic Act, there are important differences at the margins. Organizations that may be subject to two or more of these laws should keep track of any significant guidance, clarifications, or amendments that are released in connection with these laws, as they may significantly impact the nature and burden of compliance.
*****
The cover art used in this blog post was generated by ChatGPT.