Policy monitor

South Korea - South Korean Basic AI Act

In late December 2024, South Korea unveiled the final result of one of its last legislative projects of the year: the Basic AI Act, also referred to as the South Korean AI Act (SKAIA). The legislation clearly draws inspiration from its European counterpart but has been tailored to South Korea’s specific interests and perspectives. For instance, a similar risk categorisation applies to AI systems, with ‘high-impact AI systems’ being the equivalent of the EU AI Act’s high-risk AI systems. Transparency requirements also feature in the SKAIA, with specific labelling requirements imposed on certain AI outputs such as deep fakes. The SKAIA places a strong emphasis on fostering innovation and encouraging investment in the technology sector. The South Korean legislation is expected to take effect in January 2026.

What: law

Impact score: 1

For whom: AI providers/importers/manufacturers, Belgian/European legislators

URL: https://likms.assembly.go.kr/b...

Secondary sources:

https://iapp.org/news/a/analyzing-south-korea-s-framework-act-on-the-development-of-ai

https://www.linkedin.com/pulse/united-korean-ai-act-bill-contents-comparison-eu-english-min-h98kc/?trackingId=WMuhLWxQTtW1cVQsjVt5wA%3D%3D

https://iclr.net/uncategorised/korean-national-assembly-passes-the-ai-basic-act/

https://www.legal500.com/developments/thought-leadership/a-new-era-for-ai-republic-of-korea-takes-a-bold-step-with-ai-regulation/

Key takeaways for Flanders:

The Act shows that other countries are approaching AI regulation in a way similar to the European legislature. This suggests that the EU AI Act could have an impact on international AI regulation comparable to that of the General Data Protection Regulation (GDPR) in its day. This could strengthen the competitive position of Flemish AI developers.


In December 2024, the Basic Act on the Development of Artificial Intelligence and the Establishment of Trust – also referred to as the South Korean AI Act (SKAIA) – was passed by South Korea’s National Assembly. The law unifies 19 separate proposals into a cohesive framework designed to balance innovation in AI with ethical, safety and societal responsibilities.

The SKAIA aims to foster AI development while addressing risks, particularly in high-impact sectors such as healthcare, public services, and safety-critical applications. Key elements of the Act include a classification system to identify “high-impact AI systems”, transparency requirements (for high-impact or generative AI), risk management (i.e., risk management plans, user protection measures, and human oversight), the establishment of governance bodies, and ethical guidelines.

With the SKAIA set to take effect in January 2026, South Korea’s Ministry of Science and ICT is committed to creating policies and legislation that will assist businesses in preparing for compliance. This provides a one-year window for organisations to align their operations with the new rules.

A lot of the terminology used in the EU AI Act can be found in the SKAIA as well. Below is an outline of some similarities and differences:

  • Both regulations intend to protect human rights and dignity and share the aim to improve quality of life. This is illustrated by the ‘Influence Assessment’ included in the SKAIA, which could be compared to the fundamental rights impact assessment in the EU AI Act.
  • Both regulations share a risk-based approach, working with categories of AI systems.
    • Akin to Article 6 of the EU AI Act, which identifies “high-risk AI systems”, the SKAIA mainly targets “high-impact AI”. Like the EU AI Act, the SKAIA imposes stricter requirements on these high-impact AI systems, such as certain measures concerning risk management (identification and mitigation) and technical requirements (user protection measures, human oversight, etc.). In the SKAIA, the high-impact category consists of AI systems that “affect life, physical safety, and fundamental rights”, and the Act also lists targeted sectors, such as healthcare. Both the EU AI Act and the SKAIA therefore aim at AI systems that could potentially harm individuals and at AI systems deployed in impactful sectors.
  • Both instruments include certain transparency obligations.
    • The SKAIA includes an obligation to inform users when they interact with a product or service that uses high-impact or generative AI. It also requires that content generated by generative AI be clearly labelled and that users receive notifications about synthetic media (i.e., artificially created content). These provisions seem to be a response to the growing spread of synthetic content and the related disinformation challenges South Korea faces.
    • This reflects Article 50 of the EU AI Act, which also provides transparency obligations for certain AI-generated content, such as deep fakes or other synthetic content.
  • Oversight of the SKAIA is expected to be carried out by several new institutions, including a National AI Committee, an AI Policy Center, and an AI Safety Research Institute. These bodies are intended to collaborate on setting guidelines, ensuring compliance, and supporting policy development and international cooperation. This approach contrasts with the EU, where, in addition to newly created institutions (like the AI Office), much of the responsibility remains with existing national competent authorities. While South Korea has a similar oversight mechanism for product legislation (where authorities certify products and services through “KC Certification” and sector-specific appointed authorities conduct post-market surveillance), the SKAIA does not seem to rely on these existing bodies. Instead, it establishes new institutions for oversight.
  • Notably, the SKAIA also imposes risk management obligations (risk identification, assessment, and mitigation) on AI systems that exceed a specific computational threshold, in addition to high-impact AI systems.
  • Both regulations emphasise ethical AI development. The SKAIA aligns with European principles by embedding guidelines for human rights, human dignity, and societal well-being, while promoting national competitiveness.
  • Like the EU, South Korea tries to bolster innovation. The SKAIA incorporates industrial support measures to foster AI innovation and investment in AI. In addition to encouraging investment in the South Korean tech sector, it mandates government support for the smooth implementation of temporary permits and regulatory sandboxes for AI-converged products and services.
  • The repercussions of non-compliance under the SKAIA are less far-reaching than in the EU. Not only are the fines significantly lower, but the obligations themselves are also less stringent than those in the EU AI Act. Non-compliant businesses face fines of up to 30 million KRW (approximately EUR 20,000), far below the penalties under the EU AI Act, which can reach EUR 35 million or 7% of global annual turnover.

Closing remarks

While the two frameworks differ in specific obligations, much of the underlying rationale aligns. Many of the values embedded in the SKAIA are consistent with the European approach, demonstrating a shared commitment to ethical and responsible AI development.