South Korea's 'Framework Act on the Development of Artificial Intelligence and Establishment of Trust' (AI Basic Act), passed by the National Assembly in January 2025, took effect on January 22, 2026. As the world's second comprehensive AI regulatory framework after the EU AI Act, it introduces specific obligations for high-impact AI and generative AI operators. A grace period of at least one year applies before administrative fines are imposed, but legal obligations are effective from the enforcement date.
Background: Why the AI Basic Act
The AI Basic Act pursues two parallel objectives: fostering AI industry growth and protecting citizens' fundamental rights. Previously, AI was regulated only indirectly through existing laws such as the Personal Information Protection Act and the E-commerce Act; a dedicated framework was created because that patchwork could not keep pace with the rapid proliferation of AI technology. The Ministry of Science and ICT (MSIT) serves as the supervising authority and is currently finalizing the detailed enforcement decrees.
Key Provisions: High-Impact AI and Generative AI Obligations
High-impact AI is defined as 'AI systems that significantly affect human life, physical safety, or fundamental rights, or that pose risks of doing so.' Covered sectors include healthcare, energy, transportation, hiring, biometric analysis, and government decision-making.
- Fundamental rights impact assessment: High-impact AI operators must conduct impact assessments before launching services
- Risk management framework: Operators must establish risk management plans spanning the system's lifecycle and report outcomes to MSIT
- Transparency requirements: AI-generated audio, images, or video that are difficult to distinguish from human-created content must be clearly labeled
- Prior notification: Users must be informed in advance when a product or service utilizes AI
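The transparency obligation above can be illustrated with a minimal sketch. The `GeneratedMedia` structure, field names, and disclosure wording here are all hypothetical; the actual labeling format will be defined by the forthcoming enforcement decrees.

```python
from dataclasses import dataclass, field

@dataclass
class GeneratedMedia:
    kind: str                                        # e.g. "audio", "image", "video"
    payload: bytes                                   # the generated content itself
    disclosures: list[str] = field(default_factory=list)

# Illustrative disclosure text; not prescribed by the Act.
AI_NOTICE = "This content was generated by AI."

def label_ai_generated(media: GeneratedMedia) -> GeneratedMedia:
    """Attach an AI-generation disclosure before the media is delivered."""
    if AI_NOTICE not in media.disclosures:
        media.disclosures.append(AI_NOTICE)
    return media
```

A pipeline would call `label_ai_generated` as the final step before serving synthetic audio, images, or video to users.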
Foreign companies must designate a domestic representative if they meet any of these thresholds: annual revenue exceeding 1 trillion KRW (~$681M), domestic AI service revenue exceeding 10 billion KRW, or an average of 1 million daily Korean users over the preceding three months. Violations carry administrative fines of up to 30 million KRW (per Cooley's legal analysis).
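The three thresholds are disjunctive: meeting any one triggers the obligation. A minimal sketch, using the figures from the article (field and function names are illustrative, not from the Act):

```python
from dataclasses import dataclass

@dataclass
class ForeignOperator:
    annual_revenue_krw: int        # total annual revenue, in KRW
    korean_ai_revenue_krw: int     # domestic AI service revenue, in KRW
    avg_daily_korean_users: int    # average over the preceding three months

def must_designate_representative(op: ForeignOperator) -> bool:
    """Meeting ANY one threshold triggers the domestic-representative duty."""
    return (
        op.annual_revenue_krw > 1_000_000_000_000      # over 1 trillion KRW
        or op.korean_ai_revenue_krw > 10_000_000_000   # over 10 billion KRW
        or op.avg_daily_korean_users >= 1_000_000      # 1 million+ daily users
    )
```

For example, a company with modest global revenue but 1.2 million average daily Korean users would still be caught by the third prong.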
Implications for Voice AI Companies
Voice AI services do not automatically fall under the high-impact classification, but those deployed in healthcare guidance, financial advisory, or recruitment screening are likely to qualify. Voice synthesis and conversational AI, in particular, are subject to the generative AI transparency obligations.
The grace period applies to penalties, not to obligations: all legal duties take effect on January 22, 2026. While fines may not be imposed immediately, preparation for the regulatory framework should begin now.
What's Next: Enforcement Decrees and Corporate Roadmap
MSIT is finalizing the detailed enforcement decrees and supporting corporate compliance through its integrated support center. Companies should prioritize three actions: assess whether their AI services qualify as high-impact AI, build labeling and notification systems for generative AI outputs, and design an internal process for fundamental rights impact assessments.
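The first action, triage by sector, can be sketched as a simple membership check against the high-impact sectors listed earlier. The sector names follow the article; the helper itself is a hypothetical starting point for an internal review process, not legal advice.

```python
# Sectors the Act designates as high-impact, per the article's summary.
HIGH_IMPACT_SECTORS = {
    "healthcare", "energy", "transportation",
    "hiring", "biometric_analysis", "government_decision_making",
}

def needs_high_impact_review(service_sectors: set[str]) -> bool:
    """Flag a service for high-impact review if it touches any listed sector."""
    return bool(service_sectors & HIGH_IMPACT_SECTORS)
```

A flagged service would then proceed to the fundamental rights impact assessment and lifecycle risk management planning described above.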