Consultation paper on AI regulation: emerging approaches across the world
UNESCO has published a consultation paper on emerging regulatory approaches to AI around the world. The paper explores nine different regulatory approaches and was open to stakeholder feedback. A revised version is expected to be published shortly.
What: consultation paper
Impact score: 4
For whom: policymakers, legal experts and AI governance experts.
URL: https://unesdoc.unesco.org/ark:/48223/pf0000390979
Key takeaways for Flanders
Policymakers can use this consultation paper as an overview of regulatory approaches that can be applied in their legislative work. Crucially, different approaches can be combined when regulating AI, depending on a legislator’s needs and strategy.
Legislative bodies worldwide are increasingly interested in regulating AI, which makes an overview of possible legislative approaches important. The aim of UNESCO’s consultation paper is to contribute to the ongoing global debate on how best to regulate the development and use of AI systems. The paper first delineates its scope and defines key terms (such as “AI systems”) used throughout. Second, it discusses the global regulatory landscape around AI, showing that, in addition to regulation through existing transversal legislation and human rights law, legislative bodies across the world are increasingly submitting AI bills or referring to AI regulation during the legislative process. Third, and primarily, the paper outlines nine emerging regulatory approaches used by legislators on a global scale. Finally, it concludes with a recommendation that policymakers develop tailored regulations addressing the specific needs and challenges of their respective countries.
9 regulatory approaches
The paper outlines nine regulatory approaches to AI. Together they form a spectrum of regulatory intensity, ranging from less interventionist, non-binding, principles-based guidelines to more coercive and demanding regulatory models. They are not listed in order of importance or desirability, and they are not mutually exclusive: the approaches can be combined.
- Principles-based approach: If AI regulations are based solely on principles, they do not impose specific obligations or limits. Such regulations provide stakeholders with a foundational set of principles to guide human rights-abiding, ethical, responsible and human-centric development and use of AI systems. This is the case in legislation in Peru, Brazil, Colombia and Costa Rica, amongst others. Principles combined with specific obligations and rights can also serve to guide the interpretation of mandatory rules, as in the United Kingdom's 'A pro-innovation approach to AI regulation' proposal.
- Standards-based approach: These regulations (partially or totally) delegate regulatory powers to public, private or hybrid organisations that develop standards. Professional and industry organisations can participate in the development of standards that guide the implementation of mandatory rules, such as in the EU AI Act.
- Agile and experimentalist approach: Policymakers develop flexible regulatory frameworks, such as regulatory sandboxes, that enable public and private organisations to test new business models, methods, infrastructure and so on, under the supervision of public authorities. Such sandboxes have been implemented in the EU AI Act, the United Kingdom's 'A pro-innovation approach to AI regulation' proposal and the Brazilian AI bill, amongst other regulations.
- Facilitating and enabling approach: These regulations aim to create an enabling environment that encourages the development and use of responsible, ethical and human rights-compliant AI systems, for example by building human capital and infrastructure. The UNESCO Readiness Assessment Methodology and several Latin American bills are related to this approach.
- Adapting existing laws approach: Instead of issuing new AI bills, policymakers adapt sector-specific and transversal rules to make progressive improvements to existing regulatory frameworks (e.g. data protection laws, labor laws, criminal codes). The European legislator did this, for example, with Article 22 GDPR.
- Access to information and transparency mandates approach: This approach deploys transparency instruments that enable the public to access basic information about AI systems, as is the case in France, Colombia and several other Latin American countries, as well as in the EU AI Act.
- Risk-based approach: The obligations imposed by these regulations are based on the level of risk posed by the use and development of different types of AI systems in specific contexts. This approach is implemented, for instance, in the Canadian Directive on Automated Decision-Making, the EU AI Act, and several Latin American AI bills.
- Rights-based approach: This approach aims to ensure the protection of individuals’ rights and freedoms throughout the AI system’s lifecycle, for example the GDPR rules on the processing of personal data by automated decision-making systems, or Brazilian bill no. 2238/2023, which establishes rights for persons affected by AI systems.
- Liability approach: Under this approach, regulations assign responsibility and sanctions for problematic uses of AI systems. Standards of conduct are enforced through criminal, administrative or civil liability, as is the case in the EU AI Act and the Brazilian AI bill.
3 key questions as guidance for parliamentarians
The consultation paper provides 3 key questions for parliamentarians interested in exploring AI regulatory instruments, as well as criteria and input for answering them. The first question, ‘Why regulate?’, can be answered with three reasons to regulate: addressing a public problem; protecting and promoting fundamental and collective rights; and/or achieving a desirable future.
The second question, ‘When to regulate?’, is answered by UNESCO using four conditions: regulation is timely when there is a justification for regulating, relevant regulatory tools are available, no other policy tools are better suited, and regulation is legally and politically feasible. To answer the third and final question, ‘How to regulate?’, UNESCO provides several recommendations to take into account during the regulatory process. These include consideration of human rights and digital divides, agile regulation, participatory and inclusive legislative processes, responding to specific policy challenges through evidence-based processes, and learning from best practices in other jurisdictions.
What’s next?
The publication of this consultation paper opened a feedback process during which UNESCO sought input from various stakeholders. A revised version of the paper is expected to be presented at an upcoming Inter-Parliamentary Union (IPU) assembly in October 2024.