policy monitor

OECD - OECD Framework for the Classification of AI Systems

The OECD has proposed a classification process to critically review AI systems. The framework is intended to help policymakers identify risks specific to AI, such as bias, lack of explainability and lack of robustness. It consists of five categories and examines:

  • How AI systems influence people and the planet
  • How AI systems influence the economic context
  • Which data was used
  • How the AI model functions
  • What tasks the AI system performs

The framework makes it easier for policymakers to assess what can be considered risky in the context of AI.

What: Policy document

Impact score: 5

For whom: governments, policymakers

URL:

The framework is a user-friendly tool to evaluate AI systems from a policy perspective and to help policymakers and legislators characterise AI systems deployed in specific contexts. It is not just about what AI systems are capable of, but also about where and how they put this into practice. For example, image recognition technology may be very useful for smartphone security, but when used in other situations, it may violate human rights.

The OECD framework consists of five categories and examines AI systems based on how they affect people and the planet; the economic context in which the AI system is deployed; what data has been used; how the AI model functions and what tasks the AI system performs. The framework can be used for any AI system, from a GPT-3 language model to credit-scoring systems.
Each of the dimensions of the framework has a subset of attributes and characteristics to define and assess policy implications and help guide an innovative and reliable approach to AI policy making and governance, as set out in the OECD AI Principles.
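A classification along the five dimensions can be pictured as a simple record. The sketch below is purely illustrative: the field names paraphrase the five categories summarised above, and the example values for a credit-scoring system are assumptions for demonstration, not the OECD's official attribute set.

```python
from dataclasses import dataclass

@dataclass
class AISystemClassification:
    """One AI system described along the framework's five dimensions."""
    people_and_planet: str   # who is affected, and how
    economic_context: str    # sector and business function of deployment
    data_and_input: str      # provenance and kind of data used
    ai_model: str            # how the underlying model functions
    task_and_output: str     # what tasks the system performs

# Hypothetical example: a credit-scoring system classified along the dimensions
credit_scoring = AISystemClassification(
    people_and_planet="loan applicants; risk of discriminatory outcomes",
    economic_context="financial services; consumer lending decisions",
    data_and_input="personal financial history; proprietary structured data",
    ai_model="supervised machine learning; limited explainability",
    task_and_output="scoring; output informs lending decisions",
)

print(credit_scoring.economic_context)
```

Walking through the dimensions one by one in this way is what lets the same template cover systems as different as a GPT-3 language model and a credit-scoring tool.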

In particular, the framework provides a basis for:
  • Promoting a common understanding of AI: identifying the characteristics of AI systems that matter most, helping policymakers and others tailor policies to specific AI applications, and helping identify or develop metrics to assess more subjective criteria (such as impact on well-being).
  • Supporting sector-specific frameworks: providing the basis for more detailed application- or domain-specific overviews of criteria, in sectors such as healthcare or finance.
  • Supporting risk assessment: providing the basis for related work on a risk assessment framework that helps reduce risks and mitigate their consequences, and on a common framework for AI incident reporting that facilitates global consistency and interoperability of reporting.
  • Supporting risk management: contributing information on risk mitigation, compliance and enforcement throughout the life cycle of AI systems, including with regard to corporate governance.
Next steps: According to the OECD announcement, the current framework is intended to lay the foundation for a future risk assessment framework that helps reduce and mitigate risks. It will also provide a basis for the OECD, its members and partner organisations to develop a common framework for reporting AI incidents.


Watch the OECD presentation below: