EU-US Trade & Technology Council – TTC Joint Roadmap for Trustworthy AI and Risk Management
The EU and the US have created the Joint Roadmap to strengthen their technological and industrial leadership, expand bilateral trade and investment, and promote their values worldwide in the field of AI. Through this partnership, they hope to foster innovation, trust and competitiveness in AI. The roadmap aims to advance shared terminologies and taxonomies for AI, inform approaches to AI risk management and trustworthy AI, build a common repository of metrics for measuring AI trustworthiness and risk management methods, and inform and advance collaborative approaches in international standards bodies.
What: policy guiding document
Impact score: 3
For who: Policy makers, industry, research
URL:
Outcomes TTC 31/05/2023:
https://ec.europa.eu/commission/presscorner/detail/en/statement_23_2992
Takeaways for Flanders:
- The Joint Roadmap foresees the creation of knowledge-sharing mechanisms to exchange cutting-edge scientific research on AI and its associated risks, which have implications for trade and technology.
- The EU and US are working on a common AI Code of Conduct.
Both the EU and the US aim to operationalize the values of a risk-based approach and trustworthy AI through their respective initiatives, such as the EU AI Act, the High-Level Expert Group on AI, the NIST draft AI Risk Management Framework, and the Blueprint for an AI Bill of Rights. While there may be differences in regulatory approaches, both sides acknowledge the significance of shared values in guiding the advancement of emerging technologies. This Joint Roadmap emphasizes the need for scientific support, international standards, shared terminology, and validated metrics.
Objectives of the Joint Roadmap
- AI terminology and taxonomy
The EU and US intend to map terminology and taxonomy in key EU and US documents and develop a shared understanding. This work will also leverage global work already done and ongoing (such as within the International Organization for Standardization [ISO], the OECD, and the Institute of Electrical and Electronics Engineers [IEEE]).
- EU-US cooperation on AI standards and tools for trustworthy AI and risk management
A) AI standards
The EU and US aim to lead in international standardization efforts, promoting open and transparent development of technically sound and performance-based standards. Global leadership and cooperation on international AI standards are essential to establish consistent rules for market competition, prevent trade barriers, and foster innovation. The EU and US aim to provide leadership by actively participating in international standards development, adhering to WTO principles, and identifying gaps for future development. They will engage stakeholders, prioritize AI trustworthiness, bias, and risk management, and include small and medium-sized enterprises in the process.
B) Tools for trustworthy AI and risk management
The EU and United States will collaborate to create a shared repository of metrics and methodologies for measuring AI trustworthiness and risk management, including environmental implications. They will analyze existing tools and standards from various stakeholders to identify commonalities, gaps, and areas for improvement. Findings from these studies will inform the development of AI standards and facilitate the deployment of trustworthy AI tools aligned with those standards.
- Monitoring and measuring existing and emerging AI risks
The EU and US aim to establish knowledge-sharing mechanisms to exchange cutting-edge scientific research on AI and its associated risks, which have implications for trade and technology. They plan to take concrete steps in two key areas.
Firstly, they seek to develop a tracker that identifies existing and emerging risks in AI, based on context, use cases, and empirical data. This tracker will serve as a common ground to define the origin and impact of risks, organize risk metrics, and establish methodologies for risk avoidance or mitigation. It will be continuously updated to incorporate new risks arising from development dynamics, improved understanding of potential harms, compound risks from system interactions, and novel AI methods or contexts of use.
Secondly, they aim to create interoperable tests and evaluations for AI risks. These evaluations will strengthen research communities, establish methodologies, support standards development, facilitate technology transfer, inform consumer choices, and promote innovation through transparent system functionality and trustworthiness. Evaluations will consider the context of AI deployment, associated harms and benefits, and the evolving nature of AI technology, including its diverse architectures and complex behaviour. The focus will be on trustworthiness characteristics alongside traditional metrics like accuracy.
For each objective, a dedicated working group has been established. To date, the groups have:
- Issued a list of 65 AI terms essential for understanding risk-based approaches to AI. The list includes each term's interpretation in the European Union and in the United States, as well as shared definitions agreed upon by both parties.
- Mapped the involvement of the EU and US in standardization activities
Because of recent developments in generative AI, this theme will also be included in the Joint Roadmap. More details will follow at a subsequent TTC consultation.
Further activities of the TTC (in addition to the Joint Roadmap)
The EU and US have joined forces to create a voluntary AI Code of Conduct as a proactive measure ahead of formal regulation. The objective is to establish non-binding international standards on risk audits, transparency, and other criteria for companies involved in AI development. The finalized AI Code of Conduct will be presented to G7 leaders as a joint EU-US proposal, and companies will be encouraged to adopt it voluntarily.