1. AI definition in the new framework
The UK’s new framework defines AI by reference to two characteristics: “adaptivity” (AI systems are trained on and infer patterns from data in ways that can be difficult for humans to discern) and “autonomy” (AI systems can make decisions without the express intent or ongoing control of a human). Because of these characteristics, the outputs of such systems can be difficult to explain, predict or control, and responsibility for those outputs can be difficult to allocate. By defining AI through these characteristics, the UK government also hopes to bring unanticipated new technologies that are autonomous and adaptive within the scope of its approach. Beyond these two characteristics, no strict legal definition of AI is adopted. Instead, regulators are expected to develop domain-specific definitions of AI based on the two characteristics, with the government coordinating to ensure that those definitions align. The government also notes that it retains the ability to define AI in legislation later if this proves necessary for the pro-innovation framework.
2. A context-specific approach
The framework will not assign rules or risk levels to entire sectors or technologies. Instead, it focuses on regulating the outcomes of AI applications in the context in which they are used. Under this context-specific approach, regulators will be expected, when taking enforcement action, to weigh the risks of using AI against the costs of missed opportunities from not using it, including the failure to employ AI in AI risk assessments.
3. Principles-based approach
Five cross-sectoral principles underpin the framework and are intended to enable responsible AI design, development and use. Regulators must apply them proportionately when implementing the framework and issuing guidance, although they retain ample discretion in doing so. The principles are:
- Safety, security and robustness – AI systems should function in a robust, secure and safe way throughout their life cycle, with risks continually assessed and managed. This includes embedding resilience to security threats into the system. Regulators may introduce measures to ensure technical security and reliability, should coordinate their regulatory activities in this area, and may require regular testing or the use of certain technical standards.
- Appropriate transparency and explainability – Appropriate information (such as when, how and why AI systems are used) should be communicated to the relevant people, and it should be possible for relevant parties to access, interpret and understand the decision-making of the AI system. Regulators must have enough information about a system and its inputs and outputs to apply the other principles (e.g. accountability), and parties directly affected by the system should have enough information to enforce their rights. Regulatory guidance, product labelling and technical standards may support this principle in practice. The level of explainability provided should be appropriate to the context of the AI system, including its risks, the state of the art and the target audience.
- Fairness – AI systems must not undermine the legal rights of individuals or organisations, unfairly discriminate or create unfair market outcomes. Regulators may develop and publish descriptions of fairness for AI systems in their own domains, taking into account applicable law, as well as guidance on relevant law, regulation and standards. Regulators will also need to ensure that AI systems in their domains comply with the fairness principle, and may need to develop joint guidance where their domains intersect.
- Accountability and governance – There should be effective oversight of the supply and use of AI systems, with clear lines of accountability, meaning that appropriate measures are taken to ensure the proper functioning of AI systems throughout their lifetime. This requires actors across the AI life cycle to implement the principles at every stage of an AI system’s life cycle. Regulators need to set clear expectations for compliance and good practice, and may encourage the use of governance procedures to meet these requirements. Regulatory guidance should also reflect the responsibility of regulated entities to demonstrate accountability and good governance.
- Contestability and redress – Users, impacted parties and actors in the AI life cycle must be able to contest an AI decision that is harmful or creates a risk of harm. Regulators must clarify the existing routes to contestability and redress and ensure that the outcomes of AI systems are contestable where appropriate. Regulators should also guide regulated entities to provide clear routes for affected parties to contest AI outcomes or decisions.
The UK government expects regulators to issue or update guidance in line with the principles and will monitor the effectiveness of both the principles and the framework. The principles will not, for now, be placed on a statutory footing, although the government is prepared to introduce a duty for regulators to have due regard to the principles if this proves necessary for their correct application. Such a duty would give regulators a statutory basis for applying the principles in their work while leaving them discretion to determine the relevance of each principle to their domain. Broader legislative changes may also be made where necessary for the application of the principles (e.g. if an existing legal requirement would otherwise prevent a regulator from applying them).

Where the principles come into conflict, regulators, either alone or in collaboration, are expected to prioritise among them. If an AI risk does not fall clearly within the remit of an existing regulator, regulators and government will work together to identify further action (e.g. changes to regulators’ remits or legislative intervention). Regulators, in collaboration with government, will also develop guidance for themselves on interpreting the principles.
4. Central functions and accountability
Aside from the five cross-sectoral principles, the UK government will also establish new central functions to support the framework and the work of regulators. These functions notably include:
- Monitoring, assessment and feedback – Monitoring and evaluating the new regime, gathering data to assess the effectiveness of the framework, supporting regulators in their own monitoring and evaluation, and advising ministers on issues that need to be addressed.
- Supporting coherent implementation of the principles – Developing guidance to support regulators in implementing the principles, identifying obstacles regulators face in doing so, evaluating conflicts in how different regulators interpret the principles, and monitoring the principles’ continued relevance.
- Cross-sectoral risk assessment – Developing a register of AI risks to support regulators’ risk assessments, monitoring and reviewing known risks and identifying new ones, working with regulators to clarify responsibility for new or contested risks so that coverage remains coherent, and identifying where risks are not adequately covered.