The UK government has recently published a Policy Paper setting out its early proposals for what the UK’s regulatory framework in respect of artificial intelligence (AI) might look like (the “Framework”). This follows the National Artificial Intelligence Strategy, which was published in September 2021 and specified AI regulation as a priority for the UK government.
The proposals are in their earliest stage at present, but they provide insight as to the UK government’s likely approach to regulating AI. They also indicate some notable departures from the European Union’s (EU) approach to AI regulation, as set out in the EU’s draft Artificial Intelligence Act.
At this stage, the UK government is clearly keen to attract AI developers to the UK by promising a regulatory environment that will nurture development and innovation. Indeed, the Policy Paper itself is entitled “Establishing a pro-innovation approach to regulating AI”.
While the proposals are very high level, we set out the key points of interest below. Naturally, the Framework will develop significantly over the coming months and years.
1. Approach to Regulation
The UK government’s approach to AI regulation will put a particular emphasis on the context in which an AI system is used. Existing regulators will be required to regulate AI in their own “sectors and domains”: the Framework will establish core “cross-sectoral principles”1 and leave regulators to “interpret, prioritize and implement” those principles, as well as identify and assess risk at the application level.
The UK government’s initial proposals for the “cross-sectoral principles” are as follows.
1. Ensure that AI is used safely.
2. Ensure that AI is technically secure and functions as designed.
3. Make sure that AI is appropriately transparent and explainable.
4. Embed considerations of fairness into AI.
5. Define legal persons’ responsibility for AI governance.
6. Clarify routes to redress or contestability.
Under the Framework, each regulator (see “Regulators” section below) would be asked to ensure that each of these principles is satisfied, and then proceed to impose and implement its own restrictions and regulations based on its own assessment, resulting in “sector or domain-specific AI regulation measures”. Notably, the UK government will be asking regulators to “focus on high risk concerns rather than hypothetical or low risks associated with the AI” in order to encourage innovation and avoid unnecessary barriers.
This approach is notably different from the EU’s approach, which establishes central regulatory principles with less scope for adaptation by regulators. The UK’s principles-based approach, which mirrors business and investment-friendly regulation in other areas including subsidies and financial services, is to be welcomed.
2. Purpose of Regulation
The Framework will aim to “maximize the full opportunities which AI can bring to the UK”, while also building confidence in the ethical and responsible use of AI in the UK. The UK government intends to strike this balance by “developing a pro-innovation, light-touch and coherent regulatory framework, which creates clarity for businesses and drives new investment”.
At this stage, the Framework seems more focused on encouraging innovation and development than the EU approach which, on one view, can be characterized as a more cautious ‘safety first, innovation second’ stance. That being said, the EU approach is said ultimately to encourage innovation, and it may very well be that as the Framework is further codified the two regimes will align more closely in their priorities.
3. Definition of “AI”
The Framework does not set out a single, universal definition of AI, and the UK government does not propose to adopt one.
The UK government expressly adopts a different stance to the EU in its approach to defining “AI”. It notes that the EU “has set out a relatively fixed definition”, which does not “[capture] the full application of AI and its regulatory implications” and which the UK government does not believe is “right for the UK”.
Instead, the UK government’s approach is to “set out the core characteristics of AI to inform the scope of the AI regulatory framework but allow regulators to set out and evolve more detailed definitions of AI according to their specific domains or sectors”, the aim being to “regulate the use of AI rather than the technology itself”.
Accordingly, the Framework identifies two core characteristics that underlie issues which current regulation may not be equipped to address: the “adaptiveness” and the “autonomy” of the technology. In considering their definitions of AI, regulators should use these core characteristics as their starting point.
In short, the UK government’s approach will be to “set out the core characteristics and capabilities of AI and guide regulators to set out more detailed definitions at the level of application”.
4. Risk Management Approach
The Framework is built on a proportionate risk-based approach. In principle, each regulator will be required to identify and assess the risks of AI systems on the “application level”, the idea being to enable “targeted and nuanced” responses to risk. The UK government hopes that this approach will allow regulatory structures to adapt quickly to emerging risks in particular areas.
The Framework puts a notable emphasis on the “light-touch” that regulators will be encouraged to take, for example by adopting “voluntary or guidance-based” approaches. Also of note is the Framework’s suggestion that regulators will be asked to focus on “applications of AI that result in real, identifiable, unacceptable levels of risk, rather than seeking to impose controls on uses of AI that pose low or hypothetical risk”.
In short, the Framework proposes that the UK government will set out the cross-sectoral principles, and then leave regulators to “lead the process of identifying, assessing, prioritising and contextualising the specific risks addressed by the principles”, likely with the aid of further supplementary or supporting guidance issued by the UK government.
5. Regulators
The UK government does not propose to establish a central AI regulator or to create a body to coordinate the regulation of AI across the various domains (akin to the EU’s European Artificial Intelligence Board). Instead, the Framework proposes the use of existing regulators to regulate AI in their respective domains, in a way that “catalyses innovation and growth”.
The Framework sets out examples of regulators which are already addressing AI, including the Information Commissioner’s Office, the Equality and Human Rights Commission, and the Bank of England and the Financial Conduct Authority (through their joint establishment of the Artificial Intelligence Public-Private Forum). Among a number of other regulators, the UK government states that it will also work with the Competition and Markets Authority, which is currently investigating various AI-related cases, on the implementation of the proposals.
There is concern that the use of existing regulators will lead to inconsistency and confusion for consumers and businesses. The Framework acknowledges such concerns, including a potential lack of clarity, overlap between regulators’ remits, inconsistency between the powers of regulators, and gaps in the current approach and existing regulation. While the Framework does not set out how these challenges will be addressed, it acknowledges that they will need to be considered.
The Framework also acknowledges the potential lack of uniformity in its context-driven approach, which it aims to address by implementing the cross-sectoral principles (see above).
6. Recourse and Penalties
As one of the cross-sectoral principles, regulators will be required to “implement proportionate measures to ensure the contestability of the outcome of the use of AI”. Given the Framework’s proposal to use existing regulators to regulate AI in their domains, it would follow that each regulator’s normal rights of recourse would be open to users and businesses, although this is not specifically stated in the Framework.
Penalty structures for breach of regulation have not yet been proposed.
Given the Framework’s “decentralised” approach both to regulation and the definition of AI, there is potential for uncertainty as to which body or bodies regulate a particular AI system, the power of the relevant regulatory regime and the possible penalties, rights of recourse, etc. This has been acknowledged in the Framework as one of the challenges to the proposals, and we would expect it to be addressed in due course.
It is possible that the UK regime, if its final form is true to the “decentralised” principles set out in the Framework, may be less clear than the EU regime by the nature of its fragmented arrangement.
7. Statutory Footing
According to the Framework, the cross-sectoral principles will be put on a non-statutory footing in the first instance. This is so that the UK government can “monitor, evaluate and if necessary update [its] approach and so that it remains agile enough to respond to the rapid pace of change in the way that AI impacts upon society”. That said, the Framework expressly does not exclude the possibility of legislation, for example, if it is necessary to grant new powers to, or ensure a coordinated and coherent approach between, regulators.
8. Regulator Powers and Coordination
The UK government may look to update the powers and remits of some regulators but, notably, it does not consider that “equal powers or uniformity of approach across all regulators” is necessary. The Framework acknowledges the need for regulatory “coordination” in order to avoid contradiction, and also the need to ensure that regulators have access to the right skills and expertise.
9. Prohibited AI
At this stage, the Framework does not specifically prohibit any forms of AI. The EU regime may offer an insight into prohibited uses that the UK government might adopt. The EU regime specifically prohibits certain uses of AI, for example subliminal manipulation of persons, and real-time remote biometric identification systems in publicly accessible spaces for law-enforcement purposes without an exempted reason.
Currently, in the UK, abusive use of AI/algorithms by a dominant company is prohibited by the abuse of dominance provisions in the Competition Act 1998, and the government is in the process of bringing forward digital markets legislation which would curb abusive use of, inter alia, AI and algorithms by firms with strategic market status.
10. Current Stage and Next Steps
The Framework currently only sets out proposals, and the UK government is encouraging stakeholder views on how the UK can best set the rules for regulating AI in a way that drives innovation and growth while also protecting citizens’ fundamental values.
Developers, users and other participants in the AI value chain should consider making their views known, and can do so by sending their contributions to [email protected]. The deadline for submission of views and evidence is 26 September 2022.
The UK government will consider how best to refine and implement its approach over the coming months, specifically in relation to the elements of the Framework itself, how to put those elements into practice, and how to monitor the Framework’s implementation.
The results of these considerations, and the next iteration of the Framework, will be set out in a White Paper which is due to be published in late 2022.
1 The Policy Paper does not define exactly what it means by “cross-sectoral” principles, but this is understood to mean principles that will apply to all AI regardless of what industry sector or setting it is used in.