On 28 June, the Government published its response to the House of Lords Select Committee’s report on the future of AI in the United Kingdom. The Committee had been appointed to consider the economic, ethical and social implications of advances in artificial intelligence and to make recommendations to the Government.

The Committee made numerous recommendations in its lengthy (181-page) report, focusing on five key themes:

  1. AI’s current and future impact on people’s everyday lives
  2. Potential opportunities presented by AI in the United Kingdom
  3. Risks associated with AI and how to avoid those risks
  4. How to engage the public in a responsible manner about AI
  5. Ethical issues associated with AI.

In its response, the Government emphasised the importance of AI, an area of “seismic global change to which the UK needs to respond to lead”. The Government’s reaction to the report was broadly positive: it accepted many of the Committee’s recommendations, although (unsurprisingly) it was generally unable to promise specific solutions.

What is “Artificial Intelligence”?

For the purposes of its report, the Committee adopted the following definition of “artificial intelligence”:

“Technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language… usually, [with] the capacity to learn or adapt to new experiences or stimuli”.

The report therefore focused on AI in a broad sense and not solely on machine learning, despite the latter’s current prominence as a subset of artificial intelligence.

Recommendations and responses

This article examines a (non-exhaustive) selection of Committee recommendations that are of particular interest from a legal perspective - legal liability, regulation, transparency, data and bias - and summarises the Government’s responses.

Legal liability

Whilst the Committee’s report identified a number of important issues, more questions were raised than answered on the tricky topic of legal liability. Understandably, the Committee did not reach a conclusion as to whether or how the law in the United Kingdom should be changed to allocate responsibility and liability when, inevitably, AI solutions go wrong and cause harm. There have been numerous recent high-profile examples of AI going wrong, such as the fatal crash of Uber’s self-driving car and Facebook’s AI-powered translation service that rendered “good morning” in Arabic as “attack them”, resulting in the arrest of a Palestinian construction worker by Israeli police.

The Committee’s report considered who should be responsible for harm caused by AI: the software developer, the person who validated the software, the company that used the AI, or the person who provided the data? The Committee was concerned that it is not clear how current legal principles should be applied. It considered the principles of reasonableness (relevant in the context of negligence and the duty of care, and also often a contractual measure of care and skill) and foreseeability (important in establishing liability in tort or under a contractual claim). Given the complexity of the algorithms involved and the unpredictability of the output of AI solutions, the principles of foreseeability and reasonableness entrenched in current law are likely to be difficult to apply.

However, despite the difficulty and potential for lengthy litigation, the current principles should still be capable of application. For example, although (almost by definition) the outcome of a complex AI algorithm cannot be predicted precisely, it should still be possible to determine whether the person who has built the solution has acted reasonably for the purposes of a contractual or tortious claim (e.g. by performing reasonable tests of the solution). And, although the specific harmful consequences in particular circumstances may not be foreseeable due to the complexity of the solution, the likelihood of harm should still be foreseeable.

Nonetheless, clarity in this area would be helpful and, as such, the Committee recommended that the Law Commission consider the issues further as soon as possible and propose any required changes to existing legislation. The Government welcomed this recommendation and promised that the Law Commission would be engaged to determine the best course of action.

Regulation

The Committee suggested that blanket AI-specific regulation is not appropriate at this time (partly because neither the Committee nor the witnesses it consulted had workable suggestions as to what AI-specific regulation would look like). Instead, it recommended that regulation be considered on a sector-by-sector basis by existing regulators and that gaps in current regulation be identified and filled. The close connection between AI and the GDPR, under the jurisdiction of the UK Information Commissioner, was cited as an example of this approach working well; aptly, as one of the key concerns with AI is that it could result in large sets of data being misused. However, work will need to be done to ensure that the Information Commissioner’s Office and other existing regulators are adequately resourced to cope with the additional burden this approach would create.

To encourage investment and growth in the sector, certainty as to the regulatory framework and the risks involved would be valuable. For AI to gain the public’s trust, it will be important that responsibility for its failings is clearly allocated or, if nobody is deemed legally responsible, that those who suffer harm are compensated. However, this should be balanced against the need to prevent over-regulation from stifling innovation and, in any event, given the current trajectory of AI growth, regulation may struggle to keep pace with development.

The Government agreed with the Committee’s recommendations and is establishing a working group to consider future regulation. It also offered support and funding for sector-specific regulation.

Transparency

To help build public trust in AI, the Committee recommended that consumers should be told if, and how, AI is used to make decisions about them: “it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take”. The Committee acknowledged that, at present, this will not always be possible (for example, when using deep neural networks often described as “black boxes”) and recommended that deployment of these solutions should be delayed until they can be explained. Again, the influence of the GDPR and one of its central tenets was clear: that an individual should be free to make an informed decision about how his or her data is processed.
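To make the notion of a decision-level explanation concrete, consider a minimal sketch: a simple linear scoring model whose output can be decomposed into per-feature contributions, in contrast to a deep neural network, which offers no comparably direct account of its decisions. The feature names, weights, threshold and applicant figures below are entirely hypothetical.

```python
# Minimal sketch of an explainable decision: a linear scoring model whose
# output decomposes into per-feature contributions. All feature names,
# weights, the threshold and the applicant's figures are hypothetical.

WEIGHTS = {"income": 0.4, "years_employed": 0.3, "missed_payments": -0.8}
THRESHOLD = 1.0  # hypothetical approval cut-off


def explain_decision(applicant: dict) -> None:
    # Each feature's contribution is simply weight * value, so the decision
    # can be reported alongside a full account of what drove it.
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "declined"
    print(f"Decision: {decision} (score {score:.2f} vs threshold {THRESHOLD})")
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name}: {value:+.2f}")


explain_decision({"income": 3.0, "years_employed": 2.0, "missed_payments": 1.0})
```

The “black box” problem the Committee describes is precisely that a deep neural network computing the same kind of decision admits no such simple decomposition.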

The Committee also recommended that, for some safety-critical solutions, technical transparency (i.e. the ability of an expert to understand how a system is built) should be imperative, even at the expense of the functionality of a system. Strong development was encouraged, but not at all costs.

However, the Government was reluctant to endorse the Committee’s recommendations wholeheartedly, being keen to avoid deterring the development and use of AI by over-emphasising the need for transparency. It pointed to the GDPR mechanisms that already give individuals protection and emphasised that the issue will be explored further by the new AI Council. For now, however, the Government’s view is that there should be no general regulatory requirement to inform the public of how and when AI will be used to make decisions about them.

Data

Access to data is a crucial driver of AI development. Deep learning solutions, in particular, are only as good as the data on which they are trained. As such, the Committee recommended that “wherever possible and appropriate, with regard to its potential commercial value, publicly-held data should be made available to AI researchers and developers”. The Committee raised concerns that large corporations (such as Google) will become dominant in the field of AI, limiting the opportunities of others as a result of exclusive access to, and the wealth to purchase, huge datasets.

The benefits of sharing NHS data were highlighted by the Committee, given the size of the dataset and the diversity of the population whose data is held. However, the commercial and legal ramifications will need to be considered further. At what price should public bodies provide datasets that will be used to develop solutions that are then sold back to those very same bodies? Even if a commercial rationale can be found, the legal basis on which data may be shared would also need to be established. The public would need to be satisfied as to the value of the disclosure and the manner in which the data could truly be anonymised.
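The question of whether data “could truly be anonymised” is itself partly technical. One baseline test often discussed is k-anonymity: every combination of quasi-identifying attributes (such as postcode district and age band) should be shared by at least k records, so that no individual stands out. A minimal sketch, using entirely hypothetical records:

```python
from collections import Counter


def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records (a baseline anonymity check)."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())


# Hypothetical patient records: postcode district and age band are
# quasi-identifiers that could be linked back to individuals.
records = [
    {"postcode": "SW1", "age_band": "30-39", "diagnosis": "A"},
    {"postcode": "SW1", "age_band": "30-39", "diagnosis": "B"},
    {"postcode": "N1", "age_band": "60-69", "diagnosis": "C"},
]

# False: the (N1, 60-69) combination appears only once, so that
# individual could be re-identified despite the absence of a name.
print(is_k_anonymous(records, ["postcode", "age_band"], k=2))
```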

In its response, the Government recognised the importance of data sharing and that it may be necessary to put in place measures to allow more equitable sharing of public data. It is considering data sharing frameworks (such as data trusts) and the mechanisms and terms on which data should be shared, but also pointed to the large volume of open data that is already published by Government departments. It concluded that more research is needed to understand the commercial value of public data and to decide whether the promise of transparency should be a condition of its release.

AI bias

One significant use of AI is to enable the instant and effective assessment of individuals to inform decision-making. For example, AI solutions can be used to help financial institutions make decisions about the creditworthiness of mortgage applicants or to help an employer make decisions about which job applicant to employ. However, the Committee highlighted the concern that AI solutions can be biased and perpetuate the prejudices of software programmers or the prejudices on which pre-existing datasets are built. Organisations relying on AI for decision-making purposes will need to be careful to avoid falling foul of anti-discrimination legislation.

In 2016, for example, “Beauty.AI” was developed to judge an international beauty contest based on objective factors such as facial symmetry. However, it was left facing accusations of discrimination as it favoured white contestants as a result of the predominantly white dataset that was used to build its algorithm.
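Skews of this kind can often be detected with very simple checks before a model is ever trained. As a minimal sketch (the group labels, sample counts and population shares are all hypothetical), one might compare each group’s share of the training data against its share of the wider population:

```python
from collections import Counter


def representation_gaps(samples, population_shares, tolerance=0.05):
    """Flag groups whose share of the training data deviates from their
    share of the wider population by more than the given tolerance."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in population_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = {"in_data": actual, "in_population": expected}
    return gaps


# Hypothetical: a training set that is 90% group A against a reference
# population that is 60% group A and 40% group B.
samples = ["A"] * 90 + ["B"] * 10
print(representation_gaps(samples, {"A": 0.60, "B": 0.40}))
# -> {'A': {'in_data': 0.9, 'in_population': 0.6},
#     'B': {'in_data': 0.1, 'in_population': 0.4}}
```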

The Committee stressed the importance of developing tools to ensure that datasets used to train AI systems are balanced and representative of the wider population, and recommended that diverse programming teams be encouraged. The Government recognised the issues in this area and promised to work to address them by raising awareness and ensuring the diversity of the AI workforce. However, it did not offer much in the way of firm commitments.

Next steps

The Government’s response now awaits debate. Given the moral and technical challenges presented, there is clearly much to be done to ensure that the United Kingdom remains at the forefront of AI development. Nonetheless, the Committee’s recommendations and the Government’s responses are a step in the right direction. Real action is now needed.