AI: Changing Rules for a Changed World

May 14, 2018

The world has changed, at least in that the significant advantages of AI for certain applications mean that AI seems here to stay. Like HAL 9000, AI seems likely to be put to the fullest possible use and to make some poor decisions along the way.

Legislative changes such as the GDPR will bring changes to the applicable rules in relation to AI-enabled gathering and processing of personal data. In other areas, such as AI-enabled automated vehicles, specialist legislation seems likely and advisable. Changes seem likely in the common law too. For example, common-law rules in relation to negligence and vicarious liability are of cross-sectoral application and will, doubtless, be updated and applied in the modern context by the courts.

Taking the example of the banking or FinTech sector, this article discusses AI and some of the legal rule changes we are currently experiencing and are likely to see in the future. [1]

Artificial Intelligence (AI)

Machine Learning algorithms learn from data (in the sense of categorisation, optimisation and statistical prediction) and feed into so-called AI applications that can deliver solutions in ways that mimic human cognitive skills. Examples include apps that recognise images, respond to customer insurance product queries or guide customers through mortgage application processes.[2] Machine Learning has been around for many years, and many AI tools use well-known statistical techniques, such as linear regression models. Continued increases in computing power make AI applications increasingly effective, cost-efficient and suitable for online delivery to end-users. As a result, AI applications are becoming increasingly prevalent in consumer-facing sectors such as banking and insurance.
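To make the point concrete, here is a minimal sketch, in Python, of a ‘model that learns from data’: an ordinary linear regression fitted to invented figures. The data, feature names and numbers are all illustrative assumptions, not any bank’s real model.

```python
# A minimal sketch: many 'AI' tools are well-known statistical techniques.
# Here a linear regression 'learns' a relationship between (invented)
# annual incomes and the credit limits historically granted, then predicts
# a limit for a new applicant. All figures are made up for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

income = np.array([[18], [24], [31], [42], [55], [67]])   # GBP 000s
limit = np.array([900, 1200, 1700, 2300, 3100, 3800])     # GBP

model = LinearRegression().fit(income, limit)
print(model.predict(np.array([[36]])))  # predicted limit for a new applicant
```

Greater computing power does not change the underlying statistics; it changes how many such models can be fitted, refreshed and served online to end-users.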

AI in banking

For most UK-based consumers, banking services have changed dramatically even in the past decade. The table below lists just a few changes you may have noticed since the olden days.

| 20th Century banking | 21st Century banking |
| --- | --- |
| Principally face-to-face interactions, in branch | Little face-to-face interaction, as interactions are mainly online or through cash machines |
| Days between payment instructions and receipt of funds | Faster payments and an expectation of immediacy |
| Decision maker is your bank manager, based on personal knowledge | Decisions by automation, using secret bank algorithms and AI-enabled credit scoring systems |
| Information on file in branch | Data kept electronically, frequently available online |
| Bonnie and Clyde take your savings at gunpoint | You lose your savings to hacking, phishing, vishing, smishing, pharming and ‘man in the middle’ attacks |

From a bank’s perspective, AI is game-changing technology. AI reduces the human labour required to interact with high volumes of customers across large portions of a bank’s operations. AI assists in customer interactions, from AI-enabled customisation of websites and content[3] and AI-enabled ‘chatbots’, through AI-based credit scoring, to helping meet a bank’s ‘know your customer’ or KYC obligations. The KYC process can be relatively labour-intensive, and AI tools can offer key efficiency advantages to a bank (for example, in matching photographs and documents, checking originality, and flagging individuals of concern for further human scrutiny, often by cross-referencing against social media and police registers of offenders). AI is also used in the FinTech sector to analyse the likely market impact of taking large trading positions, back-test trading models and optimise trade execution. Finally, we see increasing use of AI in surveillance and in fraud and hacking detection. Doubtless, there will also be increases in the use of AI by hackers.

AI-related litigation risks in banking

It is clear that the use of AI (as with any new technology) can give rise to new litigation risks.

Many are concerned that AI approaches to credit scoring could systematise unfairness and discrimination. For example, an AI system might repeat historical human bias where it ‘trains’ on an historical transaction dataset in which people with certain characteristics were generally offered bigger loans by bank staff than people without those characteristics. An AI system might also enable the processing and analysis of large amounts of data from disparate sources, such as social media messaging, to inform credit scores. This sounds helpful to the bank, improving the accuracy of credit scores, and it might also help customers, such as those with little credit history. But the approach may surface unexpected correlations, some useful and some potentially discriminatory, in the sense of enabling the identification of protected characteristics not explicitly disclosed by loan applicants and feeding these into decision-making (whether directly and expressly, or indirectly by correlation).

Yet further issues might arise where an AI system becomes set in particular routines, failing to identify new patterns, or where an AI ‘overtrains’ on its past experience and then misses a new event whose likelihood seems obvious to a human operator with awareness of human ingenuity. And, as with other IT systems, AI systems may have bugs. Each of these failings could lead to liability arising as between the bank and its customers, and as between the AI provider and the bank.
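To make the first concern concrete, the following is a minimal sketch, with entirely invented data, features and numbers (not any bank’s real system), of how a model trained on biased historical decisions can reproduce that bias through an innocuous-looking proxy variable, even though the protected characteristic is never an input:

```python
# Invented illustration of proxy discrimination in credit scoring.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

protected = rng.integers(0, 2, n)   # protected characteristic: hidden from the model
income = rng.normal(30, 8, n)       # annual income, GBP 000s (invented)
# A 'postcode group' that happens to correlate with the protected characteristic.
postcode_group = (protected + rng.integers(0, 2, n) > 0).astype(float)

# Historical labels: past staff approved fewer loans for the protected
# group, independently of income -- the bias we do NOT want learned.
approved = (income > 25) & (rng.random(n) > 0.15 + 0.25 * protected)

model = LogisticRegression().fit(np.column_stack([income, postcode_group]), approved)

# The approval gap persists: the postcode proxy lets the model
# reconstruct the historical discrimination it was never 'told' about.
pred = model.predict(np.column_stack([income, postcode_group]))
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[protected == g].mean():.2f}")
```

In this toy setup, predicted approval rates differ by protected status even though that status was never supplied to the model; the ‘unexpected correlation’ does the work.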

Naturally, AI providers are aware of these potential issues and seek to develop strategies to overcome them although, unfortunately, these strategies fall outside the scope of this article.

AI-related regulatory changes: GDPR

According to a recent tweet from one AI researcher, anonymised for the purposes of this article, ‘Starting May 25, the European Union will require algorithms to explain their output, making deep learning illegal’, presumably due to the concern that it will not be possible to explain deep-learning processes adequately. In our view, the concern is misplaced.

In brief, the GDPR requires[4] that individuals should not be subject to a decision based only on automated processing that is legally binding or which has a significant effect on them, unless the processing is necessary to enter into or perform a contract, the individual consents, or the processing is allowed by a national law with suitable safeguards to protect the individual’s rights, freedoms and legitimate interests. For all such decision-making, except that based on a law, the data subject must be informed of (i) the logic involved in the decision-making process, (ii) their right to human intervention, (iii) the potential consequences of the processing and (iv) their right to contest the decision reached.

It seems clear that many AI applications in banking would have a ‘significant effect’ on consumers, for example, approving or rejecting loan applications. In its online Q&A style guidance on this topic, the European Commission states ‘…processing can significantly affect an individual if it influences their personal circumstances, their behaviour or their choices (for example an automatic processing may lead to the refusal of an online credit application)’. However, it seems equally clear that much AI-related processing would fall within the allowable exceptions. For example, presumably most applicants for online credit would explicitly consent to the automated processing of their data. Moreover, it seems reasonable to presume that automated processing would be deemed necessary to handle such an online credit application and to enter into any resulting contract.

Yet further, there is more than one way of offering an explanation of AI-enabled decision-making. One type of explanation would be around the technical features of the AI system; another would be about the rationale for a particular individual decision.[5] For example, ‘Our system considers a number of factors, including your age. You need to be 18 years of age or older to qualify for a loan. As you are not over 18, your loan application has been automatically rejected’. This approach is, arguably, compliant with the requirement to provide information in relation to the logic involved in the decision-making process, and does so in a way that is relevant to that particular applicant in that particular transaction, rather than providing a generic description of the underlying AI system or processes. Importantly, it seems to us that the approach is more likely to satisfy the majority of online loan applicants than more technical explanations would. We note that this approach does, however, raise a concern that the provision of information in relation to decision factors or decision boundaries could allow reverse engineering of AI system parameters, enabling wrongdoers to ‘game’ automated systems.
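Purely by way of illustration, a minimal sketch of such a decision-specific explanation, with invented eligibility rules, thresholds and wording (no real lender’s criteria are implied):

```python
# Invented illustration: the applicant is told which factor caused the
# automated rejection, rather than how the underlying model works.
from dataclasses import dataclass

@dataclass
class Application:
    age: int
    annual_income: float  # GBP; an invented factor for this sketch

def decide_with_reason(app: Application) -> tuple[bool, str]:
    """Return (approved, applicant-facing explanation)."""
    if app.age < 18:
        return False, ("Our system considers a number of factors, including "
                       "your age. You need to be 18 years of age or older to "
                       "qualify for a loan. As you are not over 18, your loan "
                       "application has been automatically rejected.")
    if app.annual_income < 12_000:  # invented threshold
        return False, ("Applications are automatically rejected below a "
                       "minimum annual income of £12,000.")
    return True, "Your loan application has been approved."

approved, reason = decide_with_reason(Application(age=17, annual_income=20_000))
print(reason)
```

The second branch also shows the reverse-engineering concern in miniature: stating the £12,000 boundary tells a would-be gamer exactly what figure to declare.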

It is worth noting that the First Data Protection Principle, set out in Article 5(1)(a) of the GDPR, requires that all processing of personal data be fair, lawful and carried out in a transparent manner. The requirement of fairness means, among other things, that the method of personal data collection (itself a form of processing of personal data) must not be misleading, and the processing must be fair to all data subjects.[6] In an age of social media, it may be that automated collection of information on persons who are your social media connections is unfair to those persons, even if not unfair to you because you have explicitly opted in (Cambridge Analytica beware).

Obviously, the provisions of the GDPR are yet to be tested in the English courts. Whilst there may be some surprises along the way, it is to be hoped that judges will apply the provisions in a pragmatic and workable fashion.

Negligence: AI on trial?

In our example sector, banking, a bank owes its customer a general duty to exercise reasonable skill and care in and about executing the customer’s orders. The case of Barclays Bank plc v Quincecare Ltd and another [1992] 4 All ER 363 establishes that, at least in relation to company or organisational bank accounts, a bank should not execute a payment order from a customer if it is put on inquiry, in the sense that it has reasonable grounds (although not necessarily proof) for believing that the order is an attempt to misappropriate the funds of the payer.[7]

This is all well and good when considering the prototypical 20th century bowler-hat-wearing banker, and it may still hold true in high-value investment banking services delivered by human operators.[8] But what does this test for negligence mean for the 21st century, AI-enabled banking operation? Will we see situations where there is a ‘trial of the AI’, in the sense that the question becomes not what the fictional ‘reasonable and honest’ human banker would have known and done, but whether the bank was negligent in not having an industry-standard (or, under the GDPR, ‘appropriate’) AI system putting the bank on notice that, statistically, ‘there was a serious or real possibility, albeit not amounting to a probability’[9] that a payment instruction was not valid?
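Purely to illustrate the shape of that question, here is a minimal sketch, with an invented threshold and function name, of the kind of statistical flag such a system might raise:

```python
# Invented illustration: a model-derived probability is compared against a
# threshold marking where 'a serious or real possibility' begins. Where
# that threshold should sit is a question a court or regulator, rather
# than a data scientist, may ultimately have to answer.
def flag_for_review(p_invalid: float, threshold: float = 0.2) -> bool:
    """Return True if a payment instruction should be held for human scrutiny.

    p_invalid: the bank's model-estimated probability that the instruction
    is an attempt to misappropriate funds (however derived).
    """
    return p_invalid >= threshold

print(flag_for_review(0.35))  # True: hold the payment and notify a human
print(flag_for_review(0.05))  # False: execute as instructed
```

On the ‘trial of the AI’ framing, the negligence question might then be whether a bank without such a flag, or with the threshold set unreasonably high, fell below the standard of the reasonable AI-enabled bank.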

In addition to the adequacy or appropriateness of AI systems themselves, questions may arise over the extent to which humans are entitled to rely on AI systems in various contexts.

Vicarious liability: Sidestepping the GDPR?

Vicarious liability is another area where existing legal rules of cross-sectoral application are likely to bear on AI. It is a form of liability in which one party, without personal fault, is held responsible in law for wrongs committed by another. Vicarious liability is imposed most frequently in the context of a faultless employer and a wrong-doing employee. The doctrine applies also in relation to breaches of statutory obligations and equitable wrongs that sound in damages: see Majrowski v Guy’s and St Thomas’ NHS Trust [2006] UKHL 34.

In the context of online AI tools deployed to high volumes of consumers, there is naturally the potential for a high volume of breaches, for example where a rogue employee develops a rogue AI or deliberately introduces errors into an AI. A bank could, in principle, be vicariously liable for an employee’s breaches of data protection obligations sounding in Vidal-Hall[10] type damages for emotional distress.

In Various Claimants v Wm Morrisons Supermarket PLC [2017] EWHC 3113 (QB), a Morrisons employee published the personal data relating to 99,998 of Morrisons’ employees. After a very thorough review of the technical and organisational measures Morrisons had in place, the judge found he could not fault Morrisons, save in one non-causative respect. Morrisons were able to establish positive compliance with the statutory requirements as to their technical and organisational measures but were nevertheless found vicariously liable for the publication of the personal data. Applying this to the banking sector, industry leading AI and other IT systems may be of no protection against vicarious liability when a rogue employee extracts personal data. The Morrisons case is currently on appeal.

In some circumstances, consultants may be acting as employees, raising the possibility that several employers will all be vicariously liable for the acts or omissions of a rogue ‘employee’,[11] for example in delivering a complex AI-based solution.

AI-related litigation

Currently, leading players appear to be collaborating to pump-prime the AI market;[12] however, there may be major IP-related disputes once there is sufficient market value to litigate over. Historically, as previously nascent technologies have become established, intense and protracted legal disputes have arisen between competitors staking their claims to future value, including by intellectual property-related ‘land grabs’. This has been true across areas as diverse as aircraft,[13] sewing machines[14] and, more recently, mobile telephones.[15] We expect that the same will be true of AI, where a handful of players currently deliver most of the industry-leading solutions.

There will, naturally, be claims made by consumers against AI-enabled service providers. And, within the AI ‘industry’, there are also likely to be follow-on disputes between AI suppliers and their business customers over liability to end-user consumers. Many of these disputes will involve complex cross-jurisdictional issues. In the banking sector, these disputes are likely to be further complicated by the involvement of parties who are not subject to the same level of regulatory oversight as the banks.

We have mentioned, above, issues around the changing legal rules occasioned by the GDPR and other relevant legislation, and the questions that are likely to arise in tort-based claims. One tension already apparent in the banking sector is between the modern move towards greater transparency (including under the provisions of the GDPR) and the natural desire of financial organisations to keep their algorithms and processes secret. A credit score arising from an AI ‘black box’ could be genuinely difficult to describe to regulators and courts without disclosing critical trade secrets.

Yet further questions are likely around the meaning in practice of undefined terms in the GDPR. Taking just one example, ‘appropriate … measures’: it may be that what is appropriate in one context, say, a very high value banking transaction, may be more than is necessary in another context.

In addition to litigation around these questions, we expect further claims arising out of the alleged vicarious liability of service providers and their suppliers.

Conclusion

With AI here to stay, AI-driven changes to our legal rules and their application seem inevitable, as do disputes arising from AI applications deployed in a wide variety of contexts. Nonetheless, to paraphrase HAL 9000, we think that the mission to develop appropriate legal rules for the application of AI in FinTech and other industries can be approached with enthusiasm and confidence.

Terence Bergin QC and Quentin Tannock are Barristers at 4 Pump Court, a barristers’ chambers with expertise in areas including information technology, telecommunications and professional negligence.



[1] For those readers interested in impacts on the legal services sector, the opening keynote address at this year’s SCL annual conference is ‘Tomorrow’s Cyborg Lawyers: 2018 to 2045…’. And, for an example Cyborg Lawyer, see: https://en.wikipedia.org/wiki/HYPO_CBR.

[2] John C. Buyers’ 2017 article ‘Artificial Intelligence – the real legal issues’ for SCL (Computers & Law, Aug/Sept 2017, p 19) gives a good overview of practical applications of AI and discusses how attributing fault becomes more challenging the further decisions are removed from direct programming. The article is available at www.scl.org/articles/8966-artificial-intelligence-the-real-legal-issues.

[4] The GDPR Articles with particular relevance to AI are Articles 4(4), 13 to 15 and 22. Relevant to interpretation of these Articles are Recitals (71) and (72) and the Article 29 Working Party Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (WP 251).

[5] Wachter, Sandra, Mittelstadt, Brent and Russell, Chris, ‘Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR’ (6 October 2017), Harvard Journal of Law & Technology, available at SSRN: https://ssrn.com/abstract=3063289.

[6] See, for example, Innovations (Mail Order) Ltd v Data Protection Registrar [1993] UKIT DA92_31491, a case decided under the 1984 Act but still likely to be relevant.

[7] Quincecare (ibid) at 376.

[8] See, for example, Singularis Holdings Ltd (In Official Liquidation) v Daiwa Capital Markets Europe Ltd [2017] EWHC 257 (Ch) and the comments of Rose J at [171].

[9] Parker LJ in Lipkin Gorman at 1378 held that the question posed under the Quincecare test must be whether ‘…if a reasonable and honest banker knew of the relevant facts, he would have considered that there was a serious or real possibility, albeit not amounting to a probability…’ that the order is an attempt to misappropriate funds.

[10] Vidal-Hall v Google [2015] EWCA Civ 311.

[11] The Supreme Court said that the test is whether the employee ‘is so much a part of the work, business or organisation of both employers that it is just to make both employers answer for his negligence’: Various Claimants v Catholic Child Welfare Society [2012] UKSC 56.

[13] Herbert A. Johnson, ‘The Wright Patent Wars and Early American Aviation’, 69 J. Air L. & Com. 21 (2004).

[14] Allen Pusey, ‘Sept. 10, 1846: The Sewing Machine Patent War’, ABA Journal, September 2015, http://www.abajournal.com/magazine/article/sept._10_1846_the_sewing_machine_patent_war/.

[15] The so-called Apple v Samsung ‘patent war’, which possibly ended last year when the US Supreme Court ruled that it would not hear an appeal in the patent infringement dispute between the companies, which had been running since 2014.