What happens when the robots get it wrong?

This article is an extract from the Lawyers and Robots whitepaper.

As new technologies and automation start playing an ever more important role in the legal world, what are the risks in relation to negligence?

We spoke to Gary Lea, an academic at the Australian National University, and to Roger Brownsword, Kristjana Çaka and Professor Karen Yeung of King’s College London, about issues of liability.



Technological progress always carries a sense that change is happening at an unprecedented rate. In the case of automation, that notion seems unusually accurate. As robots and algorithms take over responsibility for more and more daily tasks, lawyers, policymakers and software engineers are having to give increasing attention to the implications of these developments.

A significant reason for the scrutiny, and ever a key question in law, is what happens when things go wrong. Automation and technology bring with them uncharted understandings of human agency, and with these new understandings of agency we are beginning to see the potential for new kinds of negligence.


Robot-watching and unlimited liability

A central concern in understanding negligence is where ultimate fault might be said to reside. Much thought is already being given to the idea that the buck could stop with the developer of the programme. While it might seem abstract to hold a programmer liable for the unintended consequences of a line of code, that sense of abstraction perhaps rests only on the intangible nature of code and software. We readily expect tangible goods and systems, such as cars or electronics, to be serviced and in full working order without personally understanding the intricacies of the software inside them. So could developers be held responsible for damage suffered by a law firm using their technology?

Furthermore, if developers are concerned about the consequences they face in the event of being found negligent, should legal professionals be similarly conscious of the ramifications if their oversight of AI is found wanting?

Karen Yeung is sceptical.

‘This is a question about proximity, and causation. My gut feeling at this stage is that it is unlikely, but that depends upon whether the damage meets the test of reasonable “foreseeability” in negligence—as with a firm being accused of overreliance on automation, it would depend very much on the facts.’

Artificial intelligence, however, is renowned for putting the subject of foreseeability onto very shaky ground indeed. The power of machine learning, and specifically its capacity to work through near-infinite scenarios and degrees of removal, makes it difficult to define precisely what constitutes ‘foreseeable’. A robot that trawls through data is equipped to discern patterns, connections, and thus foreseeability, that stretch human-scale understandings of negligence.

Gary Lea, an academic working on the regulatory impacts of AI, is pragmatic in his assessment of developer liability.

Awareness of the risk of negligence, and of its quite stark realities, is certainly affecting the way programmers do business. One programmer, who did not want to be named, suggested that he and others in the industry, who often work alone, in pairs or in small groups, are increasingly registering as limited companies and paying themselves through dividends rather than setting up as self-employed. The latter status is often seen as preferable for its flexibility, but it leaves them open to unlimited liability should a mishap affect the large and valuable businesses they routinely undertake work for.

Casting her eye forward, Kristjana Çaka assesses the difficulties at hand.

‘There are lots of conversations taking place, but identifying liability is always a sticky issue. Perhaps another path that might be explored is the idea of associating AI systems with a particular set of responsibilities.

‘Following this, it could well mean we can identify who is to be held responsible when a particular issue does arise. This line of thought raises various issues, such as the difference between a system and a product; whether, from a legal perspective, there should be a difference between the two; and whether we should, in some sense, personify such systems with responsibilities accordingly.’


Double standards

It is beyond doubt that automation and machine learning will affect standards of legal proficiency. Most obviously, this will be because of the need to ensure that automation can meet existing human standards, but its impact will also be felt in the opposite direction, on the premise that automation can be used not to replace but to enhance the competency of human lawyers.

Roger Brownsword, who sits on the Royal Society’s working party on machine learning, makes the point clearly:

‘There will be a need to figure out a workable legal approach if lawyers find themselves sued for professional negligence where—in the first instance—they are claimed to have over-relied on machine learning but also—in a second plausible scenario—where they are claimed to have under-relied on the AI that is available.’

As outlined in Richard and Daniel Susskind’s book, The Future of the Professions, we are set to see white-collar work redefined in ways that have already played out among the blue-collar workforce of the western world. If software does to law what mechanical automation has already done in factories, we are entering a phase in the history of employment in which even lawyers, no matter their skill, training and traditional social status, will face new pressures.
