What’s artificial about ethical AI in the law? Everything


A guest post by Andy Neill, senior product manager at HighQ

Neill: Firms should put an ethics panel in place

With so many companies and people around the world relying on artificial intelligence, considering the implications of how AI programs can act and affect outcomes (positively and negatively) is too important to ignore. And people, not technology, have the answers.

Discussions will continue on what role ethics has in the deployment of AI, but some guidance already exists.

In 2017, the House of Lords appointed a select committee on AI, whose role is to consider the economic, ethical and social implications of advances in AI.

As part of its 2017-2019 session report, the committee suggested the following AI code:

  1. AI should be developed for the common good and benefit of humanity.
  2. AI should operate on principles of intelligibility and fairness.
  3. AI should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in AI.

This code is similar to Isaac Asimov’s laws of robotics, written in 1942. Seventy-six years on, narrow AI now actually exists and has progressed further than could have been imagined back then.

How does ethical AI impact the legal industry?

A fundamental part of the law focuses on justice and ethics.

At a recent Law Society event, Professor Richard Susskind recommended that AI practitioners become familiar with the study of ethics, as he did as a law student.

He suggested that you cannot look to regulation, to the letter of the law, to tell you what is ethical and what is not; you can only use the law to find out what is ‘legal’ and what is not.

There’s also a question of morality, or whether you ‘should’ do something, compared to whether it is permitted.

When designing a legal AI system, outcomes must be described in terms of features — what the system will do in response to an action. In any system, unexpected outputs are treated as bugs or undesirable outcomes.

In regular programming, you then review the code to find the logical error a human has made and change it, so the bug is fixed. In AI, specifically machine learning, the system has derived its own logic from the training data, so there is no human-written logic to correct directly.

So with legal AI, we have to either restrict the techniques used to build artificially intelligent programs to those that can be interrogated to understand their reasoning, or restrict the domain we’re working on to one where there is significant human oversight.
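To make the first option concrete, here is a minimal sketch of an ‘interrogable’ model, assuming scikit-learn is available; the feature names, claims data and labels are hypothetical and purely illustrative. A shallow decision tree keeps the learned logic small enough for a human reviewer to read back:

```python
# A minimal sketch, assuming scikit-learn. The feature names, claims data
# and labels below are hypothetical and purely illustrative.
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["claim_value", "days_to_report", "prior_claims"]

# Toy stand-ins for historical claims; labels: 0 = settle, 1 = contest.
X = [
    [1200, 2, 0],
    [9500, 45, 3],
    [800, 1, 0],
    [15000, 60, 5],
]
y = [0, 1, 0, 1]

# A shallow tree keeps the learned logic small enough to be read and
# challenged by a human reviewer, rather than hidden in opaque weights.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Print the rules the system has actually learned, so they can be reviewed
# (for example, by an ethics panel) before the system is put into use.
print(export_text(model, feature_names=feature_names))
```

Whether a decision tree or some other technique is appropriate will depend on the matter at hand; the point is simply that the reasoning can be printed out and questioned.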

In practical terms, either route means focusing on the extremes in any given case, not on average or typical inputs, when trying to find bugs, because an ethical bug is most likely to appear at the margins.

How can we ensure that legal AI is ethical?

The test criteria for AI systems dealing with legal matters should be explicitly defined to include scenarios revolving around the marginalised in society.

Law firms dealing with volume insurance claims are a prime example. Suppose that, to make its processes as efficient as possible, such a firm creates an AI system that automatically classifies claims as ‘settle’ or ‘contest’.

The system is trained on thousands of previous examples, and the firm is confident that it can predict which category a claim should sit in.

Before going live, the firm must ensure that its newly formed AI has not accidentally learned the wrong lessons, which would mean putting an unethical AI agent in charge of real human dilemmas. The issue could be that the AI has found hidden metadata, or patterns, in the training data, so that it reaches the right outcome but for the wrong reasons.

For example, let’s say that the AI system learned that cases from particular postcodes should always be contested, rather than settled.
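One way to surface that kind of hidden dependence before go-live is to measure how much the model leans on each input, for example with permutation importance. The sketch below assumes scikit-learn; the dataset, column names and ‘postcode_id’ encoding are hypothetical, and the label is deliberately constructed to depend on postcode so the check has something to find:

```python
# A sketch of checking whether a claims classifier has quietly learned a
# postcode rule. All data and column names here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Toy stand-in for historical claims: columns are
# [claim_value, days_to_report, postcode_id].
X = rng.normal(size=(500, 3))
# Deliberately make the 'settle'/'contest' label depend on postcode alone,
# mimicking the unwanted pattern described above.
y = (X[:, 2] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one column at a time and measure how much
# the model's score drops. A large drop means the model relies on that column.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["claim_value", "days_to_report", "postcode_id"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

If ‘postcode_id’ dominates the output of a check like this, the agent has learned a proxy rule of exactly the kind described above and should go back for review rather than into production.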

To avoid this, before the AI agent is created, the firm should ask its ethics panel (a management panel with oversight and the authority to approve) to review the terms of the AI agent and the constraints that should be placed on it.

If there isn’t an ethics panel in place, the Law Society recommends that one be set up. It would review proposed AI solutions, discuss their dimensions and approve them if they meet ethical standards.

Following the creation of the AI agent, there should be a round of ethical testing, alongside functional testing, to ensure that the AI has not learned the wrong lessons. Humans need to think about the unethical outcomes the newly created AI could produce, and test to ensure that ethical boundaries are not being breached.

This is a prime role for lawyers, who are trained in ethics, not for AI designers and engineers, who are trained in computer science, maths and physics.

The aim is to uncover unethical behaviour by the AI and to try to trip it up by giving it deliberately biased data, to see whether it comes up with biased outcomes. By retrospectively uncovering the rules that the machine learning algorithm has internalised, you can make sure that it is paying attention to the correct facets, and not to biased or unethical features, such as a claimant’s race or gender.
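As a sketch of what one such test could look like in code, a simple probe is a counterfactual flip: change only a protected attribute and check that the decision stays the same. The model object, the test claims and the idea that a particular column encodes a protected attribute are all assumptions here:

```python
# A hedged sketch of a counterfactual 'flip' test. The model, the test data
# and the column said to encode a protected attribute are all hypothetical
# assumptions for illustration.
import numpy as np

def counterfactual_flip_rate(model, X, protected_col, values=(0, 1)):
    """Fraction of cases whose prediction changes when only the protected
    attribute is swapped. Anything above zero needs human investigation."""
    X_orig = np.asarray(X, dtype=float)
    X_flipped = X_orig.copy()
    # Swap the two encoded values of the protected attribute in every row.
    X_flipped[:, protected_col] = np.where(
        X_orig[:, protected_col] == values[0], values[1], values[0]
    )
    changed = model.predict(X_orig) != model.predict(X_flipped)
    return float(changed.mean())

# Example usage (hypothetical names):
# flip_rate = counterfactual_flip_rate(claims_model, test_claims, protected_col=3)
# assert flip_rate == 0.0, "the decision depends on a protected attribute"
```

A zero flip rate is not proof of fairness on its own, but a non-zero one is a clear ethical bug to put in front of the panel.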

Lawyers are trained in, and have studied, ethics. They must be involved if we are to succeed in creating ethical AI programs.



