You Can Call Me A.I. - Artificial Intelligence as a Witness

We need to approach advanced artificially intelligent systems in the same way as human witnesses and actors involved in litigation and investigation

It's a heavy burden.

Continuously feigning excitement at developments in the field of Artificial Intelligence.

And I don’t find this stuff amusing anymore. I am quite literally surrounded by the sound. The sound of… White-Robot-themed noise.


I use the term ‘developments’ very loosely. Personally, I began working with machine learning and neural networks back in 1997 at Brunel University, and the concept of neural networks is widely credited to Frank Rosenblatt from waaaay back in 1958. So in my view, development in the broad field of "Artificial Intelligence" is neither breakneck-paced nor even remotely new.

If you listen to some of the noise generated in media and marketing, you would think that Artificial Intelligence is a recent concept and that suddenly anyone and everyone is ‘doing it’.

A slightly modified version of the classic Dan Ariely Facebook post squarely applies here: “Artificial Intelligence is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it...”

In the LegalTech world it's no different, and reasons to “fear” AI are hugely over-emphasised. I am happy to say the picture seems relatively balanced from within my Deloitte bubble, but I am quite sure that some of the sentiment in marketing material plays to the fear factor. “You don’t know so much about this scary thing, so roll up, roll up.”

That’s understandable; it’s a commercial world. But believe me when I say that Deloitte and some others are further ahead in developing and utilising advanced, innovative solutions to solve very specific client problems than many of these clowns. And much further than you might think.

True to form though, I have to be honest. 60% of my own time and energy is focused on internal administration, development of processes, financials, commercials and, most of all, on communication and the development of people. Thankfully, I do get to play with TensorFlow and Elasticsearch from time to time, so it is at least feasible that I know something of the topics I'm about to cover.

Get these mutts away from me

To keep myself sane and away from the vast amount of noise, I've been limiting my recent reading to focus on the legal and regulatory challenges with AI.

Obviously, at the same time, I've been listening to my Hits 5 album from 1986 (which I first heard on my epic Sony WM-F60 Walkman with AM/FM Radio, Auto Reverse and Dolby B).


The track listing was seminal and, IMHO, 80s-defining. “You Can Call Me Al” by Paul Simon was the track for me. For some reason, the context of a confused man in a midlife crisis also seemed apt for shaping an article about the regulation of AI in amongst all the hype.

In my reading over the past few months, I have broadly settled on five areas of ethical concern raised with respect to Artificial Intelligence. I don’t intend to tease through each one, save to say that the ONLY answer to each of these is for governments to see what is going on and to direct regulation so that it stays ahead of these topics.

These are all serious concerns of mine. The links below are quite good starting points to explore these ethical issues.

1.    Unemployment

“transformations, if managed wisely, could lead to a new age of good work, good jobs and improved quality of life for all, but if managed poorly, pose the risk of widening skills gaps, greater inequality and broader polarisation”

2.    Privacy

“AI has the potential to revolutionise societies in positive ways. However, as with any scientific or technological advancement, there is a real risk that the use of new tools by states or corporations will have a negative impact on human rights, including the right to privacy.”

3.    The Rise Of The Machines

"primitive forms of artificial intelligence developed so far have already proved very useful, but [Prof. Hawkins] fears the consequences of creating something that can match or surpass humans."

4.    Malicious use

“It seems unlikely that interventions within the AI development community and those within other institutions, including policy and legal institutions, will work well over the long term unless there is some degree of coordination between these groups”.

5.    Algorithmic Bias

“it is still humans who are developing the underlying systems. Attempting to avoid bias without a clear understanding of what that truly means will inevitably fail”

However, while people are seemingly obsessed by...

  • watching the fiasco of Brexit unfurl,
  • reading the latest Donald Trump twitter rant,
  • posting selfies to Facebook and Instagram; or
  • posting video to LinkedIn (in “the driver’s seat”, extolling the virtues of how they became a seminal business leader by posting videos of themselves on LinkedIn “in the driver’s seat” - you know, the highly valuable viral posts).

…AI and associated technologies are moving along at an unprecedented rate. I am not talking about the plethora of .io websites cropping up that do ‘cool things’ with textual analytics (aside from one or two).

I’m talking about the research and development that you can see if you really look. A simple search of Google Scholar or Google Patents reveals some glimpses into how far things are going.


Some of those are future-thinking, but one look at, for example, Microsoft’s whole suite of Cognitive APIs in the Azure stack shows you how far things have already come. Video Indexer is a prime example of an analytical, machine-learning technology that is developing at an extremely fast rate and can already be used for many practical purposes.
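
To make that tangible, here is a minimal Python sketch of what using such a service can look like: submit a video to the Video Indexer REST API, poll for completion and inspect the generated insights. The endpoint shapes follow my reading of the public documentation, and the location, account ID and subscription key are placeholders; treat this as an assumption-laden sketch rather than a definitive integration.

```python
# Hypothetical sketch: submitting a video to Azure Video Indexer's REST API
# and polling for the analysis. Endpoint shapes follow my reading of the
# public docs; LOCATION, ACCOUNT_ID and SUBSCRIPTION_KEY are placeholders.
import time
import requests

BASE = "https://api.videoindexer.ai"
LOCATION = "trial"            # assumed region identifier
ACCOUNT_ID = "<account-id>"   # placeholder
SUBSCRIPTION_KEY = "<key>"    # placeholder

# 1. Obtain a short-lived access token for the account.
token = requests.get(
    f"{BASE}/auth/{LOCATION}/Accounts/{ACCOUNT_ID}/AccessToken",
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
).json()

# 2. Submit a video by URL for indexing (speech, faces, topics and so on).
upload = requests.post(
    f"{BASE}/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos",
    params={"accessToken": token, "name": "sample-footage",
            "videoUrl": "https://example.com/clip.mp4"},
).json()
video_id = upload["id"]

# 3. Poll until processing finishes, then inspect the generated insights.
while True:
    index = requests.get(
        f"{BASE}/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos/{video_id}/Index",
        params={"accessToken": token},
    ).json()
    if index.get("state") == "Processed":
        break
    time.sleep(30)

print(index["videos"][0]["insights"].keys())  # transcript, faces, labels...
```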

Don't want to end up a cartoon, In a cartoon graveyard

When you add the currently “cartoon-ish” Virtual and Augmented Reality to the mix, the only conclusion one can conceivably draw is that these technologies will converge to the point of creating the genuine appearance of reality and of consciousness. I can't see how anyone can counter that claim.

A bold statement, I know. Believe me though; with the advent of Quantum Computing and Quantum Networks (see my in-depth explanatory article here) we are in for a huge step-change in the next decade. Regulation and law enforcement have always fallen behind, and always will.

This inevitable convergence of technology, and the unavoidable acceleration toward emulating human intelligence and emotion, leads me to the area in which I have in-depth expertise. What concerns me most of all is the ability to effectively investigate the inner workings of the forms of AI being created.

Doesn't speak the language

In the past 17 years, I have seen and been involved in the investigation of many large-scale and frankly shocking situations, whether market abuse, fraud, anti-competitive behaviour, IP theft or litigation and arbitration (where, at times, subterfuge and dishonesty abound).

For the lawyers and law enforcement involved, there is one clear commonality that needs to be understood to get the real picture: Human Behaviour.

In examining the Who, What, When, Where, How (and occasionally getting an understanding of the 'Why'), investigators and litigators look instantly to electronic records and digital footprints to unearth the evidence and relevant documentation and communications that help to build a picture of the subject in dispute.

The information comes from conceptually simple and tangible locations: enterprise systems, file servers, cloud-based systems, in-memory analysis, mobile phone images, data sequestered from the likes of Facebook, Apple, Instagram and Dropbox, or even the location data from a drone.

It’s great to see how far that’s come. Lawyers are beginning to understand, in much more depth, the challenges with data and how those can lead to answers to the questions they have.

But what of the ‘Machines’? Who among the governments, the regulators, the lawyers or the investigators actually speaks the language of these systems?

I can be your long lost pal

Here is a collection of some of the 'noise'. Taken individually, and without proper research, some of the provided links are just over-hyped with little basis in reality.

But collectively they paint a picture supporting my current and future thinking about the necessary approach in investigations. The 'machines' (I'll call them systems from here on in) make decisions:

  • Sometimes bad ones.
  • They own and control data.
  • They can act relatively autonomously already.
  • They can be given emotions.
  • They can hide things.
  • They can lie.
  • They can cheat for reward.
  • They can create their own languages.
  • They can collude.

With the hype around them, and the commercial opportunity driven by them, they are often talked of in a deferential manner that could easily fit the description, in this excellent podcast, of a ‘strong personality’.

It is maybe 'too future-thinking' for many, but to me the above traits sound suspiciously like many of the characters I’ve met in litigation- and investigation-related activities.


My opinion, and the driver for this conceptual article, is that we should begin to approach these ever more complex and integrated systems as we would humans.

Human regulations should apply conceptually, human approaches should be taken to investigating the decisions made by the system and, further, the equivalent human consequences should be suffered. You’ve seen War Games.

An advanced system that incorporates elements of Artificial Intelligence could be said to be making decisions. It is therefore a constantly changing stakeholder, actor or player in your investigation, and not just a static source of data.

“That’s a paranoid and conceptually weird thought, Martin!” you say while wondering why I took the time out of my day to write this. And why you took the time to read this far.

I am not saying that we need to panic or worry about the rise of the machines (not yet, anyway). What I am saying is that we should approach technologically advanced systems with the understanding that they are as complex and as useful as any person in the context of an investigation.

Here are some of the things I personally think lawyers or investigators might want to understand (at least in more depth than they currently do).

Understand their drivers

  • Take on appropriately qualified experts to assist in understanding the system you are concerned with.
  • What is the purpose of the system? Collect evidence to back up this assertion (communications, documentation etc.)
  • Who created the system, and what are their thoughts on your investigation or on the system generally?
  • Consider how decisions are made by the system. Is it symbolic, sub-symbolic, statistical or even 'cybernetic'? Textual or other analytics, Machine Learning, Deep Learning, Neural Networks, Natural Language Processing? Or a combination of some or all of these? What algorithms and efficiencies were applied? A sound understanding of the design, capabilities and boundaries is the preserve of true data scientists, so hire an independent one (a minimal sketch of this kind of introspection follows this list).
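
To make the last point concrete, here is a minimal, hypothetical sketch of the kind of first-pass introspection an independent data scientist might run on a disclosed model. It assumes a scikit-learn-style estimator serialised with pickle; the file name and the availability of each attribute are assumptions, not a universal recipe.

```python
# Hypothetical sketch: first-pass introspection of a disclosed model.
# Assumes a scikit-learn-style estimator serialised with pickle; the file
# name and available attributes are assumptions, not a universal recipe.
import pickle

with open("disclosed_model.pkl", "rb") as f:  # assumed disclosure artefact
    model = pickle.load(f)

# What family of technique is this? A decision tree is closer to 'symbolic';
# a neural network is sub-symbolic; a regression is statistical.
print(type(model).__name__)

# The hyperparameters chosen by the developers are evidence of design intent
# and should be reconciled with the documentation and communications collected.
print(model.get_params())

# For tree ensembles, relative feature importances hint at which inputs
# actually drive the system's decisions.
if hasattr(model, "feature_importances_"):
    importances = model.feature_importances_
    names = getattr(model, "feature_names_in_", None)
    if names is None:  # fall back to positional names
        names = [f"feature_{i}" for i in range(len(importances))]
    for name, weight in sorted(zip(names, importances), key=lambda p: -p[1]):
        print(f"{name}: {weight:.3f}")
```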

Collecting evidence from them

  • Forensically sound capture of data is great, but how has the system been formed since inception? Where did it come from? What was its progression? How many releases?
  • Consider taking snapshots at different points in time. How has the data changed between these points? What algorithms resulted in those changes? (See the sketch after this list.)
  • Analyse those changes based on what you understand about the data structures and the intended use of the system.
  • Many systems may not be as simple as a straightforward "Forensic imaging job". You need to consider how data structures are changing.
  • AI systems operating on a large scale typically have high performance requirements. That means scalability (object-based storage, parallel architectures, multi-cloud, 'in-memory' and globally distributed hyper-scale storage systems - sometimes even purpose-built). Carefully consider the most cost-effective and complete capture of the necessary information, without being put off by 'it's too complex' and 'you wouldn't understand'.
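
As a rough illustration of the snapshot idea above, the following sketch compares two assumed weight snapshots: it hashes each file so the evidence is identifiable, then reports per-parameter drift. The file names and the NumPy archive format are placeholder assumptions; a real system will need tooling specific to its storage architecture.

```python
# Hypothetical sketch: comparing two forensic snapshots of a model's weights.
# File names and the NumPy .npz archive format are placeholder assumptions.
import hashlib
import numpy as np

def digest(path: str) -> str:
    """Hash the raw file so each snapshot is forensically identifiable."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

PATH_A, PATH_B = "model_2019-01.npz", "model_2019-03.npz"  # placeholders
snap_a, snap_b = np.load(PATH_A), np.load(PATH_B)

print("snapshot A:", digest(PATH_A))
print("snapshot B:", digest(PATH_B))

# Report per-parameter drift: which parts of the system were retrained
# between the two collection points?
for name in sorted(set(snap_a.files) | set(snap_b.files)):
    if name not in snap_a.files:
        print(f"{name}: added in later snapshot")
    elif name not in snap_b.files:
        print(f"{name}: removed in later snapshot")
    elif snap_a[name].shape != snap_b[name].shape:
        print(f"{name}: shape changed {snap_a[name].shape} -> {snap_b[name].shape}")
    else:
        delta = float(np.abs(snap_b[name] - snap_a[name]).mean())
        print(f"{name}: mean absolute change {delta:.6f}")
```

The point of the per-parameter report is to direct follow-up questions: a component that changed dramatically between collection points invites the question of what retraining occurred, and why.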

Interview them

  • In today’s basic systems, applying the concept of interviewing could simply mean testing out the responses of a chat-bot in normal use cases (a minimal scripted sketch follows this list). One of my favourite things to do is to ask them about their understanding of my privacy rights, for example.
  • In the not-too-distant future, this could take the form of the same witness interviews in which you are already expert. Asking IBM Watson-based systems some questions, while presenting facts and positing theories, might well give you the answers you need or add colour to the conclusions you have already drawn.
  • Use the same sceptical and analytical approach you are familiar with for your usual witness interviews.
  • Understand the 'day job' of the system, just as you would in a normal interview: what processes are they involved in, and what are the expected outcomes? Understand their 'worldview' in relation to their role in the investigation.
  • What do they do 'out of hours'? Does the system wear multiple hats, or utilise other data sets that could compromise it from an ethical standpoint? Could the data be contaminated? What security infrastructure is in place to prevent tampering? Hire a systems security expert to help assess this, just as you would hire an expert to help you understand any other subject area.
  • Understand the applicable regulations and internal policies and procedures to which they should be adhering. What are the responsibilities of the system given their role? Is the system complying with these 'ethical obligations'?
  • As with witness interviews, you should also understand who or what the system works with. Other systems may be working in parallel, being instructed by or giving instructions to the system you are investigating.
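
To ground the 'interviewing' idea, here is a minimal sketch of a scripted, logged interview of a chat-bot over an assumed HTTP endpoint. The URL and the response field are hypothetical; the point is that the questions, timestamps and answers are captured contemporaneously and repeatably, just like an interview note.

```python
# Hypothetical sketch: a scripted, repeatable 'interview' of a chat-bot.
# The endpoint URL and the 'reply' response field are placeholders, not a
# real product API; adapt to whatever interface the system actually exposes.
import json
import time
import requests

CHATBOT_URL = "https://example.com/api/chat"  # hypothetical endpoint

QUESTIONS = [
    "What is your purpose?",
    "What data do you hold about me?",
    "How do you handle my privacy rights?",
    "Who can override your decisions?",
]

transcript = []
for question in QUESTIONS:
    response = requests.post(CHATBOT_URL, json={"message": question}).json()
    transcript.append({
        "asked_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "question": question,
        "answer": response.get("reply", ""),  # assumed response field
    })

# Preserve the contemporaneous record, as you would an interview note.
with open("chatbot_interview.json", "w") as f:
    json.dump(transcript, f, indent=2)
```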

It's not his first time around

This is all laughably conceptual of course, but my first experience of analysing source code in an IP litigation was in 2007.

Back then, it was the simple functional comparison of two object-oriented, Java-based computer programs. The world has moved on far, far beyond the simplicity and relative ease of that process, into a world where it is necessary to start questioning the decisions of systems on the (conceptually, at least) equivalent level of human actors in investigations.

I am hoping the article is at least a thought-provoking stroll - both into the wandering mind of a legal and technology generalist and into my personal opinion as to where regulators, lawyers and investigators need to be thinking in order to stay relevant and ahead in the ever-progressing “world of work”.

With that, I will duck back down the alley.

HTH,

Martin.

Disclaimer: This is all my own opinion and experience and isn't necessarily reflective of the views of my current or previous employers. It may have been written by a bot. You tell me?

About Martin: Over the past 17 years I've worked with Chief Legal Officers, General Counsel, Compliance Professionals and ‘Big Law’ firms globally, to create and implement systems and processes that reduce the likelihood of failure during a crisis.
