UK Launches AI-Focused ‘Centre for Data Ethics + Innovation’

The UK has taken an important lead in creating the right regulatory and ethical landscape for AI technology to flourish by formally launching the Centre for Data Ethics and Innovation (CDEI). The move will likely be of interest both to lawyers who see new types of advisory work stemming from this area and to AI companies seeking to work with common ethical standards for data use.

The Government said that the centre, which will be an officially backed and taxpayer-funded entity, ‘will identify the measures needed to strengthen and improve the way data and AI are used…[by] promoting best practice and advising on how Government should address potential gaps in our regulatory landscape.’

‘The Centre will play a pioneering role in shaping how data and AI are used, now and in the future, and ensure that data and AI-driven innovations deliver maximum benefits for society,’ a spokesperson added.

The CDEI will be chaired by Roger Taylor, founder of the healthcare data firm Dr Foster (see below for the full list of the centre’s board).

This is another positive step by the UK Government to help create the right conditions for a healthy, ethical and fast-growing AI sector. Last week Artificial Lawyer covered the launch of a new and groundbreaking data trusts initiative that will also help to support the AI sector.

These moves run in parallel with work by the Law Society, whose ethics commission on AI and algorithms in the justice sector is headed by its current President, Christina Blacklaws, and with the work of the recently created LawTech Delivery Panel, backed by the Law Society and the Ministry of Justice, which is examining technology, the law and ethics, among other legal tech subjects. The Solicitors Regulation Authority (SRA) is also working to develop a legal AI and tech regulatory sandbox to support this wider movement in the legal world.

Overall, it’s fair to say that the UK is fully getting behind the AI industry, in part because the Government believes it will be an important driver for the economy. The UK is already seen as an important centre for AI research and development, with many top AI companies based here or already having a significant client base here. But the Government, the Law Society and others are also keen to ensure that the right regulatory framework is in place.

In the long run, it’s likely that the economies that see the healthiest and most sustainable growth in AI tech will be the ones with both the right scientific and research capabilities AND the right regulatory landscape.

One reason for the growth of the City of London’s global importance in finance and insurance is that investors and stakeholders believe the UK provides a secure legal environment with a transparent regulatory landscape.

The recent debacle involving Facebook and Cambridge Analytica, with the huge legal and reputational fallout that followed (and is still developing…), is proof that tech without a credible regulatory framework leads to damaging incidents for the businesses involved, for investors, and for public trust in the technology.

Hence the CDEI’s work is most welcome in the AI sector, and in the legal AI space as well.

Some of the key goals of the centre are:

The Centre’s role and objectives

  • The Centre will have a key role to play in strengthening our governance frameworks around data and data-driven technologies.
  • The Centre will play a role in convening, connecting and building on the work of existing institutions, and acting as the authoritative source of advice to government on the governance of data and AI.
  • The Centre should publish a strategic vision for how it proposes to operate with other organisations.
  • The Centre should seek to support both ethics and innovation as mutually supporting objectives.
  • The Centre will support the development of stronger ethical guidelines to provide the clarity and confidence that is needed to drive the growth of responsible innovation.
  • The Centre should also be mindful of potential conflicts of interest, and as such should look to engage different views to identify solutions that will deliver the greatest possible benefits to society as a whole.

The Centre’s activities and outputs

  • The Centre has a critical role to play in facilitating, shaping and informing public debates about how to use and regulate data and AI.
  • It is important to reiterate that the Centre is being established on an interim footing. This will allow the Government time to test the value and utility of the Centre’s functions ahead of the creation of a future statutory advisory body, and to identify how these might need to be expanded or adapted going forward.
  • The Centre will need to carefully prioritise its activities to ensure it is delivering the greatest possible value, with prioritisation reflecting the value, rationale and importance of projects.
  • The Centre has been commissioned to study the use of data in shaping people’s online experiences, and the potential for bias in decisions made using algorithms.
  • The chair of the Centre must agree its annual work programme with the Secretary of State for Digital, Culture, Media and Sport.

Here’s a short video about what the CDEI will be doing.

If you would like to get in contact with this new AI and data-focused Government agency, you can email them at: cdei@culture.gov.uk

The Centre’s board members have also been announced as:

  • Edwina Dunn, CEO of StarCount; founder and former CEO of Dunnhumby. Founder of the Female Lead
  • Professor Luciano Floridi, Professor of Philosophy and Ethics of Information at Oxford University. Director of the Digital Ethics Lab, Oxford Internet Institute, Chair of The Alan Turing Institute’s Data Ethics Group
  • Dame Patricia Hodgson, former Chair of Ofcom, Non-Executive Member of the Competition Commission
  • Dr Susan Liautaud, Public Policy School at Stanford University; Vice-Chair of Court of Governors, LSE; Founder of the Ethics Incubator
  • Baroness (Kate) Rock, Member of the House of Lords Select Committee on AI; Non-Executive Director of Keller Group
  • Rt Revd Dr Steven Croft, Bishop of Oxford; Member of the Lords Select Committee on Artificial Intelligence
  • Richard Sargeant, Chief Commercial Officer, ASI Data Science. Co-founder of Engineers Without Borders UK
  • Kriti Sharma, VP Artificial Intelligence at Sage Group; Forbes 30 Under 30 in Technology (2017); United Nations Young Leader in 2018
  • Dame Glenys Stacey (appointed February 2019), Her Majesty’s Chief Inspector of Probation. Chair, Farm Inspection and Regulation Review, former CEO of Ofqual
  • Dr Adrian Weller, Senior Fellow in Machine Learning at Cambridge University, Programme Director for AI at the Alan Turing Institute
  • Professor Lord (Robert) Winston, Professor of Science and Society at Imperial College London. Chairman of the Genesis Research Trust.