Science fiction, still: Hal in 2001: A Space Odyssey. Photograph: Allstar/MGM

Magical thinking about machine learning won’t bring the reality of AI any closer

John Naughton

Unchecked flaws in algorithms, and even the technology itself, should put a brake on the escalating use of big data

“Any sufficiently advanced technology,” wrote the sci-fi éminence grise Arthur C Clarke, “is indistinguishable from magic.” This quotation, endlessly recycled by tech boosters, is possibly the most pernicious utterance Clarke ever made because it encourages hypnotised wonderment and disables our critical faculties. For if something is “magic” then by definition it is inexplicable. There’s no point in asking questions about it; just accept it for what it is, lie back and suspend disbelief.

Currently, the technology that most attracts magical thinking is artificial intelligence (AI). Enthusiasts portray it as the most important thing since the invention of the wheel. Pessimists view it as an existential threat to humanity: the first “superintelligent” machine we build will be the beginning of the end for humankind; the only question thereafter will be whether smart machines will keep us as pets.

In both cases there seems to be an inverse correlation between the intensity of people’s convictions about AI and their actual knowledge of the technology. The experts seem calmly sanguine, while the boosters seem blissfully unaware that the artificial “intelligence” they extol is actually a relatively mundane combination of machine learning (ML) plus big data.

ML uses statistical techniques to give computers the ability to “learn” – ie use data to progressively improve performance on a specific task, without being explicitly programmed. A machine-learning system is a bundle of algorithms that take in torrents of data at one end and spit out inferences, correlations, recommendations and possibly even decisions at the other end. And the technology is already ubiquitous: virtually every interaction we have with Google, Amazon, Facebook, Netflix, Spotify et al is mediated by machine-learning systems. It’s even got to the point where one prominent AI guru, Andrew Ng, likens ML to electricity.
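To make the “data in, inferences out” idea concrete, here is a minimal sketch in Python – my own illustration, not anything built by the companies named above – assuming the NumPy and scikit-learn libraries are installed. It fits a simple statistical model to synthetic customer data and then makes predictions about cases it has never seen, with no rules explicitly programmed.

```python
# A minimal sketch of the "torrents of data in, inferences out" pattern.
# Assumes numpy and scikit-learn are available; the data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1,000 hypothetical customers described by two features
# (say, past purchases and browsing time), with a yes/no label to predict.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Learning" is just fitting statistical parameters to the training data.
model = LogisticRegression().fit(X_train, y_train)

# The other end of the pipeline: inferences about unseen customers.
print("held-out accuracy:", model.score(X_test, y_test))
```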

To many corporate executives, a machine that can learn more about their customers than they ever knew seems magical. Think, for example, of the moment Walmart discovered that among the things their US customers stocked up on before a hurricane warning – apart from the usual stuff – were beer and strawberry Pop-Tarts! Inevitably, corporate enthusiasm for the magical technology soon spread beyond supermarket stock-controllers to public authorities. Machine learning rapidly found its way into traffic forecasting, “predictive” policing (in which ML highlights areas where crime is “more likely”), decisions about prisoner parole, and so on. Among the rationales for this feeding frenzy are increased efficiency, better policing, more “objective” decision-making and, of course, providing more responsive public services.

This “mission creep” has not gone unnoticed. Critics have pointed out that the old computing adage “garbage in, garbage out” also applies to ML. If the data from which a machine “learns” is biased, then the outputs will reflect those biases. And this could become generalised: we may have created a technology that – however good it is at recommending films you might like – may actually morph into a powerful amplifier of social, economic and cultural inequalities.
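How “garbage in, garbage out” plays out in practice can be shown with a toy example – again my own sketch, using the same assumed Python libraries, not a real policing or parole system. If one group’s positive outcomes are under-recorded in the training data, the model faithfully learns and reproduces that gap.

```python
# A toy illustration of biased data producing biased outputs.
# Assumes numpy and scikit-learn; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, size=n)      # two groups, identical in reality
score = rng.normal(size=n)              # underlying merit, same distribution
truth = (score > 0).astype(int)         # the outcome we would like to predict

# Biased records: group 1's positive cases are logged only half the time.
label = truth.copy()
missed = (group == 1) & (truth == 1) & (rng.random(n) < 0.5)
label[missed] = 0

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted positive rate = {pred[group == g].mean():.2f}")
```

Although the two groups are identical by construction, the model predicts far fewer positives for the group whose outcomes were under-recorded – the bias in the inputs reappears, laundered as an “objective” output.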

In all of this sociopolitical criticism of ML, however, what has gone unchallenged is the idea that the technology itself is technically sound – in other words that any problematic outcomes it produces are, ultimately, down to flaws in the input data. But now it turns out that this comforting assumption may also be questionable. At the most recent Nips (Neural Information Processing Systems) conference – the huge annual gathering of ML experts – Ali Rahimi, one of the field’s acknowledged stars, lobbed an intellectual grenade into the audience. In a remarkable lecture he likened ML to medieval alchemy. Both fields worked to a certain extent – alchemists discovered metallurgy and glass-making; ML researchers have built machines that can beat human Go champions and identify objects from pictures. But just as alchemy lacked a scientific basis, so, argued Rahimi, does ML. Researchers, he claimed, often can’t explain the inner workings of their mathematical models: they lack a rigorous theoretical understanding of their tools and in that sense are currently operating in alchemical rather than scientific mode.

Does this matter? Emphatically yes. As Rahimi puts it: “We are building systems that govern healthcare and mediate our civic dialogue. We would influence elections. I would like to live in a society whose systems are built on top of verifiable, rigorous, thorough knowledge, and not on alchemy.”

Me too. We built what we like to call a civilisation on electricity. But at least we understood why and how it worked. If Rahimi is right, then we’re nowhere near that with AI – yet. So let’s take a break from magical thinking about it.

What I’m reading

Peter Thiel speaks
The billionaire entrepreneur may be a pain, but he’s an interesting one – this interview in Swiss magazine Die Weltwoche shows why.

Share, comment
Though under the cosh of public opinion at the moment, Facebook is, by any corporate criterion, a remarkably strong company. Ben Thompson explains why in an uncomfortable but persuasive essay.

Goop dreams
Know nothing about Gwyneth Paltrow and her Goop company? Me neither. But then I stumbled on Taffy Brodesser-Akner’s long New York Times article and was hooked: not on Paltrow; on the essay.
