Meet Norman, the world's first 'psychopathic' AI

While you see flowers, Norman sees gunfire.
Written by Charlie Osborne, Contributing Writer

Researchers at the Massachusetts Institute of Technology (MIT) have developed what is likely a world first -- a "psychopathic" artificial intelligence (AI).

The experiment draws on the Rorschach test, devised in 1921, which uses a person's perception of inkblots to assess personality traits -- including those deemed psychopathic -- and to detect what are known as thought disorders.

Norman is an AI experiment born from the test and "extended exposure to the darkest corners of Reddit," according to MIT, in order to explore how datasets and bias can influence the behavior and decision-making capabilities of artificial intelligence.

"When people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it," the researchers say. "The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set."

Norman is an AI system trained to perform image captioning, in which deep learning algorithms are used to generate a text description of an image.
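
As a rough idea of what such a system looks like in practice, the following minimal sketch uses the open-source Hugging Face transformers library and the publicly available nlpconnect/vit-gpt2-image-captioning model -- a stand-in chosen for illustration, not the actual models MIT used, which were not released:

    # A minimal image-captioning sketch. Assumes the Hugging Face
    # "transformers" library; the model named here is a public
    # stand-in, not MIT's Norman or its control network.
    from transformers import pipeline

    # Load a pretrained vision-encoder / text-decoder captioning model.
    captioner = pipeline("image-to-text",
                         model="nlpconnect/vit-gpt2-image-captioning")

    # Generate a text description of a local image file.
    result = captioner("inkblot.jpg")
    print(result[0]["generated_text"])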

However, Norman's training data was plundered from the depths of Reddit -- a select subreddit dedicated to graphic content, brimming with images of death and destruction -- making it far from what a standard AI would be exposed to.

In a prime example of artificial intelligence gone wrong, MIT then ran the Rorschach inkblot tests on Norman, with a standard image-captioning neural network used as a control subject for comparison.

The results are disturbing, to say the least.

In one inkblot test, a standard AI saw "a black and white photo of a red and white umbrella," while Norman saw "man gets electrocuted while attempting to cross busy street."

In another, the control AI described the inkblot as "a black and white photo of a small bird," while Norman described the image as "man gets pulled into dough machine."

(Images: inkblot tests with captions from the standard AI and from Norman. Source: MIT)

Due to ethical concerns, MIT introduced bias only through image captions taken from the subreddit, which were then matched with randomly generated inkblots. In other words, the researchers did not use actual images of people dying during the experiment.
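
In code, that pairing step might look something like the sketch below -- a hypothetical reconstruction from the description above, with invented function and file names:

    # Hedged sketch of the pairing described above: scraped captions
    # are assigned to randomly generated inkblot images, so no graphic
    # imagery is used. Names and paths here are hypothetical.
    import random

    def pair_captions_with_inkblots(captions, inkblot_paths, seed=42):
        rng = random.Random(seed)
        # Draw a random inkblot for each caption; an inkblot may repeat.
        return [(rng.choice(inkblot_paths), caption) for caption in captions]

    pairs = pair_captions_with_inkblots(
        ["man gets pulled into dough machine"],  # caption quoted in the article
        ["inkblot_001.png", "inkblot_002.png"],
    )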

The Norman experiment is an interesting application of AI that highlights the need for suitable datasets when training artificial intelligence systems and neural networks.

Without the right datasets providing a stable foundation for AI training, you cannot rely on the decisions an AI makes, nor on its perception of the world.
