The 'Psychopath' AI We've Always Feared Actually Exists And It's Called Norman

What happens when you show an AI the darkest parts of the internet?

While Amazon’s Alexa, Google Assistant and Apple’s Siri are all working overtime to persuade us that AI can be our best friend, what would happen if we trained an AI to be the complete opposite?

Despite the scary implications, Pinar Yanardag, Manuel Cebrian and Iyad Rahwan from the Massachusetts Institute of Technology (MIT) set out to do just that.

Rather than create an AI that embodied the best of humanity, they created one with unfiltered access to the worst parts of the web. Instead of being fed millions of questions about homework and the weather, it was fed the darkest corners of Reddit. The result is Norman, an AI that sees not the best of humanity, but the worst.


Like most artificial intelligence programs designed today, Norman has one very specific job: to caption the images it sees. In much the same way that Google Lens can use AI to tell you what trainers someone’s wearing, Norman has been designed to do the same.

Unlike Google Lens, which is fed millions of harmless images of trainers, Norman was given the images and descriptions from a subreddit so infamous that the creators refused to name it.
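For readers curious what “captioning an image” actually involves, here is a minimal sketch using the open-source Hugging Face Transformers library and a publicly available captioning model. It is purely illustrative: the model choice and image file are stand-ins, and this is not the MIT team’s code.

```python
# Illustrative image-captioning sketch (not the MIT researchers' code).
# Requires: pip install transformers pillow torch
from transformers import pipeline

# Load a publicly available image-captioning model (example choice).
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# "shoes.jpg" is a placeholder path to any local photo.
result = captioner("shoes.jpg")
print(result[0]["generated_text"])  # e.g. "a pair of trainers on a wooden floor"
```

A model trained on everyday photos produces everyday captions like this; Norman’s training data pushed its captions in a very different direction.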

Once they’d fed Norman all the relevant information, they carried out a Rorschach test to see how the differences in its learning would affect its answers.

Unsurprisingly, there were huge differences. Where a standard AI sees a group of birds sitting on a tree, Norman sees “a man is electrocuted and catches to death.”


Now, before you throw up your arms and accuse the researchers of recklessness, Norman’s creation highlights an extremely important point about the development of artificial intelligence.

While Norman is an extreme example, it is the perfect showcase for what happens when bias enters the learning process. An AI learns only what it is fed, and if the humans that are feeding it are biased (consciously or not) then the results can be extremely problematic.

“When people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it,” the researchers explain.
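To make that point concrete, here is a toy sketch, with invented data and standard scikit-learn tools rather than anything from the Norman project, showing two copies of the same learning algorithm trained on the same captions but labelled by two different annotators. The same test sentence comes out “calm” from one model and “alarming” from the other; only the labels changed.

```python
# Toy demonstration: identical algorithm, differently labelled data,
# different behaviour. Data and labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

captions = [
    "birds sitting on a tree branch",
    "a person opening an umbrella",
    "a group of people standing together",
    "a hand holding a kitchen knife",
]

# Two annotators label the SAME captions differently.
annotator_a = ["calm", "calm", "calm", "alarming"]          # mostly benign reading
annotator_b = ["calm", "alarming", "alarming", "alarming"]  # skewed reading

model_a = make_pipeline(CountVectorizer(), MultinomialNB()).fit(captions, annotator_a)
model_b = make_pipeline(CountVectorizer(), MultinomialNB()).fit(captions, annotator_b)

test = ["people standing under an umbrella"]
print(model_a.predict(test))  # -> ['calm']
print(model_b.predict(test))  # -> ['alarming']
```

Nothing about the algorithm changed between the two models; the divergence comes entirely from the data they were shown, which is exactly the dynamic Norman was built to demonstrate.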

This isn’t the first AI experiment carried out by the team. In 2017, the three created another AI, called Deep Empathy, which explored the idea of increasing empathy for victims of far-away disasters by creating images that simulated the disaster back home.

Using a process called deep learning, Deep Empathy analysed the characteristics of Syrian neighbourhoods affected by the conflict and then simulated them over images of cities around the world.
