Published: Sun, June 10, 2018
Industry | By Terrell Bush

Researchers Create A ‘Psychopath’ AI By Feeding It Reddit Captions

The results of Norman's inkblot tests are creepy; you can see everything Norman sees in the inkblots here.

Meet the most recent project in artificial intelligence from MIT: Norman, an AI that is also a psychopath. It poses no harm to humans; instead, it is the embodiment of an important ethical concern in the age of AI.

Norman was intentionally created that way, to demonstrate that the data used to teach an AI can significantly influence its behavior.

MIT used deep-learning algorithms to train the program to perform image captioning, which means it can scan any image presented to it and generate a caption describing it. As the team behind Norman puts it, "when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it".
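For readers curious what "training a program to caption images" looks like in practice, here is a minimal sketch of an encoder-decoder captioning model in PyTorch. The article does not describe Norman's actual architecture, so the CNN encoder, LSTM decoder, and every dimension below are illustrative assumptions, not MIT's implementation.

```python
# A minimal sketch of an encoder-decoder image-captioning model in PyTorch.
# Norman's real architecture is not described in the article; this generic
# CNN-encoder / LSTM-decoder design is an assumption for illustration only.
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Toy CNN encoder: maps a 3x64x64 image to a single feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # Prepend the image feature as the first "token" of the sequence,
        # then predict the caption word by word.
        feats = self.encoder(images).unsqueeze(1)        # (B, 1, E)
        words = self.embed(captions)                     # (B, T, E)
        seq = torch.cat([feats, words], dim=1)           # (B, T+1, E)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                          # (B, T+1, V)

model = CaptionModel(vocab_size=1000)
images = torch.randn(2, 3, 64, 64)          # dummy batch of images
captions = torch.randint(0, 1000, (2, 8))   # dummy caption token ids
logits = model(images, captions)
print(logits.shape)  # torch.Size([2, 9, 1000])
```

Nothing in this sketch is "psychopathic"; the captions such a model produces depend entirely on the image-caption pairs it is trained on, which is exactly the point the researchers were making.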

Scientists at MIT have revealed how they trained an artificial intelligence to become a "psychopath" by showing it only captions from disturbing images of gruesome deaths posted on Reddit.

They then made Norman and a regular image-captioning AI take Rorschach inkblot tests and compared their responses. The results were alarming. Where the standard AI saw a vase with flowers, Norman saw a man shot in front of his "screaming" wife. In another test, the standard AI answered with "a black and white photo of a small bird", while Norman responded with "man gets pulled into dough machine".

Norman's responses are framed around death and murder, meaning that the AI sees death and murder even in abstract inkblots that depict nothing at all.

This was not the first time MIT chose to explore the dark side of AI: in 2016 it created the "Nightmare Machine" for AI-generated scary imagery. Algorithms are sometimes blamed for tainting the internet, but the data is what matters.

At the end of the day, Norman's goal is to show the effect that biased training data can have on an AI's output.

As the team highlights, AI algorithms can see very different things in an image if trained on the wrong data set.
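A toy sketch can make that point concrete. The bigram text generator below is an illustrative assumption, not Norman's method: identical training code, fed a "neutral" caption corpus versus a "dark" one, completes the same prompt very differently.

```python
# Toy illustration (not Norman's actual method) of how the same algorithm
# produces different outputs when trained on different data: a bigram model
# trained on two caption corpora completes the same prompt differently.
import random
from collections import defaultdict

def train_bigrams(corpus):
    # Record, for each word, the words that follow it in the corpus.
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, start, length=6, seed=0):
    # Walk the bigram table from a start word, stopping at a dead end.
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = model.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

neutral = ["a man holds a red umbrella", "a man walks a small dog"]
dark = ["a man gets pulled into a machine", "a man gets shot in daylight"]

print(generate(train_bigrams(neutral), "a"))  # e.g. "a man holds a red umbrella"
print(generate(train_bigrams(dark), "a"))     # e.g. "a man gets shot in daylight"
```

The training code never changes between the two runs; only the data does, which is the bias mechanism the MIT team is highlighting.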
