MIT Created a Psychopath AI to Demonstrate the Dangers of Bias
June 14, 2018 - 3 minute read

Scientists at the Massachusetts Institute of Technology (MIT) decided to make an artificial intelligence (AI) algorithm into a psychopath. Why? For science!
Specifically, the algorithm was built to demonstrate the dangers of introducing biased data into AI systems.
You Are What You Eat
MIT researchers Manuel Cebrian, Iyad Rahwan, and Pinar Yanardag trained the algorithm by exposing it to the shadiest corners of the social media site Reddit. Named Norman, after the famous character from the 1960 horror film Psycho, the AI was constantly fed morbid and macabre images scraped from the site’s most noxious subreddits (forums dedicated to specific topics on Reddit).
After this, the researchers tested the AI’s image captioning abilities with Rorschach inkblot tests. As its name implies, image captioning is a deep learning technique in which an AI produces a written description of an image. The goal of the experiment was to show that the data a machine learning algorithm is trained on can greatly influence its behavior. Or, as the researchers put it, “the culprit is often not the algorithm itself but the biased data that was fed into it.”
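The team has not published Norman’s training pipeline, but the underlying point, that the same algorithm diverges when fed different data, can be illustrated with a deliberately tiny sketch. Everything below (the captions, the labels, and the `train_captioner` helper) is invented for illustration; a real image-captioning system would be a deep neural network rather than a bag-of-words text classifier.

```python
# Illustrative sketch only -- not the MIT team's pipeline. Two copies of the
# same simple text classifier are trained on different caption datasets to
# show that the algorithm is identical and only the data changes the output.
# All captions, labels, and names below are invented for demonstration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def train_captioner(captions, labels):
    """Fit a toy model that maps a text description to a mood label."""
    vectorizer = CountVectorizer()
    model = MultinomialNB().fit(vectorizer.fit_transform(captions), labels)
    return vectorizer, model

# Same architecture, two very different training diets.
normal = train_captioner(
    ["a vase with flowers", "a bird on a branch", "an open umbrella", "a wedding cake"],
    ["benign", "benign", "benign", "benign"],
)
norman = train_captioner(
    ["a man being shot", "a fatal car accident", "a person electrocuted", "a vase with flowers"],
    ["violent", "violent", "violent", "benign"],
)

# The same ambiguous "inkblot" description gets read through each model's bias.
inkblot = ["a dark shape in the middle of the frame"]
for name, (vec, model) in [("normal", normal), ("norman", norman)]:
    print(name, "->", model.predict(vec.transform(inkblot))[0])
# normal -> benign, norman -> violent: same code, different data, different worldview.
```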
Violent Visions
The results of the experiment were disturbing, to put it mildly. Where normal AI would see benign objects like umbrellas or wedding cakes, Norman saw automobile fatalities and electrocutions. When a regular AI saw “a close up of a vase with flowers,” Norman observed “a man being shot dead.”
There was no end to Norman’s violent visions. Perhaps the most chilling insight the experiment offered was what it reflected about empathy: since Norman only ever saw negative imagery, it had no foundation for anything resembling an empathetic response.
Empathy is actually a sought-after attribute to instill in robots. Recently, Aldebaran Robotics’ “Pepper” robot made the headline rounds because of its ability to read emotions from a person’s facial expressions. While many experts argue that genuine machine empathy is still a long way off, Norman may shine a light on how to get there faster.
Striving for an Unbiased Perspective
At a recent event in New York, Andrew McAfee, co-director of MIT’s Initiative on the Digital Economy, delved into the topic of unbiased AI. He and the other panelists agreed that the only way to keep an AI unbiased is to ensure that its input data is accurate.
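What “ensuring the input data is accurate” can look like in practice is, at minimum, auditing a dataset before it ever reaches a model. The sketch below counts how labels are distributed in a hypothetical captions file; the file name and column names are assumptions for illustration, not anything the panelists described.

```python
# Illustrative only: a simple sanity check on a training set's label balance
# before it shapes a model's behavior. The file and column names are hypothetical.
from collections import Counter
import csv

def label_distribution(path, label_column):
    """Return the share of each label in a CSV of training examples."""
    with open(path, newline="", encoding="utf-8") as f:
        counts = Counter(row[label_column] for row in csv.DictReader(f))
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

# Example: if 95% of captions are labeled "violent", the model's worldview
# will be skewed long before any architecture choices come into play.
# print(label_distribution("captions.csv", "label"))
```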
The developers behind the experiment have declined to name the specific subreddits they used to train Norman. They did admit that Norman “suffered from extended exposure to the darkest corners of Reddit,” and that in the end it demonstrated “the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.”
Will unbiased AI ever truly exist? It seems there will always be a risk of twisted AI being created. What do you think? Let us know in the comments!