Friday, 17 August, 2018

Norman is a psychopathic AI obsessed with murder, thanks to Reddit

Meet Norman the ‘psychopath’ AI trained on violent Reddit content
Cecil Davis | 10 June, 2018, 03:43

According to CNN, the objective behind Norman (named after Norman Bates from the Hitchcock film, Psycho) isn't to eventually destroy all of humanity, but rather to teach a lesson about how the conclusions an AI reaches depend greatly on the data it's given.

As the researchers explain: "Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior".
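
To see what the researchers mean in practice, here is a minimal, hypothetical sketch (in Python, and not the MIT team's code): the same toy bigram "captioner" is trained on two invented caption sets, one benign and one violent, and produces very different text purely because of the data it saw.

    import random
    from collections import defaultdict

    def train_bigrams(corpus):
        """Count word-to-next-word transitions across a list of captions."""
        table = defaultdict(list)
        for sentence in corpus:
            words = sentence.split()
            for a, b in zip(words, words[1:]):
                table[a].append(b)
        return table

    def generate(table, start, length=8):
        """Random-walk the bigram table from a start word to build a 'caption'."""
        words = [start]
        for _ in range(length):
            options = table.get(words[-1])
            if not options:
                break
            words.append(random.choice(options))
        return " ".join(words)

    # Invented stand-ins for a neutral caption corpus and a violent one.
    neutral = ["a man holds an umbrella", "a man sits with a small bird"]
    violent = ["a man is shot dead", "a man is pulled into a machine"]

    random.seed(0)
    print(generate(train_bigrams(neutral), "a"))  # e.g. "a man holds an umbrella"
    print(generate(train_bigrams(violent), "a"))  # e.g. "a man is shot dead"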

In order to test Norman's psychological state after his Reddit binge, the researchers used the Rorschach inkblot test, which they claim "is used to detect underlying thought disorders".


The Massachusetts Institute of Technology has created an AI that is psychopathic and creepy in equal measure. Named Norman after Norman Bates from Alfred Hitchcock's 1960 film Psycho, the oddly lively bot stares back at visitors, dares them to "explore what Norman sees", and invites the curious to find out for themselves just how dark and creepy Norman really is. As Newsweek reports, Norman responded to the testing differently from a more standard AI, seeing gory deaths where the other model saw everyday appliances or objects like umbrellas. The difference is that Norman's entire training came from a subreddit devoted to death.

Norman was set up to perform image captioning, in which a neural network generates a text description for an image it is shown. His responses were compared with those of an ordinary AI trained on the MSCOCO dataset, and you can see for yourself how twisted the results are. "So when we talk about AI algorithms being biased or unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it". Due to ethical concerns, the team exposed Norman only to image captions, not the actual death videos, but that didn't stop the bot from developing a deranged view of the world.
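
Off-the-shelf image captioning works along these lines; the sketch below assumes the Hugging Face transformers library and its public "nlpconnect/vit-gpt2-image-captioning" checkpoint, while "inkblot.png" and the "norman-like" checkpoint name in the comments are placeholders, not real MIT artifacts. The point it illustrates is the article's: swapping the training data behind the model, not the captioning architecture, is what changes the output.

    # A hedged sketch of image captioning: a neural network returns a text
    # description for an image. Requires the `transformers` library (plus a
    # vision backend such as torch); "inkblot.png" is a placeholder file path.
    from transformers import pipeline

    baseline = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

    # An MSCOCO-style baseline tends to return mundane descriptions,
    # e.g. something along the lines of "a black and white photo of an umbrella".
    print(baseline("inkblot.png")[0]["generated_text"])

    # Swapping only the checkpoint (i.e. the data behind it) is what would
    # change the tone of the caption; the name below is hypothetical:
    # norman_like = pipeline("image-to-text", model="some-org/norman-like-captioner")
    # print(norman_like("inkblot.png")[0]["generated_text"])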

"Since Norman only observed horrifying image captions, it sees death in whatever image it looks at", the developers said. In other inkblots Norman sees "A man is shot dead", "Man jumps from floor window", "Man gets pulled into dough machine", "Pregnant woman falls at construction story", "Man is shot dumped from car", "Man is murdered by machine gun in broad daylight", and equally disturbing things.