

"Worth reading Superintelligence by Bostrom. Potentially more dangerous than nukes." - Elon Musk, August 3, 2014

Ultimately, the main concern is that the first machine to surpass human capabilities will be impossible to switch off. Speaking at a TED (technology, entertainment and design) conference last year, Bostrom hypothesized about why Neanderthals hadn't "flicked the off switch" on humans when we became the dominant species. "They certainly had reasons," Bostrom said. "The reason is that we are an intelligent adversary. We can anticipate threats and plan around them. But so could a super intelligent agent and it would be much better at that than we are."

The way to stop this from happening, according to Google, is a "big red button." Fittingly, this solution comes from the same company that Musk views as the biggest threat to humanity in terms of AI. Speaking at the Code Conference in California last week, Musk refused to mention Google explicitly by name, but alluded heavily to the fact that it was the "only one" he was worried about.

Google has invested heavily and worked hard to assemble some of the brightest minds in AI research, employing them to develop everything from self-driving cars to improved search algorithms. In 2014, Google acquired the London-based startup DeepMind for $500 million; the company has since gone on to be Google's AI flag bearer, making headlines for creating the first computer capable of beating a human champion at the board game Go.

It is researchers from Google DeepMind who have now put forward the idea of an off switch, in a peer-reviewed paper titled Safely Interruptible Agents. The paper outlines a framework for preventing advanced machines from ignoring turn-off commands and becoming out-of-control rogue agents. "If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions, harmful either for the agent or for the environment, and lead the agent into a safer situation," the paper states. "Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences."

In short, the methods set out in this paper could one day save the world, or at least humanity, from an AI apocalypse.
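To make the "big red button" idea concrete, here is a minimal sketch of an interruptible agent loop. This is purely illustrative and is not the algorithm from the DeepMind paper: the function names (`run_episode`, `interrupt_requested`) and the `safe_action` override are assumptions for the example. The paper's actual contribution concerns something subtler, namely ensuring the agent's learning is not distorted by interruptions, so that it neither seeks nor avoids the button.

```python
import random

def run_episode(policy, interrupt_requested, safe_action="stop", steps=10):
    """Illustrative sketch only: when a human operator requests an
    interruption, a safe action overrides whatever the policy chose."""
    history = []
    for t in range(steps):
        if interrupt_requested(t):
            action = safe_action  # the "big red button" overrides the agent
        else:
            action = policy(t)    # the agent acts normally
        history.append(action)
    return history

# Hypothetical usage: the operator presses the button from step 5 onward.
policy = lambda t: random.choice(["left", "right"])
pressed = lambda t: t >= 5
print(run_episode(policy, pressed))
```

Under this sketch, the last five actions are always `"stop"` regardless of the policy; the open problem the paper addresses is preventing a learning agent from treating such overrides as something to route around.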
