The profound implications of new technology demonstrate that regulations are needed
By Robyn Mak
HONG KONG (Reuters Breakingviews) - Artificial intelligence doesn't hate you, prominent researcher Eliezer Yudkowsky wrote, "nor does it love you, but you are made of atoms which it can use for something else". The quote sets the scene for, and lends its title to, Tom Chivers' fascinating new book on why so-called superintelligence should be viewed as an existential threat potentially greater than nuclear weapons or climate change.
The "strange, irascible and brilliant" Yudkowsky is a central figure throughout the book. His early musings on the potential and dangers of artificial intelligence during the mid- to late-2000s gave birth to the Rationalist movement, a loose community dedicated to AI safety.
Chivers, a former science journalist with BuzzFeed and the Telegraph, offers a meticulously researched investigation into who the Rationalists are and, more importantly, why they believe humanity is fast approaching an inflection point between "extinction and godhood".
Interviews with leading Rationalists pepper the book. The movement is not just a collective of online nerds: its ranks include Silicon Valley executives, engineers, economists and other prominent academics. Yudkowsky was less cooperative: he "agreed only to answer technical questions, via email".
Nonetheless, Chivers delves into the concerns surrounding artificial general intelligence: "a computer that can do all the mental tasks that we can". The risk is not that machines will achieve human-level intelligence soon, but that once they do, they will quickly reach superintelligence, surpassing us in every field.
One example is AlphaGo, the program developed by Alphabet-owned DeepMind to play the board game Go. It took roughly a year to make the leap from amateur level to beating the world's top player. An upgraded version, AlphaGo Zero, achieved the same feat within days of being switched on. The term singularity broadly refers to this sort of runaway process: "When intelligent systems start improving themselves fast enough, our usual ways of predicting the future... break down".
Optimists like SoftBank founder Masayoshi Son are doing everything they can to bring about the singularity, which they believe could be the greatest thing ever to happen to mankind. The Japanese company and its Saudi-backed Vision Fund are pouring tens of billions of dollars into robotics, semiconductors and autonomous driving.
Others, led by Yudkowsky and the Rationalists, are sounding the alarm. In one survey of some 150 AI researchers, respondents on average reckoned there is a 90 per cent chance that human-level intelligence will arrive by 2075. The same survey revealed that 18 per cent of respondents believe human extinction will follow.
To understand why so many are concerned, including the founders of DeepMind and tech entrepreneurs like Elon Musk, Chivers explores several thought experiments. One involves a machine programmed to make paper clips. Its end goal eventually becomes "a solar system in which every single atom has been turned into either paper clips, paper-clip-manufacturing machines, computers that think about how best to manufacture paper clips..." and so on.
The point is that even "thoroughly innocuous-seeming goals could be an existential threat". And it is extremely difficult to predict what can go wrong. Chivers raises real examples, including an AI tasked with winning an online Tic-Tac-Toe tournament. The program quickly learned to make moves billions of squares away from the board, forcing the opposing algorithm to crash and thereby winning by default. Another, meant to replicate a set of text files as closely as possible, ended up deleting the files and turning in blank copies to earn perfect scores.
At times, the book meanders into bizarre discussions. Chivers devotes an entire chapter to whether the Rationalists are a sex cult (he thinks not). The movement's schisms, its ties to the alt-right and its views on feminism are other diversions.
The book is also frustratingly light on what exactly is being done to mitigate the dangers of AI. To be fair, that's probably because the answer is: not much. Chivers mentions efforts to build up the field of AI safety by training engineers and programmers and funding research. Yudkowsky is currently a researcher at the Machine Intelligence Research Institute, which he founded. Other organisations include OpenAI, co-founded by Elon Musk, and the Centre for the Study of Existential Risk at the University of Cambridge.
Companies like Google and SoftBank, as well as governments, are conspicuous by their absence. Yet even if an AI apocalypse seems too far-fetched for policymakers, the technology's profound implications for the military, surveillance and society show why regulation is needed. As Chivers demonstrates, AI does not need to wipe out mankind to wreak havoc.