Human-level AI always 25 years away

Rodney Brooks says recent fears about malevolent, superintelligent AI are not warranted.

I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing between the very real recent advances in a particular aspect of AI and the enormity and complexity of building sentient volitional intelligence. Recent advances in deep machine learning let us teach our machines things like how to distinguish classes of inputs and to fit curves to time data. This lets our machines “know” whether an image is that of a cat or not, or “know” what is about to fail as the temperature increases in a particular sensor inside a jet engine. But this is only part of being intelligent, and Moore’s Law applied to this very real technical advance will not by itself bring about human-level or superhuman intelligence. While deep learning may come up with a category of things appearing in videos that correlates with cats, it doesn’t help much at all in “knowing” what catness is, as distinct from dogness, nor that those concepts are much more similar to each other than to salamanderness. And deep learning does not help in giving a machine “intent”, or any overarching goals or “wants”. Nor does it help a machine explain how it is that it “knows” something, what the implications of that knowledge are, when that knowledge might be applicable, or, counterfactually, what the consequences would be if that knowledge were false. Malevolent AI would need all these capabilities, and then some. Both an intent to do something and an understanding of human goals, motivations, and behaviors would be key to being evil towards humans.
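
To make the distinction concrete, here is a minimal sketch (in PyTorch; the model, names, and numbers are illustrative, not anything from Brooks or any particular system) of the kind of classifier deep learning gives us: a learned mapping from pixels to a single “cat vs. not cat” score, with no representation of what catness is, no goals, and no way to explain its own output.

```python
# Illustrative sketch only: a tiny image classifier of the kind Brooks describes.
# All it can do is map pixels to a score; nothing in it corresponds to intent,
# implications, or counterfactuals.
import torch
import torch.nn as nn

class CatClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # a single logit: "cat" vs. "not cat"

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))  # a probability, not an understanding

model = CatClassifier()
image = torch.rand(1, 3, 64, 64)   # a stand-in for an input image
print(model(image).item())          # e.g. 0.49: a score, and nothing about catness
```

Everything such a network “knows” lives in the weights of that curve-fit mapping, which is exactly the point of the distinction above.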

...

And there is a further category error that we may be making here: the intellectual shortcut that says computation and brains are the same thing. Maybe they are, but perhaps not.

In the 1930s Turing was inspired by how “human computers”, the people who did computations for physicists and ballistics experts alike, followed simple sets of rules while calculating, and from that produced the first models of abstract computation. In the 1940s McCulloch and Pitts used what was known about neurons and their axons and dendrites to come up with models of how computation could be implemented in hardware, with very, very abstract models of those neurons. Brains were the metaphor used to figure out how to do computation. Over the last 65 years those models have gotten flipped around, and people now use computers as the metaphor for brains. So much so that enormous resources are being devoted to “whole brain simulations”. I say show me a simulation of the brain of a simple worm that produces all its behaviors, and then I might start to believe that jumping to the big kahuna of simulating the cerebral cortex of a human has any chance at all of being successful in the next 50 years. And then only if we are extremely lucky.
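
For reference, the McCulloch-Pitts abstraction really is that spare. A hypothetical one-function reconstruction (not their original notation) captures essentially all of it: a weighted sum of binary inputs compared against a threshold.

```python
# A McCulloch-Pitts unit, essentially in full: weighted sum, then threshold.
# (Illustrative reconstruction of the 1943 abstraction, not their original formalism.)
def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of binary inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# An AND gate built from one such "neuron":
print(mcculloch_pitts_neuron([1, 1], [1, 1], threshold=2))  # -> 1
print(mcculloch_pitts_neuron([1, 0], [1, 1], threshold=2))  # -> 0
```

That a real neuron, with its axons, dendrites, and chemistry, was compressed to a few lines like these is a measure of how much the computational metaphor leaves out.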

In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, dexterous hands and/or other tools that could out-manipulate people, and a deep understanding of humans in order to outwit them. Each of these requires much harder innovations than a winged vehicle landing on a tree branch. It is going to take a lot of deep thought and hard work from thousands of scientists and engineers. And, most likely, centuries.

As Brooks notes, a study of 95 predictions made since 1950 of when human-level AI will be achieved found that they all place its arrival 15-25 years in the future. A quarter century seems to be the default time frame humans choose for predictions of technological advances that are plausible but not imminent.