Moravec's Paradox and self-driving cars

Moravec's Paradox:

...the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky, and others in the 1980s. As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."[1]

Linguist and cognitive scientist Steven Pinker considers this the most significant discovery uncovered by AI researchers. In his book The Language Instinct, he writes:

The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.

I thought of Moravec's Paradox when reading two recent articles about Google's self-driving cars, both by Lee Gomes.

The first:

Among other unsolved problems, Google has yet to drive in snow, and Urmson says safety concerns preclude testing during heavy rains. Nor has it tackled big, open parking lots or multilevel garages. The car’s video cameras detect the color of a traffic light; Urmson said his team is still working to prevent them from being blinded when the sun is directly behind a light. Despite progress handling road crews, “I could construct a construction zone that could befuddle the car,” Urmson says.

Pedestrians are detected simply as moving, column-shaped blurs of pixels—meaning, Urmson agrees, that the car wouldn’t be able to spot a police officer at the side of the road frantically waving for traffic to stop.

The car’s sensors can’t tell if a road obstacle is a rock or a crumpled piece of paper, so the car will try to drive around either. Urmson also says the car can’t detect potholes or spot an uncovered manhole if it isn’t coned off.

There's more in the piece on some of the engineering challenges still unsolved.

In his second piece, at Slate, Gomes notes something about how self-driving cars work that people often misunderstand (emphasis mine):

...the Google car was able to do so much more than its predecessors in large part because the company had the resources to do something no other robotic car research project ever could: develop an ingenious but extremely expensive mapping system. These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway.

That might not seem like such a tough job for the company that gave us Google Earth and Google Maps. But the maps necessary for the Google car are an order of magnitude more complicated. In fact, when I first wrote about the car for MIT Technology Review, Google admitted to me that the process it currently uses to make the maps is too inefficient to work in the country as a whole.

To create them, a dedicated vehicle outfitted with a bank of sensors first makes repeated passes scanning the roadway to be mapped. The data is then downloaded, with every square foot of the landscape pored over by both humans and computers to make sure that all-important real-world objects have been captured. This complete map gets loaded into the car's memory before a journey, and because it knows from the map about the location of many stationary objects, its computer—essentially a generic PC running Ubuntu Linux—can devote more of its energies to tracking moving objects, like other cars.

But the maps have problems, starting with the fact that the car can’t travel a single inch without one. Since maps are one of the engineering foundations of the Google car, before the company's vision for ubiquitous self-driving cars can be realized, all 4 million miles of U.S. public roads will need to be mapped, plus driveways, off-road trails, and everywhere else you'd ever want to take the car. So far, only a few thousand miles of road have gotten the treatment, most of them around the company's headquarters in Mountain View, California. The company frequently says that its car has driven more than 700,000 miles safely, but those are the same few thousand mapped miles, driven over and over again.

The common conception of self-driving cars is that they drive somewhat like humans do. That is, they look around at the road and make decisions based on what their camera eyes “see.” I didn't realize the cars depended so heavily on pre-loaded maps.
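To make the division of labor concrete, here's a toy sketch of the idea as I understand it from Gomes' description. None of this is Google's actual software; the map format, names, and matching logic are all my own invention, just to illustrate why a pre-built map of stationary objects frees the car's computer to concentrate on tracking whatever moves.

```python
# Toy illustration (my own sketch, not Google's code) of map-assisted perception:
# anything the pre-built map already accounts for can be ignored, so the live
# sensors only have to track what's left over -- presumably the moving stuff.

from math import dist

# Hypothetical prior map: exact positions of stationary objects,
# surveyed ahead of time by the dedicated mapping vehicle.
PRIOR_MAP = [
    {"kind": "stop_sign",   "pos": (12.0, 4.5)},
    {"kind": "streetlight", "pos": (30.2, 5.1)},
    {"kind": "crosswalk",   "pos": (45.0, 0.0)},
]

MATCH_RADIUS_M = 0.5  # tolerance for matching a live detection to a mapped object


def objects_to_track(detections):
    """Return only the detections the prior map can't explain."""
    unexplained = []
    for det in detections:
        known = any(dist(det["pos"], obj["pos"]) < MATCH_RADIUS_M
                    for obj in PRIOR_MAP)
        if not known:
            unexplained.append(det)  # likely moving: another car, a cyclist, etc.
    return unexplained


# One simulated sensor frame: the stop sign is re-detected, plus something new.
frame = [
    {"pos": (12.1, 4.4)},   # matches the mapped stop sign -> ignored
    {"pos": (20.0, 2.0)},   # not in the map -> handed off for tracking
]
print(objects_to_track(frame))  # [{'pos': (20.0, 2.0)}]
```

The flip side, as Gomes points out, is that this scheme only works where the map exists and is current, which is why the car "can't travel a single inch" without one.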

I'm still excited about self-driving technology, but my expectations for the near to medium term have become much more modest. I once imagined I could just sit in the backseat of a car and have it drive me to any destination while I futzed around on my phone. It seems clear now that for the foreseeable future, someone will always need to be in the driver's seat, ready to take over at a moment's notice, and the millions of hours of additional leisure time that might be returned to society are not going to materialize this way. We so often talk about self-driving cars as if they're an inevitability, but it may be that the last few problems in that space are the most difficult ones to surmount. Thus Moravec's Paradox: it's “difficult or impossible to give [computers] the skills of a one-year-old when it comes to perception and mobility.”

What about an alternative approach, one that pairs humans with computers, a formula for many of the best solutions at this stage in history? What if the car could be driven by a remote driver, almost like a drone?

We live in a country where the government assumes anyone past the age of 16 who passes a driver's test can drive for life. Having nearly been run over by many an elderly driver in Palo Alto during my commute to work, I'm not sure that's such a sound assumption. Still, if you're sitting in a car you don't have to drive yourself, you get that time back whether a computer does the driving or a remote pilot handles the wheel, as long as you get where you need to go.