Why do we assume everyone can drive competently?

A Mountain View resident and reader of the Emerging Technologies Blog writes in about the Google self-driving cars they see regularly on the road. It's well worth a read.

Anyway, here we go: Other drivers don't even blink when they see one. Neither do pedestrians - there's no "fear" from the general public about crashing or getting run over, at least not as far as I can tell.

Google cars drive like your grandma - they're never the first off the line at a stop light, they don't accelerate quickly, they don't speed, and they never take any chances with lane changes (cut people off, etc.).

During the years I was commuting down to Palo Alto or Menlo Park from San Francisco, I often took the 280 instead of the 101. It wasn't necessarily faster, but it was more scenic, and the traffic tended to move more quickly even if my route ended up longer. On the 280 I often saw the Google Lexus self-driving cars. After the novelty of seeing them wore off, I didn't think of them any differently than any other car on the road. If you had told me ten years ago that I would one day drive on the freeway alongside a car piloted by a computer, I would've thought you were describing something out of Blade Runner. Now we speak of the technology as an eventuality. Such is life in technology, where we speak with the certainty of Moore's Law.

I cycle a lot in San Francisco, and this weekend, on the way back from a ride across the Golden Gate Bridge, I was cruising in the bike lane along the Embarcadero towards SOMA, next to deadlocked traffic. Without any warning or a turn signal, a car in the right lane suddenly cut into the bike lane in front of me, apparently deciding to horn in and wait for a curbside parking spot. Road bikes have weak brakes, and regardless, there was no time to stop. I screamed reflexively, adrenaline spiking, leaned my bike right, just barely missing the car's right front fender, then leaned hard left and managed to angle my bike and body around the left rear fender of the car parked along the curb.

For the next two blocks, I replayed the near collision in my head on a loop, like a Vine, angry at the driver's reckless maneuver and relieved as I tallied up the likely severity of the injuries I had just escaped by less than a foot of clearance. Unfortunately, this is not an unusual occurrence. When I bike, I just assume that drivers will suddenly turn right in front of me without signaling or looking back to see if I'm coming up the bike lane on their right. It happens all the time. It's not just a question of skill but of mental obliviousness. American drivers have had the road to themselves for so long that they feel no need to consider that anyone else might be laying claim to any piece of it. Though the roads in Europe are often narrower, I feel a hundred times safer biking there than I do in the U.S.

All that's to say I agree wholeheartedly with the writer quoted above that self-driving cars are much less threatening than cars driven by humans. As an avid cyclist especially, I can think of nothing that would ease my mind more when biking through the city than replacing every car on the road with a self-driving one.

We have years and years of data on the human ability to drive cars, and if we've learned anything, it's that humans are lousy drivers. For some reason (I don't know the history behind it), we've assumed that just about every person is qualified to drive a car. I can't remember the last time I met someone who had failed their driving test or who didn't have a driver's license. I can think of plenty of times I've met people I'd be scared to see behind the wheel.

Humans drink and drive. Humans text on their phones when they should be looking at the road. Humans, especially Americans, with their great sense of entitlement behind the wheel, pull aggressive maneuvers in fits of road rage or drive at racetrack speeds on public roads. They run red lights, cut other drivers off, tailgate, drag race, and so on. You regularly hear that cars have killed more people in the U.S. than just about anything else for the better part of a century now. The less people drive, as happens when gasoline prices rise, the fewer people die.

Why do we assume driving is something everyone is not only capable of but skilled enough to execute at a level that doesn't endanger others? I'd venture that it's no easier to learn to drive a car than to cut hair, yet our test for a driver's license is far easier, even though the worst a bad stylist can do is give you a bad haircut while the worst a lousy driver can do is kill other human beings. Perhaps our romance with the American road runs so deep, our conception of American freedom is so intimately tied to hopping in a car and taking ourselves anywhere, that the thirty to forty thousand traffic fatalities each year are seen as an acceptable error rate.

Whatever the reason, and whether you agree or disagree, it is unlikely to change in the U.S. unless something comes along to force a serious reevaluation of our assumptions about driving quality. That something just might be self-driving cars, which will be held to a much higher standard of driving safety than we've held humans to all these years. You might say self-driving cars will be held to the standard we should've held ourselves to from the beginning.

Autonomous cars may be so well-behaved that they need protection from more ruthless and unscrupulous bad actors on the road. That's right, I refer to those monsters we call humans.

It's safe to cut off a Google car. I ride a motorcycle to work, and in California motorcycles are allowed to split lanes (i.e., drive in the gap between lanes of cars at a stoplight, in slow traffic, etc.). Obviously I do this at every opportunity because it cuts my commute time by a third.

Once, I got a little caught out as traffic transitioned from slow-moving back to normal speed. I was in a lane between a Google car and some random truck and, partly out of experiment and partly out of impatience, I gunned it and cut off the Google car somewhat harder than I probably needed to... The car handled it perfectly (maybe too perfectly). It slowed down and let me in. However, it left a fairly significant gap between me and it. If I had been behind it, I probably would have found the gap excessive and the lengthy slowdown annoying. Honestly, I don't think it will take long for other drivers to realize that self-driving cars are "easy targets" in traffic.

Overall, I would say that I'm impressed with how these things operate. I actually do feel safer around a self-driving car than most other California drivers.

Moravec's Paradox and self-driving cars

Moravec's Paradox:

...the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."

Linguist and cognitive scientist Steven Pinker considers this the most significant discovery uncovered by AI researchers. In his book The Language Instinct, he writes:

The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.

I thought of Moravec's Paradox when reading two recent articles about Google's self-driving cars, both by Lee Gomes.

The first:

Among other unsolved problems, Google has yet to drive in snow, and Urmson says safety concerns preclude testing during heavy rains. Nor has it tackled big, open parking lots or multilevel garages. The car’s video cameras detect the color of a traffic light; Urmson said his team is still working to prevent them from being blinded when the sun is directly behind a light. Despite progress handling road crews, “I could construct a construction zone that could befuddle the car,” Urmson says.

Pedestrians are detected simply as moving, column-shaped blurs of pixels—meaning, Urmson agrees, that the car wouldn’t be able to spot a police officer at the side of the road frantically waving for traffic to stop.

The car’s sensors can’t tell if a road obstacle is a rock or a crumpled piece of paper, so the car will try to drive around either. Urmson also says the car can’t detect potholes or spot an uncovered manhole if it isn’t coned off.

More within on some of the engineering challenges still unsolved.

In his second piece, at Slate, Gomes notes something about self-driving cars that people often misunderstand about how they work (emphasis mine):

...the Google car was able to do so much more than its predecessors in large part because the company had the resources to do something no other robotic car research project ever could: develop an ingenious but extremely expensive mapping system. These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway.

That might not seem like such a tough job for the company that gave us Google Earth and Google Maps. But the maps necessary for the Google car are an order of magnitude more complicated. In fact, when I first wrote about the car for MIT Technology Review, Google admitted to me that the process it currently uses to make the maps is too inefficient to work in the country as a whole.

To create them, a dedicated vehicle outfitted with a bank of sensors first makes repeated passes scanning the roadway to be mapped. The data is then downloaded, with every square foot of the landscape pored over by both humans and computers to make sure that all-important real-world objects have been captured. This complete map gets loaded into the car's memory before a journey, and because it knows from the map about the location of many stationary objects, its computer—essentially a generic PC running Ubuntu Linux—can devote more of its energies to tracking moving objects, like other cars.

But the maps have problems, starting with the fact that the car can’t travel a single inch without one. Since maps are one of the engineering foundations of the Google car, before the company's vision for ubiquitous self-driving cars can be realized, all 4 million miles of U.S. public roads will need to be mapped, plus driveways, off-road trails, and everywhere else you'd ever want to take the car. So far, only a few thousand miles of road have gotten the treatment, most of them around the company's headquarters in Mountain View, California. The company frequently says that its car has driven more than 700,000 miles safely, but those are the same few thousand mapped miles, driven over and over again.

The common conception of self-driving cars is that they drive somewhat like humans do: they look around at the road and make decisions based on what their camera eyes “see.” I didn't realize the cars depended so heavily on pre-loaded maps.

I'm still excited about self-driving technology, but my expectations for the near to medium term have become much more modest. I once imagined I could sit in the backseat and have the car drive me to any destination while I futzed around on my phone. It seems clear now that for the foreseeable future, someone will always need to be in the driver's seat, ready to take over at a moment's notice, and the millions of hours of additional leisure time that might be returned to society are not going to materialize this way. We so often speak of self-driving cars as an inevitability, but it may be that the last few problems in that space are the most difficult to surmount. Thus Moravec's Paradox: it's “difficult or impossible to give [computers] the skills of a one-year-old when it comes to perception and mobility.”

What about an alternative approach, one that pairs humans with computers, a formula for many of the best solutions at this stage in history? What if the car could be driven by a remote driver, almost like a drone?

We live in a country where the government assumes that anyone past the age of 16 who passes a driving test can drive for life. Having nearly been run over by many an elderly driver in Palo Alto during my commute, I'm not sure that's such a sound assumption. Still, if you're sitting in a car and don't have to drive it yourself, then as long as you get where you need to go, it doesn't matter whether a computer does the driving or a remote pilot handles the wheel: you get that time back either way.

The contour of money

Meanwhile, the bullet train has sucked the country’s workforce into Tokyo, rendering an increasingly huge part of the country little more than a bedroom community for the capital. One reason for this is a quirk of Japan’s famously paternalistic corporations: namely, employers pay their workers’ commuting costs. Tax authorities don’t consider it income if it’s less than ¥100,000 a month – so Shinkansen commutes of up to two hours don’t sound so bad. New housing subdivisions filled with Tokyo salarymen subsequently sprang up along the Nagano Shinkansen route and established Shinkansen lines, bringing more people from further away into the capital.

The Shinkansen’s focus on Tokyo, and the subsequent emphasis on profitability over service, has also accelerated flight from the countryside. It’s often easier to get from a regional capital to Tokyo than to the nearest neighbouring city. Except for sections of the Tohoku Shinkansen, which serves northeastern Japan, local train lines don’t always accommodate Shinkansen rolling stock, so there are often no direct transfer points between local lines and Shinkansen lines. The Tokaido Shinkansen alone now operates 323 trains a day, taking 140 million fares a year, dwarfing local lines. This has had a crucial effect on the physical shape of the city. As a result of this funnelling, Tokyo is becoming even denser and more vertical – not just upward, but downward. With more Shinkansen passengers coming into the capital, JR East has to dig ever deeper under Tokyo Station to create more platforms.

From The Guardian on the effects of the Shinkansen bullet train on Tokyo (h/t Marginal Revolution).

We often analyze architecture and urban layouts for their purely functional and aesthetic utility, but it's just as important to understand the interplay between money and geography. They shape each other.