War for the roads

Drawing on these arguments about power, precedence, and morality—and, also, through sheer numbers—pedestrians, drivers, and bicyclists all make strong claims to the streets. And yet the picture is even more complex, because almost no one is exclusively a walker, a cyclist, or a driver. We shift from role to role, and with those changes comes a shift in our vantage point.

There is, therefore, another, and perhaps more fundamental, source for our sense of vehicular entitlement: egocentricity. We all experience the world from our own point of view, and find it exceedingly difficult to move away from that selfish anchor. (Psychologists call this our egocentric bias.) Who we are colors what and how we see, and who we are changes depending on our mode of transportation. When we walk, we’re pedestrians. When we’re in a car, we’re drivers. When we bike, we’re cyclists. And whoever we are at the moment, we feel that we are deserving of priority.

When it comes to in-the-moment judgment, we don’t think abstractly, in terms of rules or laws or even common sense. We think concretely, in terms of our own personal needs at that very moment. It takes a big, effortful leap to tear ourselves out of that mode and accept someone else’s argument—and it’s an effort we don’t often make unless we’re specifically prompted to do so. And so, in some sense, it doesn’t matter who came first, or who’s the most powerful, or who’s best for the environment, or what the rules might say. What matters is what we, personally, happen to be doing. It’s hard to remind ourselves that we all play interchangeable roles within the urban landscape. In the end, it’s the role we’re in right now that matters. The never-ending war between bicyclists, drivers, and pedestrians reflects a basic, and often wrong, mental shortcut, upon which we all too often rely: Who is in the right? I am.


Maria Konnikova speaks the truth on the battle for our streets and sidewalks among cars, bicycles, and pedestrians.

Nowadays I spend about equal time as a driver, pedestrian, and cyclist, and the only conclusion I feel confident drawing is that everyone is wrong sometimes. Some drivers are terrifying, some cyclists are obnoxious, and many pedestrians are oblivious and inconsiderate.

Physics renders a car more dangerous than a bike, which in turn is more dangerous than a pedestrian. All things being equal, I'm more terrified of road rage than obnoxious cyclists, and I'm more upset at reckless bike messengers than careless pedestrians. I'm more than ready for the age of the self-driving car because the combination of humans, with their emotional volatility and egocentricity, and a several-thousand-pound hunk of metal and glass is, when in motion, a movable instrument of death.

Why do we assume everyone can drive competently?

A reader of Emerging Technologies Blog and Mountain View resident writes in about the Google self-driving cars they see on the road regularly. It's well worth a read.

Anyway, here we go: Other drivers don't even blink when they see one. Neither do pedestrians - there's no "fear" from the general public about crashing or getting run over, at least not as far as I can tell.

Google cars drive like your grandma - they're never the first off the line at a stop light, they don't accelerate quickly, they don't speed, and they never take any chances with lane changes (cut people off, etc.).

During the years I was commuting down to Palo Alto or Menlo Park from San Francisco, I often took the 280 instead of the 101. It wasn't necessarily faster, but it was more scenic, and traffic tended to move more quickly even if my route ended up longer. On the 280 I often saw the Google Lexus self-driving cars. After the novelty wore off, I didn't think of them any differently than the other cars on the road. If you had told me ten years ago that I would be driving on the freeway alongside a car driven by a computer, I would've thought you were describing something out of Blade Runner. Now we speak of the technology as an eventuality. Such is life in technology, where we speak with the certainty of Moore's Law.

I cycle a lot in San Francisco, and this weekend on the way back from a ride across the Golden Gate Bridge, I was cruising in the bike lane along the Embarcadero towards SOMA next to deadlocked traffic. Without any warning or a turn signal, a car in the right lane suddenly cut into the bike lane in front of me, apparently deciding to horn in and wait for a curbside parking spot. Road bikes have terrible brakes, and regardless, I had no room to stop in time. I screamed reflexively, adrenaline spiking, and leaned my bike right, just barely missing the right front fender of the car, then leaned hard left and managed to angle my bike and body around the left rear fender of the parked car along the curb.

For the next two blocks, I played my near collision on loop in my head like a Vine, both angry at the driver's reckless maneuver and relieved as I tallied up the likely severity of the injuries I had just escaped by less than a foot of clearance. This is not an unusual occurrence, unfortunately. When I bike, I just assume that drivers will suddenly make right turns in front of me without using their turn signal or looking back to see if I'm coming up in the bike lane to their right. It happens all the time. It's not just a question of skill but of mental obliviousness. American drivers have had the road to themselves for so long that they feel no need to consider that anyone else might be laying claim to any piece of it. Though the roads in Europe are often narrower, I feel a hundred times safer biking there than I do in the U.S.

All that's to say I agree wholeheartedly with the writer quoted above that self-driving cars are much less threatening than cars driven by humans. As an avid cyclist, especially, I can think of nothing that would ease my mind more when biking through the city than replacing every car on the road with a self-driving one.

We have years and years of data on human ability to drive cars, and if we've learned anything, it's that humans are lousy drivers. For some reason (I don't know the history behind it), we've assumed that just about every person is qualified to drive a car. I can't remember the last time I met someone who had failed their driving test and didn't have a driver's license. I can think of plenty of times I've met people I'd be scared to see behind the wheel.

Humans drink and drive. Humans text on their phones when they should be looking at the road. Humans, especially Americans, among whom drivers feel a great sense of entitlement, pull aggressive maneuvers in fits of road rage, or drive at unsafe racetrack speeds on public roads. They run red lights, cut other drivers off, tailgate, drag race, and so on. Cars have killed more humans in the U.S. than just about anything else for the better part of a century now. The less people drive (as happens, for instance, when gasoline prices rise), the fewer people die.

Why do we assume driving is something everyone is not only capable of but skilled enough to execute at a level that doesn't endanger others? I'd venture to guess that it's no easier to learn to drive a car than it is to cut hair, but our test to get a driver's license is much easier, despite the fact that the worst a bad stylist can do is give you a bad haircut while the worst a lousy driver can do is kill other human beings. Perhaps our romance with the American road is so deep, our conception of American freedom so intimately associated with hopping in a car and taking ourselves anywhere, that the thirty to forty thousand fatal crashes each year are seen as an acceptable error rate.

Whatever the reason, and whether you agree or disagree, it is something unlikely to change in the U.S. unless something comes along to force a serious reevaluation of our assumptions about driving quality. That something just might be self-driving cars, which will be held to a much higher standard of driving safety than we've held humans to all these years. You might say that self-driving cars will be held to the standard we should've held ourselves to from the beginning.

Autonomous cars may be so well-behaved that they need protection from more ruthless and unscrupulous bad actors on the road. That's right, I refer to those monsters we call humans.

It's safe to cut off a Google car. I ride a motorcycle to work, and in California motorcycles are allowed to split lanes (i.e., drive in the gap between lanes of cars at a stoplight, in slow traffic, etc.). Obviously I do this at every opportunity because it cuts my commute time by a third.

Once, I got a little caught out as the traffic transitioned from slow moving back to normal speed. I was in a lane between a Google car and some random truck and, partially out of experiment and partially out of impatience, I gunned it and cut off the Google car sort of harder than maybe I needed to... The car handled it perfectly (maybe too perfectly). It slowed down and let me in. However, it left a fairly significant gap between me and it. If I had been behind it, I probably would have found this gap excessive and the lengthy slowdown annoying. Honestly, I don't think it will take long for other drivers to realize that self-driving cars are "easy targets" in traffic.

Overall, I would say that I'm impressed with how these things operate. I actually do feel safer around a self-driving car than most other California drivers.

Moravec's Paradox and self-driving cars

Moravec's Paradox:

...the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky, and others in the 1980s. As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."

Linguist and cognitive scientist Steven Pinker considers this the most significant discovery uncovered by AI researchers. In his book The Language Instinct, he writes:

The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.

I thought of Moravec's Paradox when reading two recent articles about Google's self-driving cars, both by Lee Gomes.

The first:

Among other unsolved problems, Google has yet to drive in snow, and Urmson says safety concerns preclude testing during heavy rains. Nor has it tackled big, open parking lots or multilevel garages. The car’s video cameras detect the color of a traffic light; Urmson said his team is still working to prevent them from being blinded when the sun is directly behind a light. Despite progress handling road crews, “I could construct a construction zone that could befuddle the car,” Urmson says.

Pedestrians are detected simply as moving, column-shaped blurs of pixels—meaning, Urmson agrees, that the car wouldn’t be able to spot a police officer at the side of the road frantically waving for traffic to stop.

The car’s sensors can’t tell if a road obstacle is a rock or a crumpled piece of paper, so the car will try to drive around either. Urmson also says the car can’t detect potholes or spot an uncovered manhole if it isn’t coned off.

There's more in the piece on the engineering challenges still unsolved.

In his second piece, at Slate, Gomes notes something about self-driving cars that people often misunderstand about how they work (emphasis mine):

...the Google car was able to do so much more than its predecessors in large part because the company had the resources to do something no other robotic car research project ever could: develop an ingenious but extremely expensive mapping system. These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway.

That might not seem like such a tough job for the company that gave us Google Earth and Google Maps. But the maps necessary for the Google car are an order of magnitude more complicated. In fact, when I first wrote about the car for MIT Technology Review, Google admitted to me that the process it currently uses to make the maps is too inefficient to work in the country as a whole.

To create them, a dedicated vehicle outfitted with a bank of sensors first makes repeated passes scanning the roadway to be mapped. The data is then downloaded, with every square foot of the landscape pored over by both humans and computers to make sure that all-important real-world objects have been captured. This complete map gets loaded into the car's memory before a journey, and because it knows from the map about the location of many stationary objects, its computer—essentially a generic PC running Ubuntu Linux—can devote more of its energies to tracking moving objects, like other cars.

But the maps have problems, starting with the fact that the car can’t travel a single inch without one. Since maps are one of the engineering foundations of the Google car, before the company's vision for ubiquitous self-driving cars can be realized, all 4 million miles of U.S. public roads will need to be mapped, plus driveways, off-road trails, and everywhere else you'd ever want to take the car. So far, only a few thousand miles of road have gotten the treatment, most of them around the company's headquarters in Mountain View, California. The company frequently says that its car has driven more than 700,000 miles safely, but those are the same few thousand mapped miles, driven over and over again.

The common conception of self-driving cars is that they drive somewhat like humans do. That is, they look around at the road and make decisions on what their camera eyes “see.” I didn't realize that the cars were depending so heavily on pre-loaded maps.
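
The division of labor Gomes describes can be sketched in a few lines. This is a toy illustration of my own, not Google's actual pipeline: the coordinates, the object labels, and the one-meter matching radius are all made up. The point is only that anything the pre-loaded map already explains can be ignored, freeing the computer to track what's left, which is probably moving.

```python
import math

# Pre-loaded map: known static objects with surveyed (x, y) positions, in meters.
# These entries are invented for illustration.
STATIC_MAP = [
    ("stop_sign", 10.0, 4.0),
    ("streetlight", 22.5, 3.8),
    ("crosswalk_edge", 30.0, 0.0),
]

def is_in_static_map(x, y, radius=1.0):
    """True if a detection sits within `radius` meters of a mapped object."""
    return any(math.hypot(x - mx, y - my) <= radius for _, mx, my in STATIC_MAP)

def unexpected_objects(detections):
    """Keep only detections the map can't explain: candidates for tracking."""
    return [d for d in detections if not is_in_static_map(d[1], d[2])]

# One simulated sensor frame: two mapped fixtures plus one unexplained blob.
frame = [
    ("blob", 10.1, 3.9),   # matches the stop sign -> ignored
    ("blob", 22.4, 3.7),   # matches the streetlight -> ignored
    ("blob", 15.0, -2.0),  # unexplained -> track it (probably another car)
]
print(unexpected_objects(frame))  # only the third blob survives
```

The flip side, as Gomes notes, is that this trick only works where the map exists: off the mapped miles, everything is "unexpected."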

I'm still excited about self-driving technology, but my expectations for the near to medium term have become much more modest. I once imagined I could just sit in the backseat of a car and have it drive me to any destination while I futzed around on my phone. It seems clear now that for the foreseeable future, someone always needs to be in the driver's seat, ready to take over at a moment's notice, and the millions of hours of additional leisure time that might be returned to society are not going to materialize this way. We so often throw around self-driving cars as if they're an inevitability, but it may be that the last few problems to solve in that space are the most difficult ones to surmount. Thus Moravec's Paradox: it's “difficult or impossible to give [computers] the skills of a one-year-old when it comes to perception and mobility.”

What about an alternative approach, one that pairs humans with computers, a formula for many of the best solutions at this stage in history? What if the car could be driven by a remote driver, almost like a drone?

We live in a country where the government assumes anyone past the age of 16 who passes a driver's test can drive for life. Having nearly been run over by many an elderly driver in Palo Alto during my commute to work, I'm not sure that's such a sound assumption. Still, if you're sitting in a car and don't have to drive it yourself, and you get where you need to go, then whether a computer does the driving or a remote pilot handles the wheel, you still get that time back.

The trolley problem and self-driving cars

The trolley problem is a famous thought experiment in philosophy.

You are walking near a trolley-car track when you notice five people tied to it in a row. The next instant, you see a trolley hurtling toward them, out of control. A signal lever is within your reach; if you pull it, you can divert the runaway trolley down a side track, saving the five — but killing another person, who is tied to that spur. What do you do? Most people say they would pull the lever: Better that one person should die instead of five.
 
Now, a different scenario. You are on a footbridge overlooking the track, where five people are tied down and the trolley is rushing toward them. There is no spur this time, but near you on the bridge is a chubby man. If you heave him over the side, he will fall on the track and his bulk will stop the trolley. He will die in the process. What do you do? (We presume your own body is too svelte to stop the trolley, should you be considering noble self-sacrifice.)

In numerical terms, the two situations are identical. A strict utilitarian, concerned only with the greatest happiness of the greatest number, would see no difference: In each case, one person dies to save five. Yet people seem to feel differently about the “Fat Man” case. The thought of seizing a random bystander, ignoring his screams, wrestling him to the railing and tumbling him over is too much. Surveys suggest that up to 90 percent of us would throw the lever in “Spur,” while a similar percentage think the Fat Man should not be thrown off the bridge. Yet, if asked, people find it hard to give logical reasons for this choice. Assaulting the Fat Man just feels wrong; our instincts cry out against it.

Nothing intrigues philosophers more than a phenomenon that seems simultaneously self-evident and inexplicable. Thus, ever since the moral philosopher Philippa Foot set out Spur as a thought experiment in 1967, a whole enterprise of “trolley­ology” has unfolded, with trolleyologists generating ever more fiendish variants.

There are entire books devoted to the subject, including the humorously titled The Trolley Problem, or Would You Throw the Fat Guy Off the Bridge: A Philosophical Conundrum and the similarly named Would You Kill the Fat Man?: The Trolley Problem and What Your Answer Tells Us about Right and Wrong. As if the obese didn't have enough problems, they also stumble into philosophical quandaries merely by walking across bridges at inopportune moments.

In the abstract, the trolley problem can seem frivolous. In the real world, however, such dilemmas can prove very real and complex. Just around the corner lurks a technological breakthrough which will force us to confront the trolley problem once again: the self-driving car.

Say you're sitting by yourself in your self-driving car, just playing aimlessly on your phone while your car handles the driving duties, when suddenly a mother and child step out in front of the car from between two parked cars on the side of the road. The self-driving car doesn't have enough time to brake, and if it swerves to avoid the mother and child, the car will fly off a bridge and throw you to certain death. What should the car's driving software be programmed to do in that situation?

That problem is the subject of an article in Aeon on automated ethics.

A similar computer program to the one driving our first tram would have no problem resolving this. Indeed, it would see no distinction between the cases. Where there are no alternatives, one life should be sacrificed to save five; two lives to save three; and so on. The fat man should always die – a form of ethical reasoning called consequentialism, meaning conduct should be judged in terms of its consequences.

When presented with Thomson’s trolley problem, however, many people feel that it would be wrong to push the fat man to his death. Premeditated murder is inherently wrong, they argue, no matter what its results – a form of ethical reasoning called deontology, meaning conduct should be judged by the nature of an action rather than by its consequences.

The friction between deontology and consequentialism is at the heart of every version of the trolley problem. Yet perhaps the problem’s most unsettling implication is not the existence of this friction, but the fact that – depending on how the story is told – people tend to hold wildly different opinions about what is right and wrong.

Pushing someone to their death with your bare hands is deeply problematic psychologically, even if you accept that it’s theoretically no better or worse than killing them from 10 miles away. Meanwhile, allowing someone at a distance – a starving child in another country for example – to die through one’s inaction seems barely to register a qualm. As philosophers such as Peter Singer have persuasively argued, it’s hard to see why we should accept this.

If a robot programmed with Asimov's Three Laws of Robotics were confronted with the trolley problem, what would the robot do? There are long threads dedicated to just this question.

Lots of people have already foreseen this core ethical problem with self-driving cars. I haven't seen any consensus on a solution, though. Not an easy problem, but one that we now have to wrestle with as a society.
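
Part of what makes the problem hard is visible the moment you try to write the two ethical frameworks down as code. This is a deliberately crude sketch of my own, not anyone's proposal for actual car software: the consequentialist rule is trivially computable, while the deontological one needs an extra fact about the kind of action being taken, which is exactly the fact sensors don't give you.

```python
def consequentialist_choice(deaths_if_act, deaths_if_do_nothing):
    """Pure outcome-counting: act whenever acting kills fewer people."""
    return "act" if deaths_if_act < deaths_if_do_nothing else "do nothing"

def deontological_choice(deaths_if_act, deaths_if_do_nothing, act_is_killing):
    """Forbid acts that are themselves killings, whatever the arithmetic."""
    if act_is_killing:
        return "do nothing"
    return consequentialist_choice(deaths_if_act, deaths_if_do_nothing)

# Spur: diverting the trolley (1 dies) vs. doing nothing (5 die).
print(consequentialist_choice(1, 5))                    # act

# Fat Man: the body count is identical, but pushing him is itself a
# killing, so the deontological rule refuses.
print(deontological_choice(1, 5, act_is_killing=True))  # do nothing
```

Note that the disagreement between the two functions is entirely carried by the `act_is_killing` flag, a judgment no survey of outcomes can settle.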

Or, at least, some people will have to wrestle with the problem. Frankly, I'm happy today when my Roomba doesn't get itself stuck during one of its cleaning sessions.

Adventures in teaching self-driving cars

For complicated moves like that, Thrun’s team often started with machine learning, then reinforced it with rule-based programming—a superego to control the id. They had the car teach itself to read street signs, for instance, but they underscored that knowledge with specific instructions: “stop” means stop. If the car still had trouble, they’d download the sensor data, replay it on the computer, and fine-tune the response. Other times, they’d run simulations based on accidents documented by the National Highway Traffic Safety Administration. A mattress falls from the back of a truck. Should the car swerve to avoid it or plow ahead? How much advance warning does it need? What if a cat runs into the road? A deer? A child? These were moral questions as well as mechanical ones, and engineers had never had to answer them before. The darpa cars didn’t even bother to distinguish between road signs and pedestrians—or “organics,” as engineers sometimes call them. They still thought like machines.

Four-way stops were a good example. Most drivers don’t just sit and wait their turn. They nose into the intersection, nudging ahead while the previous car is still passing through. The Google car didn’t do that. Being a law-abiding robot, it waited until the crossing was completely clear—and promptly lost its place in line. “The nudging is a kind of communication,” Thrun told me. “It tells people that it’s your turn. The same thing with lane changes: if you start to pull into a gap and the driver in that lane moves forward, he’s giving you a clear no. If he pulls back, it’s a yes. The car has to learn that language.”

From Burkhard Bilger's New Yorker piece on Google's self-driving car. The engineering issues they've had to deal with are fascinating.
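
The "superego to control the id" pattern Bilger describes can be sketched simply: a learned model proposes an action, and a hand-written rule layer gets the final word. This is my own minimal illustration, not Thrun's code; the sign labels, confidence threshold, and stand-in "model" are all hypothetical.

```python
def learned_policy(sign_label, confidence):
    """Stand-in for a trained classifier: trusts its reading only when confident."""
    if confidence < 0.6:
        return "proceed_with_caution"
    return {"stop": "stop", "yield": "yield", "speed_limit_25": "cruise"}.get(
        sign_label, "proceed_with_caution")

def rule_layer(sign_label, proposed_action):
    """Hard-coded rules override the model: 'stop' means stop, no matter what."""
    if sign_label == "stop" and proposed_action != "stop":
        return "stop"
    return proposed_action

def decide(sign_label, confidence):
    return rule_layer(sign_label, learned_policy(sign_label, confidence))

# A low-confidence read of a stop sign: the model hedges, the rule doesn't.
print(decide("stop", 0.4))   # stop
print(decide("yield", 0.9))  # yield
```

The appeal of the layering is that the learned part can stay fuzzy and improvable while the safety-critical invariants stay auditable.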

As many have noted, legal or regulatory risk may be the largest obstacle to seeing self-driving cars on our roads in volume. To counter that, I hypothesize that all self-driving cars will ship with a black box, like airplanes do, and that their cameras will record a continuous video feed that keeps overwriting itself, perhaps a loop of the most recent 30 minutes of driving, along with key sensor readings. That way, someone who spots the self-driving sensor on a car can't just back into it or hurl themselves across its windshield to extract a big settlement from Google.
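
The continuously overwriting recording is just a ring buffer, the same principle dashcams use. A minimal sketch, with the 30-minute window from the scenario above shrunk to a five-slot buffer for readability (the frame strings are placeholders, not a real video format):

```python
from collections import deque

class BlackBoxRecorder:
    def __init__(self, capacity):
        # A deque with maxlen silently discards the oldest entry when full,
        # which is exactly the "keeps overwriting itself" behavior.
        self.frames = deque(maxlen=capacity)

    def record(self, frame):
        self.frames.append(frame)

    def dump(self):
        """What investigators would pull after an incident: oldest first."""
        return list(self.frames)

box = BlackBoxRecorder(capacity=5)
for t in range(8):            # record 8 frames into a 5-frame buffer
    box.record(f"frame-{t}")
print(box.dump())             # only the 5 most recent frames survive
```

A real recorder would also freeze the buffer on a crash trigger so the evidence isn't overwritten while the car sits at the scene.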

In fact, as sensors and video recording devices come down in cost, it may become law that all cars come with such accessories, self-driving or not, making it much easier to determine fault in car accidents. The same cost/weight improvements in video tech may make it so Amazon drones are also equipped with a continuously recording video camera, the better for determining who may have brought it down with a rock to steal its payload.

Perhaps Google will take the continuous video feeds as a crowd-sourced way to update its street maps. That leads, of course, to the obvious drawback of such a scenario: the privacy concerns over how Google would use the data and video from the cars. That's a cultural issue, however, and seems more tractable than the legal one.

The law and self-driving cars

Robocar accidents (and AI and robotics in general) bring a whole new way of looking at the law. Generally, the law exists to deter and punish bad activity done by humans who typically knew the law, and knew they were doing something unsafe or nefarious. It is meaningless to punish robots, but in punishing the people and companies who make them, it will likely be the case that they did everything they could to stay within the law and held no ill will.

If a robocar (or its occupant or maker) ever gets a correct ticket for violating the vehicle code or other law, this will be a huge event for the team who made it. They'll be surprised, and they'll immediately work to fix whatever flaw caused that to happen. While software updates will not be instantaneous, soon that fix will be downloaded to all vehicles. All competitors will check their own systems to make sure they haven't made the same mistake, and they will also fix things if they need to.

As such, all robocars, as a group, will never get the same ticket again.

This is very much unlike the way humans work. When the first human got a ticket for an unsafe lane change, this didn't stop all the other people from making unsafe lane changes. At best, hearing about how expensive the ticket was will put a slight damper on things. Laws exist because humans can't be trusted not to break them, and there must be a means to stop them.

This suggests an entirely different way of looking at the law. Most of the vehicle code is there because humans can't be trusted to follow the general principles behind the code -- to stay safe, and to not unfairly impede the path of others and keep traffic flowing. There are hundreds of millions of drivers, each with their own personalities, motives and regard or disregard for those principles and the law.

In the robocar world, there will probably not be more than a couple of dozen distinct "drivers" in a whole nation. You could literally get the designers of all these systems together in a room, and work out any issues of how to uphold the principles of the road.

Much more here on the legal complexities surrounding self-driving cars. I find the topic more interesting than, say, the social implications of Google Glass. More important, too.