Adventures in teaching self-driving cars

For complicated moves like that, Thrun’s team often started with machine learning, then reinforced it with rule-based programming—a superego to control the id. They had the car teach itself to read street signs, for instance, but they underscored that knowledge with specific instructions: “stop” means stop. If the car still had trouble, they’d download the sensor data, replay it on the computer, and fine-tune the response. Other times, they’d run simulations based on accidents documented by the National Highway Traffic Safety Administration. A mattress falls from the back of a truck. Should the car swerve to avoid it or plow ahead? How much advance warning does it need? What if a cat runs into the road? A deer? A child? These were moral questions as well as mechanical ones, and engineers had never had to answer them before. The DARPA cars didn’t even bother to distinguish between road signs and pedestrians—or “organics,” as engineers sometimes call them. They still thought like machines.

Four-way stops were a good example. Most drivers don’t just sit and wait their turn. They nose into the intersection, nudging ahead while the previous car is still passing through. The Google car didn’t do that. Being a law-abiding robot, it waited until the crossing was completely clear—and promptly lost its place in line. “The nudging is a kind of communication,” Thrun told me. “It tells people that it’s your turn. The same thing with lane changes: if you start to pull into a gap and the driver in that lane moves forward, he’s giving you a clear no. If he pulls back, it’s a yes. The car has to learn that language.”

From Burkhard Bilger's New Yorker piece on Google's self-driving car. The engineering issues they've had to deal with are fascinating.
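That "superego to control the id" structure, a learned model whose output can be vetoed by hard-coded rules, is easy to picture in code. Here's a minimal sketch of the general idea, with a hypothetical sign classifier, made-up action names, and an arbitrary confidence threshold (none of this is Google's actual system):

```python
# Sketch: a learned perception/planning layer constrained by hard-coded rules.
# All names and thresholds below are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class SignDetection:
    label: str         # e.g. "stop", "yield", "speed_limit_25"
    confidence: float  # 0.0-1.0, produced by the learned classifier

# Rule layer: non-negotiable responses, whatever the learned policy prefers.
MANDATORY_ACTIONS = {
    "stop": "full_stop",
    "yield": "yield_to_traffic",
}

def plan_action(detection: SignDetection, learned_action: str) -> str:
    """Let the learned policy propose an action, then let the rules veto it."""
    if detection.confidence > 0.9 and detection.label in MANDATORY_ACTIONS:
        return MANDATORY_ACTIONS[detection.label]  # "stop" means stop
    return learned_action

# The learned policy wants to coast through; the rule layer overrides it.
print(plan_action(SignDetection("stop", 0.97), learned_action="proceed_slowly"))
```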

As many have noted, legal or regulatory risk may be the largest obstacle to seeing self-driving cars on our roads in volume. To counter that, I hypothesize that all self-driving cars will ship with a black box, like airplanes, and that all of their cameras will record a continuous video feed that keeps overwriting itself, maybe a loop of the most recent 30 minutes of driving at all times, along with key sensor readings. That way, if someone spots the self-driving sensor on a car, they can't just back into it or hurl themselves across its windshield to get a big settlement from Google.
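That continuously overwriting loop is essentially a ring buffer. A minimal sketch of such a recorder, assuming a 10 fps feed and a 30-minute window (both numbers are my own guesses):

```python
# Sketch of a "black box" loop recorder: a fixed-size ring buffer that always
# holds the most recent ~30 minutes of frames and sensor readings, silently
# dropping the oldest entries as new ones arrive.
from collections import deque
import time

FPS = 10                  # assumed recording rate
WINDOW_SECONDS = 30 * 60  # keep the last 30 minutes
BUFFER_SIZE = FPS * WINDOW_SECONDS

recorder = deque(maxlen=BUFFER_SIZE)  # old entries fall off automatically

def record(frame_bytes: bytes, sensors: dict) -> None:
    """Append one timestamped frame plus key sensor readings to the loop."""
    recorder.append({
        "t": time.time(),
        "frame": frame_bytes,
        "sensors": sensors,  # e.g. speed, steering angle, lidar summary
    })

def dump_after_collision() -> list:
    """On impact, freeze the loop and hand the whole thing to investigators."""
    return list(recorder)
```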

In fact, as sensors and video recording devices come down in cost, it may become law that all cars, self-driving or not, come with such equipment, making it much easier to determine fault in car accidents. The same cost and weight improvements in video tech may mean Amazon delivery drones also carry a continuously recording camera, the better to determine who brought one down with a rock to steal its payload.

Perhaps Google will use the continuous video feeds as a crowd-sourced way to update its street maps. That leads, of course, to the obvious drawback of such a scenario: the privacy concerns over how Google would use the data and video from the cars. That's a cultural issue, though, and it seems more tractable than the legal one.