Notch one more for computers

In May, 1997, I.B.M.’s Deep Blue supercomputer prevailed over Garry Kasparov in a series of six chess games, becoming the first computer to defeat a world-champion chess player. Two months later, the Times offered machines another challenge on behalf of a wounded humanity: the two-thousand-year-old Chinese board game wei qi, known in the West as Go. The article said that computers had little chance of success: “It may be a hundred years before a computer beats humans at Go—maybe even longer.”

Last March, sixteen years later, a computer program named Crazy Stone defeated Yoshio Ishida, a professional Go player and a five-time Japanese champion. The match took place during the first annual Densei-sen, or “electronic holy war,” tournament, in Tokyo, where the best Go programs in the world play against one of the best humans. Ishida, who earned the nickname “the Computer” in the nineteen-seventies because of his exact and calculated playing style, described Crazy Stone as “genius.”
 

Computers overtake humans in yet another field. After Deep Blue prevailed over Kasparov in chess, humans turned to the game of Go for solace. Here was a game, it was said, that humans would dominate for quite some time. It turned out not to be much time at all.

Rémi Coulom’s Crazy Stone program was the first to successfully use what are known as “Monte Carlo” algorithms, initially developed seventy years ago as part of the Manhattan Project. Monte Carlo, like the casino games of its namesake city, depends on randomness to simulate possible worlds: when considering a move in Go, it starts with that move and then plays through hundreds of millions of random games that might follow. The program then selects the move that’s most likely to lead to one of its simulated victories. Google’s Peter Norvig explained to me why the Monte Carlo algorithms were such an important innovation: “We can’t cut off a Go game after twenty moves and say who is winning with any certainty. So we use Monte Carlo and play the game all the way to the end. Then we know for sure who won. We repeat that process millions of times, and each time the choices we make along the way are better because they are informed by the successes or failures of the previous times.”
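The playout idea Norvig describes can be sketched in a few lines. Here is a toy illustration of pure Monte Carlo move selection, applied to a simple take-away game rather than Go, and omitting the tree search and refinements that a real program like Crazy Stone layers on top:

```python
import random

def random_playout(stones, to_move):
    # Play random moves until the heap is empty. Each turn a player
    # removes 1-3 stones; whoever takes the last stone wins.
    # Returns the winning player (0 or 1). Requires stones > 0.
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return to_move  # this player took the last stone
        to_move = 1 - to_move

def monte_carlo_move(stones, player=0, playouts=2000):
    # For each legal move, simulate many random games all the way
    # to the end, then pick the move with the best empirical win rate.
    best_move, best_rate = None, -1.0
    for take in range(1, min(3, stones) + 1):
        wins = 0
        for _ in range(playouts):
            remaining = stones - take
            if remaining == 0:
                wins += 1  # taking the last stone wins outright
            elif random_playout(remaining, 1 - player) == player:
                wins += 1
        rate = wins / playouts
        if rate > best_rate:
            best_move, best_rate = take, rate
    return best_move
```

With enough playouts, the win-rate estimates reliably steer the program toward moves that leave the opponent in a losing position, even though no game-specific strategy is coded anywhere: the only knowledge is the rules and who won at the end.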

Crazy Stone won the first tournament it entered. Monte Carlo has since become the de facto algorithm for the best computer Go programs, quickly outpacing earlier, proverb-based software. The better the programs got, the less they resembled how humans play: during the game with Ishida, for example, Crazy Stone played through, from beginning to end, approximately three hundred and sixty million randomized games. At this pace, it takes Crazy Stone just a few days to play more Go games than humans collectively ever have.
 

Well, at least we still have Arimaa.

The trolley problem and self-driving cars

The trolley problem is a famous thought experiment in philosophy.

You are walking near a trolley-car track when you notice five people tied to it in a row. The next instant, you see a trolley hurtling toward them, out of control. A signal lever is within your reach; if you pull it, you can divert the runaway trolley down a side track, saving the five — but killing another person, who is tied to that spur. What do you do? Most people say they would pull the lever: Better that one person should die instead of five.
 
Now, a different scenario. You are on a footbridge overlooking the track, where five people are tied down and the trolley is rushing toward them. There is no spur this time, but near you on the bridge is a chubby man. If you heave him over the side, he will fall on the track and his bulk will stop the trolley. He will die in the process. What do you do? (We presume your own body is too svelte to stop the trolley, should you be considering noble self-sacrifice.)

In numerical terms, the two situations are identical. A strict utilitarian, concerned only with the greatest happiness of the greatest number, would see no difference: In each case, one person dies to save five. Yet people seem to feel differently about the “Fat Man” case. The thought of seizing a random bystander, ignoring his screams, wrestling him to the railing and tumbling him over is too much. Surveys suggest that up to 90 percent of us would throw the lever in “Spur,” while a similar percentage think the Fat Man should not be thrown off the bridge. Yet, if asked, people find it hard to give logical reasons for this choice. Assaulting the Fat Man just feels wrong; our instincts cry out against it.

Nothing intrigues philosophers more than a phenomenon that seems simultaneously self-evident and inexplicable. Thus, ever since the moral philosopher Philippa Foot set out Spur as a thought experiment in 1967, a whole enterprise of “trolleyology” has unfolded, with trolleyologists generating ever more fiendish variants.
 

There are entire books devoted to the subject, including the humorously titled The Trolley Problem, or Would You Throw the Fat Guy Off the Bridge?: A Philosophical Conundrum or the similarly named Would You Kill the Fat Man?: The Trolley Problem and What Your Answer Tells Us about Right and Wrong. If the obese don't have enough problems, they also stumble into philosophical quandaries merely by walking across bridges at inopportune moments.

In the abstract, the trolley problem can seem frivolous. In the real world, however, such dilemmas can prove all too real. Just around the corner lurks a technological breakthrough that will force us to confront the trolley problem once again: the self-driving car.

Say you're sitting alone in your self-driving car, idly playing on your phone while the car handles the driving, when a mother and child suddenly step out from between two parked vehicles into the road ahead. The car doesn't have enough time to brake, and if it swerves to avoid them, it will fly off a bridge and hurl you to certain death. What should the car's driving software be programmed to do in that situation?

That problem is the subject of an article in Aeon on automated ethics.

A similar computer program to the one driving our first tram would have no problem resolving this. Indeed, it would see no distinction between the cases. Where there are no alternatives, one life should be sacrificed to save five; two lives to save three; and so on. The fat man should always die – a form of ethical reasoning called consequentialism, meaning conduct should be judged in terms of its consequences.

When presented with Thomson’s trolley problem, however, many people feel that it would be wrong to push the fat man to his death. Premeditated murder is inherently wrong, they argue, no matter what its results – a form of ethical reasoning called deontology, meaning conduct should be judged by the nature of an action rather than by its consequences.

The friction between deontology and consequentialism is at the heart of every version of the trolley problem. Yet perhaps the problem’s most unsettling implication is not the existence of this friction, but the fact that – depending on how the story is told – people tend to hold wildly different opinions about what is right and wrong.

Pushing someone to their death with your bare hands is deeply problematic psychologically, even if you accept that it’s theoretically no better or worse than killing them from 10 miles away. Meanwhile, allowing someone at a distance – a starving child in another country for example – to die through one’s inaction seems barely to register a qualm. As philosophers such as Peter Singer have persuasively argued, it’s hard to see why we should accept this.
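The consequentialist rule described in the excerpt above is simple enough to state as code, which is precisely why a machine would see no distinction between the two cases. A minimal sketch, with hypothetical option names and casualty counts:

```python
def consequentialist_choice(options):
    # A strict consequentialist rule: judge each action purely by its
    # outcome, here measured as lives lost, and pick the least costly.
    # options maps an action's name to the number of deaths it causes.
    return min(options, key=lambda name: options[name])

# The two classic trolley cases collapse to the same calculation:
spur = {"do nothing": 5, "pull the lever": 1}
footbridge = {"do nothing": 5, "push the man": 1}
```

To this chooser, Spur and the Fat Man are literally the same problem with different labels, which is exactly the equivalence most human respondents refuse to accept.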
 

If a robot programmed with Asimov's Three Laws of Robotics were confronted with the trolley problem, what would the robot do? There are long threads dedicated to just this question.

Lots of people have already foreseen this core ethical problem with self-driving cars. I haven't seen any consensus on a solution, though. Not an easy problem, but one that we now have to wrestle with as a society.

Or, at least, some people will have to wrestle with the problem. Frankly, I'm happy today when my Roomba doesn't get itself stuck during one of its cleaning sessions.

Advice for a happy life

By Charles Murray:

But should you assume that marriage is still out of the question when you're 25? Twenty-seven? I'm not suggesting that you decide ahead of time that you will get married in your 20s. You've got to wait until the right person comes along. I'm just pointing out that you shouldn't exclude the possibility. If you wait until your 30s, your marriage is likely to be a merger. If you get married in your 20s, it is likely to be a startup.

Merger marriages are what you tend to see on the weddings pages of the Sunday New York Times: highly educated couples in their 30s, both people well on their way to success. Lots of things can be said in favor of merger marriages. The bride and groom may be more mature, less likely to outgrow each other or to feel impelled, 10 years into the marriage, to make up for their lost youth.

But let me put in a word for startup marriages, in which the success of the partners isn't yet assured. The groom with his new architecture degree is still designing stairwells, and the bride is starting her third year of medical school. Their income doesn't leave them impoverished, but they have to watch every penny.

What are the advantages of a startup marriage? For one thing, you will both have memories of your life together when it was all still up in the air. You'll have fun remembering the years when you went from being scared newcomers to the point at which you realized you were going to make it.

...

Marry someone with similar tastes and preferences. Which tastes and preferences? The ones that will affect life almost every day.

It is OK if you like the ballet and your spouse doesn't. Reasonable people can accommodate each other on such differences. But if you dislike each other's friends, or don't get each other's senses of humor or—especially—if you have different ethical impulses, break it off and find someone else.

Personal habits that you find objectionable are probably deal-breakers. Jacques Barzun identified the top three as punctuality, orderliness and thriftiness. It doesn't make any difference which point of the spectrum you're on, he observed: "Some couples are very happy living always in debt, always being late, and finding leftover pizza under a sofa cushion." You just have to be at the same point on the spectrum. Intractable differences will become, over time, a fingernail dragged across the blackboard of a marriage.
 

And last, but not least, watch Groundhog Day a lot.

Without the slightest bit of preaching, the movie shows the bumpy, unplanned evolution of its protagonist from a jerk to a fully realized human being—a person who has learned to experience deep, lasting and justified satisfaction with life even though he has only one day to work with.

You could learn the same truths by studying Aristotle's "Ethics" carefully, but watching "Groundhog Day" repeatedly is a lot more fun.