The presumption of human error

Naturally, we respect and admire doctors. We believe that health care is scientific. We think of hospitals as places of safety. For all these reasons, it comes as something of a shock to realise that errors still play such a significant role in whether we leave a hospital better or worse, alive or dead.

The National Audit Office estimates that there may be 34,000 deaths annually as a result of patient safety incidents. When he was medical director, Liam Donaldson warned that the chances of dying as a result of a clinical error in hospital are 33,000 times higher than dying in an air crash. This isn’t a problem peculiar to our health-care system. In the United States, errors are estimated to be the third most common cause of deaths in health care, after cancer and heart disease. Globally, there is a one-in-ten chance that, owing to preventable mistakes or oversights, a patient will leave a hospital in a worse state than when she entered it.

There are other industries where mistakes carry grave consequences, but the mistakes of doctors carry a particular moral charge because their job is to make us better, and we place infinite trust in the expectation they
will do so. When you think about it, it’s extraordinary we’re prepared to give a virtual stranger permission to cut us open with a knife and rearrange our insides as we sleep.

Perhaps because of the almost superstitious faith we need to place in surgeons, we hate to think of them as fallible; to think that they perform worse when they are tired, or that some are much better at the job than others, or that hands can slip because of nerves, or that bad decisions get taken because of overconfidence, or stress, or poor communication. But all of these things happen, because doctors are human.
 

What the medical profession can learn from the airline industry about how to protect against human error: a riveting story about why we must build systems around the assumption of inevitable human error.

The passage above mirrors my own journey towards realizing that not all doctors are infallible. It may seem silly now, but as a child, I had a learned reverence for the medical profession. All the training, all the accreditation, the requirement to address them by a title all their own—“Doctor”—was blinding.

In 1997 I was back in the Bay Area and went to play pickup basketball with some old Stanford classmates at a local gym. A scrimmage game against a group of older Greeks turned heated, as such games are wont to be with the physical release of so much pent-up testosterone. On one drive to the basket, I took a hard shove and went flying sideways. I landed on my left foot, and my left knee flew sideways into an opponent's leg that was planted on the ground.

I felt a searing pain immediately and collapsed. Some teammates carried me to the sideline, and my knee immediately started swelling. I'd never felt anything like this before. Something had happened, but I didn't know what.

I crutched my way back to Seattle, stayed on crutches for a few days, and eventually got in to see an ortho. He laid me down, tugged on my leg a bit this way and that, moved my leg around, and gave me a comforting diagnosis: it was a mild sprain, and I could resume light physical activity after the swelling subsided.

At the gym, on an elliptical trainer, something didn't feel right. But the doctor had done some tests; who was I to question him? The web existed, but it was much sparser than it is now. WebMD and sites like that didn't exist.

Back then, Amazon's unofficial official company sport was broomball. The popular stereotype of technology companies as populated by a bunch of meek, gaunt, sun-deprived software developers neglects the army of MBAs with their world-conquering ambitions, the ex-college jocks in business development, the crazy endurance athletes whose motors ran just as hard on the field as in the office. We played at company functions, and the games felt like some form of trial by combat.

My knee still felt off, but I wasn't about to miss out on our team's broomball contest. We played on a muddy field; it was like some form of field hockey minus the pads. You just had to accept that you'd leave battered, your shins a mess of bruises. We taped tennis balls to the ends of our broomsticks so as not to take out anyone's eyes.

From the start I couldn't move that well, so I hung back to play defense. And then an opponent broke loose, a herd of people chasing him, and I moved to intercept the ball. At best, with all the momentum he'd built up, I hoped to deflect the ball horizontally to give the rest of my team a chance to catch up and reset.

As I moved diagonally to meet the path of the ball, he tried to make a sharp cut, but on the muddy field, he couldn't turn enough, and both ball and opponent came sideways and collided with me.

My left leg experienced what the doctor would later call a pivot shift, where the upper and lower leg come out of alignment. I fell to the ground screaming. My day was over, and I don't remember now how I drove myself home, considering my car was a manual transmission.

I found myself back in that same ortho's office a day later, and I told him something wasn't right, to check me again. This time, he consented to order an MRI.

When the results came back, he was almost sheepish in sharing the news. Though he'd performed the standard Lachman test and some other tests the last time I'd come in, I did in fact have a torn ACL. I'd been running around for weeks without my left ACL.

Needless to say, I didn't let that ortho perform my ACL reconstruction.

Had the ortho been hesitant to order an MRI on that first visit because of the expense, because I was on an HMO? Or did he just not perform the Lachman test properly? It still haunts me, but the lasting consequence was the shattering of my belief in the infallibility of doctors. I still have deep respect for the medical profession (my brother and his wife are both doctors whom I turn to again and again for advice), but nothing about medical training magically removes human error from day-to-day life.

 

If the severity of Elaine’s condition in those crucial minutes wasn’t registered by the doctors, it was noticed by others in the room. The nurses saw Elaine’s erratic breathing; the blueness of her face; the swings in her blood pressure; the lowness of her oxygen levels and the convulsions of her body. They later said that they had been surprised when the doctors didn’t attempt to gain access to the trachea, but felt unable to broach the subject. Not directly, anyway: one nurse located a tracheotomy set and presented it to the doctors, who didn’t even acknowledge her. Another nurse phoned the intensive-care unit and told them to prepare a bed immediately. When she informed the doctors of her action they looked at her, she said later, as if she was overreacting.

Reading this, you may be incredulous and angry that the doctors could have been so stupid, or so careless. But when the person closest to this event, Martin Bromiley, read Harmer’s report, he responded very differently. His main sensation wasn’t shock, or fury. It was recognition.
 

RELATED: Atul Gawande's great book The Checklist Manifesto.

A theory of jerks

Picture the world through the eyes of the jerk. The line of people in the post office is a mass of unimportant fools; it’s a felt injustice that you must wait while they bumble with their requests. The flight attendant is not a potentially interesting person with her own cares and struggles but instead the most available face of a corporation that stupidly insists you shut your phone. Custodians and secretaries are lazy complainers who rightly get the scut work. The person who disagrees with you at the staff meeting is an idiot to be shot down. Entering a subway is an exercise in nudging past the dumb schmoes.

We need a theory of jerks. We need such a theory because, first, it can help us achieve a calm, clinical understanding when confronting such a creature in the wild. Imagine the nature-documentary voice-over: ‘Here we see the jerk in his natural environment. Notice how he subtly adjusts his dominance display to the Italian restaurant situation…’ And second – well, I don’t want to say what the second reason is quite yet.
 

From Eric Schwitzgebel over at Aeon. He defines a jerk thus:

I submit that the unifying core, the essence of jerkitude in the moral sense, is this: the jerk culpably fails to appreciate the perspectives of others around him, treating them as tools to be manipulated or idiots to be dealt with rather than as moral and epistemic peers. This failure has both an intellectual dimension and an emotional dimension, and it has these two dimensions on both sides of the relationship. The jerk himself is both intellectually and emotionally defective, and what he defectively fails to appreciate is both the intellectual and emotional perspectives of the people around him. He can’t appreciate how he might be wrong and others right about some matter of fact; and what other people want or value doesn’t register as of interest to him, except derivatively upon his own interests. The bumpkin ignorance captured in the earlier use of ‘jerk’ has changed into a type of moral ignorance.
 

At some point in the technology world, it became fashionable to reject the jerk. Perhaps the first meaningful example was the great Netflix culture deck, with its rejection of the “brilliant jerk.” I don't know if any studies led to this movement or whether it was anecdotal, but somewhere along the line, the prevailing strategy shifted from trying to hire and isolate brilliant jerks to just rejecting them outright, perhaps because of a perceived increase in the need for effective collaboration among team members to ship important work.

Superhot

Superhot is a Kickstarter project for a game in which time moves only when you physically move your character. Here's one fascinating analysis:

In Superhot, it’s not that you can distort time exactly – after all, whenever you take a step, your enemies get the same amount of time to take a step themselves. Instead, your brain is running as fast as it likes while (the rest of) your body remains in the same time stream as everything else.

And then it struck me: this might be close to the experience of an emulated brain housed in a regular-sized body.

Let’s say that, in the future, we artificially replicate/emulate human minds on computers. And let’s put an emulated human mind inside a physical, robotic body. The limits on how fast it can think are its hardware and its programming. As technology and processor speeds improve, the “person” could think faster and faster and would experience the outside world as moving slower and slower in comparison.

… but even though you might have a ridiculously high processing speed to think and analyze a situation, your physical body is still bound by the normal laws of physics. Moving your arms or legs requires moving forward in the same stream of time as everyone else. In order to, say, turn your head to look to your left and gather more information, you need to let time pass for your enemies, too.
 

The article concludes that to make the game more physically realistic, you'd have to place a finite budget of processing power on the character's brain, since being able to think indefinitely while the rest of the world stayed frozen would require a near-infinite amount of processing power.
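To make that scaling concrete, here's a toy calculation with entirely made-up numbers (the speedup factor and times are illustrative, not from the article): an emulated mind's subjective thinking time scales linearly with its processor speedup, so squeezing a fixed amount of deliberation into an ever-smaller slice of wall-clock time demands an ever-larger speedup, and a truly frozen world demands an infinite one.

```python
def subjective_seconds(speedup, wall_seconds):
    """Subjective thinking time an emulated mind experiences while
    `wall_seconds` of real time pass, given a processor `speedup`
    relative to a baseline human brain."""
    return speedup * wall_seconds

# With a hypothetical 10,000x speedup, one real millisecond buys
# 10 subjective seconds of planning.
print(subjective_seconds(10_000, 0.001))  # 10.0

# To fit the same 10 subjective seconds into less and less real time,
# the required speedup grows without bound as wall time approaches
# zero, which is the article's point about a frozen world.
for wall in (0.001, 0.000001, 0.000000001):
    print(10 / wall)
```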

The trailer for Lucy, a new action movie starring Scarlett Johansson, posits what might happen if a person could use 100% of their brain instead of just 10%. But we might need far more brain capacity than that to achieve what is shown in The Matrix or Superhot. Perhaps that is what it feels like to be God-like: to have so much energy and processing power that you can understand all things that are happening before any time has passed.

At the same blog Measure of Doubt (a brother-sister blog about rationality, science, and philosophy, perhaps the first of its kind), a meditation on why it might be hellish to be The Flash.

The game theory dynamics are complex. It seems like to the extent that you’re competing with others, you want to be faster. To the extent that you’re cooperating/collaborating with others, you want them to be faster. And overarching all of it, there’s a coordination factor in that you don’t want to be too different from others.
 

I was at Amazon when Jeff Bezos banned PowerPoint. Some say it was because he didn't like thinking being bent and twisted to fit the tyrannical, atomized slide-by-slide nature of a PowerPoint deck (this is the Tufte argument). But having been in meetings with Jeff, I also think he was so smart that as soon as anyone started presenting a deck, Jeff would just flip ahead and finish reading it before you'd even completed presenting your first slide. He would grow bored and impatient waiting for you to unfold some pre-constructed narrative; he'd rather inhale the ideas and get on to debating them.

Writing our ideas out in prose was the only way to get enough of a head start on Jeff's brain to slow him down to the same speed as everyone else. For those dead silent ten minutes while Jeff sat reading your essay, it was as if you were the Flash and time in the world around you had stopped.

Empire and Post-Empire

In 2011, Bret Easton Ellis made waves with an essay about Charlie Sheen in which he coined the terms empire and post-empire.

The people unable to process Sheen’s honesty can’t do this because it’s so unlike the pre-fab way celebrity presented itself within the Empire. Anyone who has put up with the fake rigors of celebrity (or has addiction problems) has got to find a kindred spirit here. The new fact is: if you’re punching a paparazzo, you now look like an old-school loser. If you can’t accept the fact that we’re at the height of an exhibitionistic display culture and that you’re going to be blindsided by TMZ (and humiliated by Harvey Levin, or Chelsea Handler—princess of post-Empire) walking out of a club on Sunset at 2 in the morning trashed, then you’re basically fucked and you should become a travel agent instead of a movie star. Being publicly mocked is part of the game now and you’re a fool if you don’t play along with it and are still enacting the role of humble, grateful celebrity instead of embracing your fucked-up-ness. Gaga’s little monsters, anyone? Not showing up to collect your award at the Razzies for that piece of shit you made? So Empire. This is why Charlie seems saner and funnier than any other celebrity right now. He also makes better jokes about his situation than most worried editorialists or late-night comedians. A lot of it is sheer bad-boy bravado—just saying shit to see how people react, which is very post-Empire—but a lot of it is transparent, and on that level, Sheen is, um, winning. And I’m not sure being fired from Two and a Half Men and having to wear those horrible rockabilly bowling shirts for another two years is, um, losing…
 

In an interview with Vice earlier this year, Ellis was asked to clarify the distinction between Empire and Post-Empire.

Can you explain this empire and post-empire distinction? Because you refer to it a lot.
Empire is the US from roughly WWII to a little after 9/11. It was at the height of its power, its prestige, and its economic worth. Then it lost a lot of those things. In the face of technology and social media, the mask of pride has been slowly eradicated. That empirical attitude of believing you’re better than everyone—that you’re above everything—and trying to give the impression that you have no problems. Post-empire is just about being yourself. It’s showing the reality rather than obscuring things in reams and reams of meaning.

But can you ever present a "real" version of yourself online?
Well, turning yourself into an avatar, at least, is post-empire. That’s a new kind of mask. It’s more playful than hiding your feelings, presenting your best self, and lying if you have to. Unless, of course, you argue that that’s just a whole new form of empire in itself.
 

Ellis's podcast is one of the more consistently interesting ones out there (listening to it is what reminded me of his empire and post-empire missive) though it's always funny to hear him tout his sponsors like Dollar Shave Club and try to detect even the slightest undercurrent of irony.

Why the internet is all cats and lists

The Allen-Alchian theorem explains why places with high-quality produce (Allen and Alchian had in mind apples in Seattle, which is where apples come from in the US) nevertheless do not always get to consume that same high quality (they pointed to the market for apples in New York city, where no apples grow) because of the relative costs faced by consumers in each case (for New York consumers, a high-quality apple, once you account for transportation costs, was actually relatively cheaper than a low-quality apple compared to relative prices in Seattle). Hence the market sent the high-quality apples to New York.

You’re still with me? It’s all about relative costs. When you move something, or impose any fixed cost, the higher-quality item always wins, because it now has a lower relative cost compared to the lower-quality item.

The interesting idea is that this also applies in reverse – namely when we remove a fixed cost. The internet does this: it removes a cost of transport, and it does so equally for high quality and low quality content. Following the Allen-Alchian theorem, this should mean the opposite. Low-quality items are now relatively cheaper and high-quality items are now relatively more expensive. This idea was first explained by Tyler Cowen, but the upshot is that the internet is made of cats.
 

Intriguing. Combine the Allen-Alchian theorem with the death of homepages and the rise of social networks built on short bits of text like status updates and tweets, and you can probably explain much of why the internet is made up of cats, lists, and linkbait/clickbait.
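The relative-cost arithmetic is easy to check with a quick sketch. The prices below are invented for illustration (the excerpt gives no numbers): say a high-quality apple costs $1.00 at the orchard and a low-quality one $0.50, and shipping adds a fixed $1.00 per apple regardless of quality.

```python
HIGH, LOW = 1.00, 0.50  # hypothetical orchard prices per apple

def relative_price(fixed_cost):
    """Price of one high-quality apple measured in low-quality apples,
    after a per-item fixed cost (e.g. shipping) is added to both."""
    return (HIGH + fixed_cost) / (LOW + fixed_cost)

# In Seattle, with no shipping, one good apple costs two bad apples.
print(relative_price(0.0))              # 2.0

# In New York, after $1.00 shipping, a good apple costs only about
# 1.33 bad apples: quality is now relatively cheaper, so the good
# apples get shipped east.
print(round(relative_price(1.0), 2))    # 1.33
```

Run the theorem in reverse and you get the internet-of-cats result: removing the fixed distribution cost pushes the ratio back from 1.33 toward 2.0, making low-quality content relatively cheap again.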

Of course, we're talking about the average. For those of you with taste, the internet has enabled access to some of the great works of high culture in ways my childhood self couldn't have imagined.