Book recommendations from Atul Gawande

Who is your favorite novelist of all time? And your favorite novelist writing today?

First, I should confess that while I’m an avid reader of fiction, I’m an amateur. I still have swaths to catch up on. But keeping that in mind, my all-time favorite novelist is Leo Tolstoy. He had this extraordinary capacity to see all the forces coming to bear on people at any given moment — desire, family, culture, history, accident — and to somehow bring the relevance of those forces alive without beating you about the head with it.

My favorite writing today: Hilary Mantel. I have zero interest in historical fiction, let alone historical fiction about the Reign of Terror or the court of Henry VIII. But she has that same Tolstoyan ability to make the odd and faraway worlds her characters inhabit feel like they matter to us. I want my writing about our own world to connect with people at least a little bit the way these writers do — meaning both viscerally and intellectually, and with a recognition of all the forces at work.

Who is your favorite doctor character in fiction?

Dr. Watson in the Sherlock Holmes stories. I’ve been hooked on Sherlock Holmes for years — every story is a kind of diagnosis — and Watson’s is the inviting voice of the entire series. He is intelligent, observant and faithful, the way we want all doctors to be. He is also guileless and naïve, where Holmes is neither, and that is his ultimate limitation in each mystery. But his lack of cunning is why we trust him — and why Holmes does, too.

It's no surprise Atul Gawande is as well read as he is. Great writers seem, without exception, to be great readers.

Ethics of fighting Ebola

I can't think of too many people better qualified to break down the ethics of fighting Ebola than Peter Singer.

In this respect, Ebola is – or, rather, was – an example of what is sometimes referred to as the 90/10 rule: 90% of medical research is directed toward illnesses that comprise only 10% of the global burden of disease. The world has known about the deadly nature of the Ebola virus since 1976; but, because its victims were poor, pharmaceutical companies had no incentive to develop a vaccine. Indeed, pharmaceutical companies could expect to earn more from a cure for male baldness.

Government research funds in affluent countries are also disproportionately targeted toward the diseases that kill these countries’ citizens, rather than toward diseases like malaria and diarrhea that are responsible for much greater loss of life.

The most accurate way to judge the efficacy of a vaccine is through a double-blind trial: one group of subjects is given the potential vaccine, the other a placebo, and neither the doctors nor the subjects know who received what. When dealing with a shortage of vaccines and a disease as deadly as Ebola, the usual rules may not apply. That may be okay.
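The random assignment at the heart of such a trial can be sketched in a few lines (a minimal illustration only, not a real trial protocol; the function name and even split are my own choices):

```python
import random

def assign_arms(subject_ids, seed=0):
    """Randomly split trial subjects into vaccine and placebo arms.
    In a double-blind design neither subjects nor clinicians would
    see this assignment; an independent party holds the key."""
    rng = random.Random(seed)
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)  # randomize order so assignment is unbiased
    half = len(shuffled) // 2
    return {"vaccine": shuffled[:half], "placebo": shuffled[half:]}

arms = assign_arms(range(10))
print(arms["vaccine"], arms["placebo"])
```

It is exactly this placebo arm that Singer argues patients could reasonably refuse when the disease kills up to 70% of the infected.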

But, when facing a disease that kills up to 70% of those who are infected, and no accepted treatment yet exists, patients could reasonably refuse consent to a trial in which they might receive a placebo, rather than an experimental treatment that offers some hope of recovery. In such cases, it might be more ethical to monitor carefully the outcomes of different treatment centers now, before experimental treatments become available, and then compare these outcomes with those achieved by the same centers after experimental treatments are introduced. Unlike in a randomized trial, no one would receive a placebo, and it should still be possible to detect which treatments are effective.

Walking vs running a mile: the caloric output

To use these numbers (net calories burned per pound of body weight: .57 for walking a mile, .72 for running a mile), simply multiply your weight in pounds by the appropriate factor. For example, if you weigh 188 lbs, you will burn about 107 calories (188 * .57) when you walk a mile, and about 135 calories (188 * .72) when you run a mile.

As you can see, running a mile burns roughly 26 percent more calories than walking a mile. Minute for minute (or over 30 minutes, an hour, etc.), running burns roughly 2.3 times as many calories as walking.
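The per-mile arithmetic can be captured in a tiny sketch (the .57 and .72 factors come from the article; the function name is my own):

```python
# Net calories burned per pound of body weight over one mile,
# per the article's factors.
WALK_FACTOR = 0.57
RUN_FACTOR = 0.72

def calories_per_mile(weight_lbs, factor):
    """Estimate net calories burned over one mile."""
    return weight_lbs * factor

weight = 188  # the article's example weight, in pounds
walk = calories_per_mile(weight, WALK_FACTOR)  # ~107 calories
run = calories_per_mile(weight, RUN_FACTOR)    # ~135 calories
print(f"Walking: {walk:.0f} cal, running: {run:.0f} cal "
      f"({run / walk - 1:.0%} more per mile)")
```

Note that the per-mile gap (about 26%) is much smaller than the per-minute gap (about 2.3x), simply because running covers the mile faster.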

Okay, now a few caveats. These calculations are all derived from an “average” weight of the subjects; there may be individual variations. Also, age and gender make a difference, though quite a modest one. Your weight is by far the biggest determinant of your calorie burn per mile. When you look at per-minute burn, your pace (your speed) also makes a big difference.

These calculations aren't meant to be precise. They are good approximations, and much more accurate than the old chestnut: You burn 100 calories per mile.

Walking a mile does not burn as many calories as running a mile.

Reading any table of calories burned per minute is sobering: running burns only 11 to 16 calories a minute. The painful truth is that modern society provides many fast and easy ways to add calories but few ways to burn them off quickly.

The age of surplus: we are drowning in both information and food.

Trick to fixing every TV show and movie

From Vulture, Here Is the Simple Mind-Trick That Makes Every Movie and TV Show Seem Better.

It's a gag, and yet, trying to figure out the logical inflection point in every TV show and movie (once you read the article you'll understand what I mean) might be a way of pinpointing when many of those programs went off the rails. Entering some character's fever dream usually marks an end to any narrative suspense.

Serial and White Reporter Privilege

Also in the second episode of Serial, Koenig reads passages from Hae’s diary. Koenig notes, “Her diary, by the way—well I’m not exactly sure what I expected her diary to be like but—it’s such a teenage girl’s diary.” (My emphasis added.) This statement seems to suggest a colorblind ideal: In Koenig’s Baltimore, kids will be kids, regardless of race or background. But I imagine there are many listeners—especially amongst people of color—who pause and ask, “Wait, what did you expect her diary to be like?” or “Why do you feel the need to point out that a Korean teenage girl’s diary is just like a teenage girl’s diary?” and perhaps, most importantly, “Where does your model for ‘such a teenage girl’s diary’ come from?” These are annoying questions, not only to those who would prefer to mute the nuances of race and identity for the sake of a clean, “relatable” narrative, but also for those of us who have to ask them because Koenig is talking about our communities, and, in large part, getting it wrong.

The accumulation of Koenig’s little judgments throughout the show—and there are many more examples—should feel familiar to anyone who has spent much of her life around well-intentioned white people who believe that equality and empathy can only be achieved through a full, but ultimately bankrupt, understanding of one another’s cultures. Who among us (and here, I’m talking to fellow people of color) hasn’t felt that subtle, discomforting burn whenever the very nice white person across the table expresses fascination with every detail about our families that strays outside of the expected narrative? Who hasn’t said a word like “parameters” and watched, with grim annoyance, as it turns into “immigrant parents?” These are usually silent, cringing moments – it never quite feels worth it to call out the offender because you’ll never convince them that their intentions might not be as good as they think they are.

Koenig does ultimately address Syed’s Muslim faith in Serial, but only to debunk the state’s claim that Syed’s murderous rage came out of cultural factors. The discussion feels remarkably perfunctory—Koenig quickly dispenses with Syed’s race and religion. She seems to want Syed and Lee, by way of her diary, to be, in the words of Ira Glass, “relatable,” which, sadly, in this case, reads “white.” As a result, Chaudry believes Koenig has left out an essential part of Syed’s story—that his arrest, his indictment and his conviction were all influenced by his faith and the color of his skin. “You have an urban jury in Baltimore city, mostly African American, maybe people who identify with Jay [an African-American friend of Syed's who is the state’s seemingly unreliable star witness] more than Adnan, who is represented by a community in headscarves and men in beards,” Chaudry said. “The visuals of the courtroom itself leaves an impression and there’s no escaping the racial implications there.”

I found myself nodding as I read Jay Caspian Kang's excellent piece on the new hit podcast Serial.

The dancing around race throughout Serial has been the most glaring and particular choice in the series. I'm enjoying Serial (it has us all questioning why no one ported the serial genre to the podcasting medium earlier), but the more episodes I hear, the greater my frustration with having my attention in the case narrowly focused by Sarah Koenig's worldview, and the more I just want to throw myself into the Serial subreddit, spoilers be damned, and start hearing from a more diverse group of detectives.

And I did, just for a bit. Rabia Chaudry, the civil rights attorney who originally reached out to Koenig to see if she might be interested in the case, posted a link there to a piece she just wrote about episode 8, Confirmation Bias FTW:

Raise your hand if you were surprised by what Jay had to say in this week’s episode. No one better have their hand raised. If you thought for an instant that “Mr. Your-Plea-Deal-Is-Good-Unless-You-Change-Your-Story” was going to do another “ok I come clean” when two random women show up at his door, I’ve got a bridge and a mid-east peace plan to sell you. You may have been surprised, however, with how Jay was described. Or you may have been confused. His is a catalog of contradictory personality traits, from goofy to mean, from animal lover to rat-eating-frog enthusiast (sorry, you kind of can’t be both – Google that ish and you’ll see what I mean). Unlike Adnan, who has overwhelmingly been described in similar terms by most people who know him, Jay poses a challenge to us. Other than being identified as the odd guy out, there was little similarity between what people had to say about him. What to make of his conflicted, yet beautiful, unconventionality? Nothing. That’s right. You make nothing of it. Because at this point if you really think you can assess the truth and reality of who a person is through a superficial, carefully edited and crafted, partial but maybe not impartial treatment of his (or any) character in a production, then you will forever be lost in crazy-making cognitive mazes. Is it too much of a stretch to say unless you know someone personally, you can’t really know them? You can’t. Trust me on this. Listeners will never be able to figure out whether Adnan is a sociopath or a nice guy, Jay is a psychopath or a victim, or Sarah is a bewildered glutton for punishment or a master weaver of addictive narrative (come on now). So let’s stop pretending we can psychoanalyze the depths of the souls of these people through 30-40 minute podcasts. If you still think you’re just special that way, I recommend you watch the documentaries “Paradise Lost”, “Paradise Lost 2: Revelations”, and “West of Memphis” and get back to me.
A TL;DR of that experience is that you, as the consumer of a show, are at the mercy of the storytellers, second and third hand narrators, and incomplete profiles of people. The only thing you can do in such a situation is try and pin down what you can, make an assessment with a sack of salt, and then forget that assessment the minute a new tidbit of information is revealed.

Like many other listeners to Serial, I've been bracing myself for the possibility of an open-ended conclusion, one in which we never learn whether Adnan was really guilty or not. Even if we find out he's innocent, maybe we never learn who the actual murderer was.

But perhaps we're obsessing too much over the details of one particular case, one which may be unsolvable with the facts at hand. The greater legacy of the podcast may be the exposure of the insidious ubiquity of confirmation bias, nesting in on itself recursively so that it's almost impossible for us to trace back to the origin.

Mathematics of why hipsters all dress the same

Here comes the crucial twist. In all of the examples so far, we assumed that everyone had instant knowledge of what everyone else was wearing. People knew exactly what the mainstream trend was. But in reality, there are always delays. It takes time for a signal to propagate across a brain; likewise it takes time for hipsters to read Complex or Pitchfork or whatever in order to figure out how to be contrarian.

So Touboul introduced a delay into the model. People would base their decisions not on the current state of affairs, but on the state of affairs some number of turns prior.

What Touboul noticed is that if you increase the delay factor past a certain point, something amazing happens. Out of what appears to be random noise, a pattern emerges. All of the hipsters start to synchronize, and they start to oscillate in unison.

This mathematician must be an early frontrunner for the Nobel Prize.

In all seriousness, though, the model has a certain explanatory elegance, akin to Schelling's Segregation Model.
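A toy version of the delayed anti-conformity dynamic can be sketched as follows (this is my drastic simplification for illustration, not Touboul's actual equations; here every agent is a pure contrarian reacting to a stale majority signal):

```python
import random

def simulate_hipsters(n_agents=500, n_steps=60, delay=5, seed=42):
    """Toy delayed anti-conformity model. Each agent flips to oppose
    whatever the majority was doing `delay` steps ago. Returns the
    full history of agent states (+1 or -1) at each step."""
    rng = random.Random(seed)
    # Start from random, uncoordinated choices.
    history = [[rng.choice([1, -1]) for _ in range(n_agents)]]
    for t in range(n_steps):
        lagged = history[max(0, t - delay)]      # stale information
        majority = 1 if sum(lagged) >= 0 else -1
        # Every contrarian goes against the (outdated) majority.
        history.append([-majority] * n_agents)
    return history

history = simulate_hipsters()
fractions = [sum(1 for s in step if s == 1) / len(step) for step in history]
# After the random start, the crowd flips in unison between extremes.
print(fractions[:14])
```

Even this crude version shows the flavor of the result: because everyone reacts to the same outdated signal, the contrarians end up perfectly synchronized, oscillating between the two styles in lockstep.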

Human-level AI always 25 years away

Rodney Brooks says recent fears about malevolent, superintelligent AI are not warranted.

I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.  Recent advances in deep machine learning let us teach our machines things like how to distinguish classes of inputs and to fit curves to time data.  This lets our machines “know” whether an image is that of a cat or not, or to “know” what is about to fail as the temperature increases in a particular sensor inside a jet engine.  But this is only part of being intelligent, and Moore’s Law applied to this very real technical advance will not by itself bring about human level or super human level intelligence.  While deep learning may come up with a category of things appearing in videos that correlates with cats, it doesn’t help very much at all in “knowing” what catness is, as distinct from dogness, nor that those concepts are much more similar to each other than to salamanderness.  And deep learning does not help in giving a machine “intent”, or any overarching goals or “wants”.  And it doesn’t help a machine explain how it is that it “knows” something, or what the implications of the knowledge are, or when that knowledge might be applicable, or counterfactually what would be the consequences of that knowledge being false.  Malevolent AI would need all these capabilities, and then some.  Both an intent to do something and an understanding of human goals, motivations, and behaviors would be keys to being evil towards humans.

...

And, there is a further category error that we may be making here.  That is the intellectual shortcut that says computation and brains are the same thing.  Maybe, but perhaps not.

In the 1930s Turing was inspired by how “human computers”, the people who did computations for physicists and ballistics experts alike, followed simple sets of rules while calculating, to produce the first models of abstract computation. In the 1940s McCulloch and Pitts at MIT used what was known about neurons and their axons and dendrites to come up with models of how computation could be implemented in hardware, with very, very abstract models of those neurons. Brains were the metaphors used to figure out how to do computation. Over the last 65 years those models have now gotten flipped around and people use computers as the metaphor for brains. So much so that enormous resources are being devoted to “whole brain simulations”. I say show me a simulation of the brain of a simple worm that produces all its behaviors, and then I might start to believe that jumping to the big kahuna of simulating the cerebral cortex of a human has any chance at all of being successful in the next 50 years. And then only if we are extremely lucky.

In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, it would need to have the dexterous hands and/or other tools that could out manipulate people, and to have a deep understanding of humans in order to outwit them. Each of these requires much harder innovations than a winged vehicle landing on a tree branch.  It is going to take a lot of deep thought and hard work from thousands of scientists and engineers.  And, most likely, centuries.

As Brooks notes, a study of 95 predictions made since 1950 about when human-level AI will be achieved found that nearly all place its arrival 15 to 25 years in the future. A quarter century seems to be humans' default time frame for predicting technological advances that are plausible but not imminent.