10 browser tabs

1. Love in the Time of Robots

“Is it difficult to play with her?” the father asks. His daughter looks to him, then back at the android. Its mouth begins to open and close slightly, like a dying fish. He laughs. “Is she eating something?”
 
The girl does not respond. She is patient and obedient and listens closely. But something inside is telling her to resist. 
 
“Do you feel strange?” her father asks. Even he must admit that the robot is not entirely believable.
 
Eventually, after a few long minutes, the girl’s breathing grows heavier, and she announces, “I am so tired.” Then she bursts into tears.
 
That night, in a house in the suburbs, her father uploads the footage to his laptop for posterity. His name is Hiroshi Ishiguro, and he believes this is the first record of a modern-day android.
 

Reads like the treatment for a science fiction film, some mashup of Frankenstein, Pygmalion, and Narcissus. It's one incredible moment after another; I'll grab just a few excerpts, but the whole thing is worth reading.

But he now wants something more. Twice he has witnessed others have the opportunity, however confusing, to encounter their robot self, and he covets that experience. Besides, his daughter was too young, and the newscaster, though an adult, was, in his words, merely an “ordinary” person: Neither was able to analyze their android encounter like a trained scientist. A true researcher should have his own double. Flashing back to his previous life as a painter, Ishiguro thinks: This will be another form of self-portrait. He gives the project his initials: Geminoid HI. His mechanical twin.
 

Warren Ellis, in a recent commencement speech delivered at the University of Essex, said:

Nobody predicted how weird it’s gotten out here.  And I’m a science fiction writer telling you that.  And the other science fiction writers feel the same.  I know some people who specialized in near-future science fiction who’ve just thrown their hands up and gone off to write stories about dragons because nobody can keep up with how quickly everything’s going insane.  It’s always going to feel like being thrown in the deep end, but it’s not always this deep, and I’m sorry for that.
 

The thing is, far-future sci-fi is likely to be even more off base now, given how humans are evolving in lockstep with the technology around them. So we need more near-future sci-fi, of a variety smarter than Black Mirror, to grapple with the implications.

Soon his students begin comparing him to the Geminoid—“Oh, professor, you are getting old,” they tease—and Ishiguro finds little humor in it. A few years later, at 46, he has another cast of his face made, to reflect his aging, producing a second version of HI. But to repeat this process every few years would be costly and hard on his vanity. Instead, Ishiguro embraces the logical alternative: to alter his human form to match that of his copy. He opts for a range of cosmetic procedures—laser treatments and the injection of his own blood cells into his face. He also begins watching his diet and lifting weights; he loses about 20 pounds. “I decided not to get old anymore,” says Ishiguro, whose English is excellent but syntactically imperfect. “Always I am getting younger.”
 
Remaining twinned with his creation has become a compulsion. “Android has my identity,” he says. “I need to be identical with my android, otherwise I’m going to lose my identity.” I think back to another photo of his first double’s construction: Its robot skull, exposed, is a sickly yellow plastic shell with openings for glassy teeth and eyeballs. When I ask what he was thinking as he watched this replica of his own head being assembled, Ishiguro says, perhaps only half-joking, “I thought I might have this kind of skull if I removed my face.”
 
Now he points at me. “Why are you coming here? Because I have created my copy. The work is important; android is important. But you are not interested in myself.”
 

This should be a science fiction film, only I'm not sure who our great science fiction director is; the best candidates may be too old to look upon such a story as anything other than grotesque and horrific.

2. Something is wrong on the internet by James Bridle

Of course, some of what's on the internet really is grotesque and horrific. 

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level. 
 

Given how much my nieces love watching product unwrapping and Peppa Pig videos on YouTube, this story induced a sense of dread I haven't felt since the last good horror film I watched, which I can't remember anymore since the world has run a DDoS on my emotions.

We often think of a market operating at peak efficiency as sending information back and forth between supply and demand, allowing the creation of goods that satisfy both parties. In the tech industry, the wink-wink version of that is saying that pornography leads the market for any new technology, solving, as it does, the two problems the internet is said to solve better, at scale, than any medium before it: loneliness and boredom.

Bridle's piece, however, finds the dark cul-de-sacs and infected runaway processes which have branched out from the massive marketplace that is YouTube. I decided to follow a Peppa Pig video on the service and started tapping on Related Videos, as I imagine one of my nieces might, and quickly wandered into a dark alleyway where I found videos I would not want any of them watching. As Bridle did, I won't link to what I found; suffice to say it won't take you long to stumble on some of it if you want to, or perhaps even if you don't.

What's particularly disturbing is the somewhat bizarre, inexplicably grotesque nature of some of these video remixes. David Cronenberg is known for his body horror films; these YouTube videos are like some perverse variant of that, playing with popular children's iconography.

Facebook and now Twitter are taking heat for disseminating fake news, and that is certainly a problem worth debating, but with that problem we're talking about adults. Children don't have the capacity to comprehend what they're seeing, and given my belief in the greater effect of sight, sound, and motion, I am even more disturbed by this phenomenon.

A system where hosting videos for a global audience is free, where this type of trademark infringement weaponizes brand signifiers with seeming impunity, and where content production and remixing scale ever more cheaply with technology, allows for a type of scalable problem we haven't seen before.

The internet has enabled all types of wonderful things at scale; we should not be surprised that it would foster the opposite. But we can, and should, be shocked.

3. FDA approves first blood sugar monitor without finger pricks

This is exciting. One view that seems to be common wisdom these days when it comes to health is that it's easier to lose weight and improve your health through diet than through exercise. But one of the problems with the feedback loop in dieting (and exercise, actually) is how slow it is. You sneak a few snacks here and there walking by the company cafeteria every day, and a month later you hop on the scale and emit a bloodcurdling scream as you realize you've gained 8 pounds.

A friend of mine had gestational diabetes during one of her pregnancies and got a home blood glucose monitor. You had to prick your finger and draw blood to get your blood glucose reading, but curious, I tried it before and after a BBQ.

To see what various foods did to my blood sugar in near real time was a real eye-opener. Imagine a future where you could see what a few french fries and gummy bears did to your blood sugar, or where the reading could be built into something like an Apple Watch, without having to draw blood each time. I don't mind the sight of blood, but I'd prefer not to turn my fingertips into war zones.

Faster feedback might transform dieting into something more akin to deliberate practice. Given that another popular theory of obesity is that it's an insulin phenomenon, tools like this, built for diabetes, might have broad mass-market impact.

4. Ingestible ketones

Ingestible ketones have been a sort of holy grail for endurance athletes of late, and now HVMN is bringing one to market. Ketogenic diets are all the rage right now, but for an endurance athlete, adapting to fuel oneself on ketones has always sounded like a long and miserable process.

The body generates ketones from fat when low on carbs or from fasting. The theory is that endurance athletes using ketones rather than glycogen from carbs require less oxygen and thus can work out longer.

I first heard about the possibility of exogenous ketones for athletes from Peter Attia. As he said then, perhaps the hardest thing about ingesting exogenous ketones is the horrible taste, which caused him to gag and nearly vomit in his kitchen. It doesn't sound like the taste problem has been solved.

Until we get the pill that renders exercise obsolete, however, I'm curious to give this a try. If you decide to pre-order, you can use my referral code to get $15 off.

5. We Are Nowhere Close to the Limits of Athletic Performance

By comparison, the potential improvements achievable by doping effort are relatively modest. In weightlifting, for example, Mike Israetel, a professor of exercise science at Temple University, has estimated that doping increases weightlifting scores by about 5 to 10 percent. Compare that to the progression in world record bench press weights: 361 pounds in 1898, 363 pounds in 1916, 500 pounds in 1953, 600 pounds in 1967, 667 pounds in 1984, and 730 pounds in 2015. Doping is enough to win any given competition, but it does not stand up against the long-term trend of improving performance that is driven, in part, by genetic outliers. As the population base of weightlifting competitors has increased, outliers further and further out on the tail of the distribution have appeared, driving up world records.
 
Similarly, Lance Armstrong’s drug-fuelled victory of the 1999 Tour de France gave him a margin of victory over second-place finisher Alex Zulle of 7 minutes, 37 seconds, or about 0.1 percent. That pales in comparison to the dramatic secular increase in speeds the Tour has seen over the past half century: Eddy Merckx won the 1971 tour, which was about the same distance as the 1999 tour, in a time 5 percent worse than Zulle’s. Certainly, some of this improvement is due to training methods and better equipment. But much of it is simply due to the sport’s ability to find competitors of ever more exceptional natural ability, further and further out along the tail of what’s possible.
 

In the Olympics, to take the most celebrated athletic competition, victors are celebrated with videos showing them swimming laps, tossing logs in a Siberian tundra, running through a Kenyan desert. We celebrate the work, the training. Good genes are given narrative short shrift. Perhaps we should show a picture of their DNA, just to give credit where much credit is due?

If I live a normal human lifespan, I expect to see special sports leagues and divisions created for athletes who've undergone genetic modification. It will be the return of the freak show at the circus, but this time for real. I've sat courtside and seen people like LeBron James, Giannis Antetokounmpo, Kevin Durant, and Joel Embiid walk by me. They are freaks, but genetic engineering might produce someone who stretches our definition of outlier.

In other words, it is highly unlikely that we have come anywhere close to maximum performance among all the 100 billion humans who have ever lived. (A completely random search process might require the production of something like a googol different individuals!)
 
But we should be able to accelerate this search greatly through engineering. After all, the agricultural breeding of animals like chickens and cows, which is a kind of directed selection, has easily produced animals that would have been one in a billion among the wild population. Selective breeding of corn plants for oil content of kernels has moved the population by 30 standard deviations in roughly just 100 generations. That feat is comparable to finding a maximal human type for a specific athletic event. But direct editing techniques like CRISPR could get us there even faster, producing Bolts beyond Bolt and Shaqs beyond Shaq.
 

6. Let's set half a percent as the standard for statistical significance

My many-times-over coauthor Dan Benjamin is the lead author on a very interesting short paper "Redefine Statistical Significance." He gathered luminaries from many disciplines to jointly advocate a tightening of the standards for using the words "statistically significant" to results that have less than a half a percent probability of occurring by chance when nothing is really there, rather than all results that—on their face—have less than a 5% probability of occurring by chance. Results with more than a 1/2% probability of occurring by chance could only be called "statistically suggestive" at most. 
 
In my view, this is a marvelous idea. It could (a) help enormously and (b) can really happen. It can really happen because it is at heart a linguistic rule. Even if rigorously enforced, it just means that editors would force people in papers to say "statistically suggestive" for a p of a little less than .05, and only allow the phrase "statistically significant" in a paper if the p value is .005 or less. As a well-defined policy, it is nothing more than that. Everything else is general equilibrium effects.
 

Given the replication crisis has me doubting almost every piece of conventional wisdom I've inherited in my life, I'm okay with this.
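
To make the proposed rule concrete, here is a minimal sketch (my own, not from the paper) of the relabeling it implies:

```python
# A minimal sketch (my own, not from Benjamin et al.) of the proposed relabeling:
# p < 0.005 -> "statistically significant"; 0.005 <= p < 0.05 -> "statistically suggestive".

def label_result(p_value: float) -> str:
    """Label a p-value under the proposed 0.005 threshold."""
    if p_value < 0.005:
        return "statistically significant"
    if p_value < 0.05:
        return "statistically suggestive"
    return "not statistically significant"

for p in (0.001, 0.02, 0.2):
    print(f"p = {p}: {label_result(p)}")
```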

7. We're surprisingly unaware of when our own beliefs change

If you read an article about a controversial issue, do you think you’d realise if it had changed your beliefs? No one knows your own mind like you do – it seems obvious that you would know if your beliefs had shifted. And yet a new paper in The Quarterly Journal of Experimental Psychology suggests that we actually have very poor “metacognitive awareness” of our own belief change, meaning that we will tend to underestimate how much we’ve been swayed by a convincing article.
 
The researchers Michael Wolfe and Todd Williams at Grand Valley State University said their findings could have implications for the public communication of science. “People may be less willing to meaningfully consider belief inconsistent material if they feel that their beliefs are unlikely to change as a consequence,” they wrote.
 

Beyond being an interesting result, I link to this as an example of a human-readable summary of a research paper. This is how the article summarizes the research study and its results:

The researchers recruited over two hundred undergrads across two studies and focused on their beliefs about whether the spanking/smacking of kids is an effective form of discipline. The researchers chose this topic deliberately in the hope the students would be mostly unaware of the relevant research literature, and that they would express a varied range of relatively uncommitted initial beliefs.
 
The students reported their initial beliefs about whether spanking is an effective way to discipline a child on a scale from “1” completely disbelieve to “9” completely believe. Several weeks later they were given one of two research-based texts to read: each was several pages long and either presented the arguments and data in favour of spanking or against spanking. After this, the students answered some questions to test their comprehension and memory of the text (these measures varied across the two studies). Then the students again scored their belief in whether spanking is effective or not (using the same 9-point scale as before). Finally, the researchers asked them to recall what their belief had been at the start of the study.
 
The students’ belief about spanking changed when they read a text that argued against their own initial position. Crucially, their memory of their initial belief was shifted in the direction of their new belief – in fact, their memory was closer to their current belief than their original belief. The more their belief had changed, the larger this memory bias tended to be, suggesting the students were relying on their current belief to deduce their initial belief. The memory bias was unrelated to the measures of how well they’d understood or recalled the text, suggesting these factors didn’t play a role in memory of initial belief or awareness of belief change.
 

Compare the summary above to the abstract of the paper itself:

When people change beliefs as a result of reading a text, are they aware of these changes? This question was examined for beliefs about spanking as an effective means of discipline. In two experiments, subjects reported beliefs about spanking effectiveness during a prescreening session. In a subsequent experimental session, subjects read a one-sided text that advocated a belief consistent or inconsistent position on the topic. After reading, subjects reported their current beliefs and attempted to recollect their initial beliefs. Subjects reading a belief inconsistent text were more likely to change their beliefs than those who read a belief consistent text. Recollections of initial beliefs tended to be biased in the direction of subjects’ current beliefs. In addition, the relationship between the belief consistency of the text read and accuracy of belief recollections was mediated by belief change. This belief memory bias was independent of on-line text processing and comprehension measures, and indicates poor metacognitive awareness of belief change.
 

That's actually one of the better research abstracts you'll read and still it reflects the general opacity of the average research abstract. I'd argue that some of the most important knowledge in the world is locked behind abstruse abstracts.

Why do researchers write this way? Most tell me that researchers write for other researchers, and incomprehensible prose like this impresses their peers. What a tragedy. As my longtime readers know, I'm a firm believer in the power of the form of a message. We continue to underrate that in all aspects of life, from the corporate world to our personal lives, and here, in academia.

Then again, such poor writing keeps people like Malcolm Gladwell busy transforming such insight into breezy reads in The New Yorker and his bestselling books.

8. Social disappointment explains chimpanzees' behaviour in the inequity aversion task

As an example of the above phenomenon, this paper contains an interesting conclusion, but try to parse this abstract:

Chimpanzees’ refusal of less-preferred food when an experimenter has previously provided preferred food to a conspecific has been taken as evidence for a sense of fairness. Here, we present a novel hypothesis—the social disappointment hypothesis—according to which food refusals express chimpanzees' disappointment in the human experimenter for not rewarding them as well as they could have. We tested this hypothesis using a two-by-two design in which food was either distributed by an experimenter or a machine and with a partner present or absent. We found that chimpanzees were more likely to reject food when it was distributed by an experimenter rather than by a machine and that they were not more likely to do so when a partner was present. These results suggest that chimpanzees’ refusal of less-preferred food stems from social disappointment in the experimenter and not from a sense of fairness.
 

Your average grade school English teacher would slap a failing grade on this butchery of the English language.

9. Metacompetition: Competing Over the Game to be Played

When CDMA-based technologies took off in the US, companies like Qualcomm that work on that standard prospered; metacompetitions between standards decide the fates of the firms that adopt (or reject) those standards.

When an oil spill raises concerns about the environment, consumers favor businesses with good environmental records; metacompetitions between beliefs determine the criteria we use to evaluate whether a firm is “good.”

If a particular organic foods certification becomes important to consumers, companies with that certification are favored; metacompetitions between certifications determine how the quality of firms is measured.
 
In all these examples, you could be the very best at what you do, but lose in the metacompetition over what criteria will matter. On the other hand, you may win due to a metacompetition that protects you from fierce rivals who play a different game.
 
Great leaders pay attention to metacompetition. They advocate the game they play well, promoting criteria on which they measure up. By contrast, many failed leaders work hard at being the best at what they do, only to throw up their hands in dismay when they are not even allowed to compete. These losers cannot understand why they lost, but they have neglected a fundamental responsibility of leadership. It is not enough to play your game well. In every market in every country, alternative “logics” vie for prominence. Before you can win in competition, you must first win the metacompetition over the game being played.
 

In sports negotiations between owners and players, the owners almost always win the metacompetition game. In the Hollywood writers' strike of 2007, the Writers Guild didn't realize it was losing the metacompetition and thus ended up worse off than before. Amazon surpassed eBay by winning the retail metacompetition (most consumers prefer paying a good, fixed price for an item of predefined quality to dealing with the multiple axes of complexity of an auction) after first failing to tackle eBay on its direct turf of auctions.

Winning the metacompetition means first being aware of what it is. It's not so easy in a space like, say, social networking, where even some of the winners don't understand what game they're playing.

10. How to be a Stoic

Much of Epictetus’ advice is about not getting angry at slaves. At first, I thought I could skip those parts. But I soon realized that I had the same self-recriminatory and illogical thoughts in my interactions with small-business owners and service professionals. When a cabdriver lied about a route, or a shopkeeper shortchanged me, I felt that it was my fault, for speaking Turkish with an accent, or for being part of an élite. And, if I pretended not to notice these slights, wasn’t I proving that I really was a disengaged, privileged oppressor? Epictetus shook me from these thoughts with this simple exercise: “Starting with things of little value—a bit of spilled oil, a little stolen wine—repeat to yourself: ‘For such a small price, I buy tranquillity.’ ”
 
Born nearly two thousand years before Darwin and Freud, Epictetus seems to have anticipated a way out of their prisons. The sense of doom and delight that is programmed into the human body? It can be overridden by the mind. The eternal war between subconscious desires and the demands of civilization? It can be won. In the nineteen-fifties, the American psychotherapist Albert Ellis came up with an early form of cognitive-behavioral therapy, based largely on Epictetus’ claim that “it is not events that disturb people, it is their judgments concerning them.” If you practice Stoic philosophy long enough, Epictetus says, you stop being mistaken about what’s good even in your dreams.
 

The trendiness of stoicism has been around for quite some time now. I found this tab left over from 2016, and I'm sure Tim Ferriss was espousing it long before then, not to mention the enduring trend that is Buddhism. That meditation and stoicism are so popular in Silicon Valley may be a measure of the complacency of the region; these seem like direct antidotes to the most first-world of problems. People everywhere complain of the stresses on their minds from the deluge of information they receive for free from apps on smartphones with processing power that would put earlier supercomputers to shame.

Still, given that stoicism was in vogue in Roman times, it seems to have stood the test of time. Since social media seems to have increased the surface area of our social fabric and our exposure to said fabric, perhaps we could all use a bit more stoicism in our lives. I suspect one reason Curb Your Enthusiasm curdles in the mouth more than before is not just that Larry David's rich white man's complaints seem particularly ill-timed in the current environment but that he is out of touch with the real nature of most people's psychological stressors now. A guy of his age and wealth probably doesn't spend much time on social media, but if he did, he might realize his grievances no longer match those of the average person in either pettiness or peculiarity.

Chasm of comprehension

Last year, Google's AlphaGo AI beat the Korean master Lee Sedol at Go, a game many expected humans to continue to dominate for years, if not decades, to come.

With the 37th move in the match’s second game, AlphaGo landed a surprise on the right-hand side of the 19-by-19 board that flummoxed even the world’s best Go players, including Lee Sedol. “That’s a very strange move,” said one commentator, himself a nine dan Go player, the highest rank there is. “I thought it was a mistake,” said the other. Lee Sedol, after leaving the match room, took nearly fifteen minutes to formulate a response. Fan Hui—the three-time European Go champion who played AlphaGo during a closed-door match in October, losing five games to none—reacted with incredulity. But then, drawing on his experience with AlphaGo—he has played the machine time and again in the five months since October—Fan Hui saw the beauty in this rather unusual move.
 
Indeed, the move turned the course of the game. AlphaGo went on to win Game Two, and at the post-game press conference, Lee Sedol was in shock. “Yesterday, I was surprised,” he said through an interpreter, referring to his loss in Game One. “But today I am speechless. If you look at the way the game was played, I admit, it was a very clear loss on my part. From the very beginning of the game, there was not a moment in time when I felt that I was leading.”
 

The first time Garry Kasparov sensed a deep intelligence in Deep Blue, he described the computer's move as a very human one:

I GOT MY FIRST GLIMPSE OF ARTIFICIAL INTELLIGENCE ON Feb. 10, 1996, at 4:45 p.m. EST, when in the first game of my match with Deep Blue, the computer nudged a pawn forward to a square where it could easily be captured. It was a wonderful and extremely human move. If I had been playing White, I might have offered this pawn sacrifice. It fractured Black's pawn structure and opened up the board. Although there did not appear to be a forced line of play that would allow recovery of the pawn, my instincts told me that with so many "loose" Black pawns and a somewhat exposed Black king, White could probably recover the material, with a better overall position to boot.
 
But a computer, I thought, would never make such a move. A computer can't "see" the long-term consequences of structural changes in the position or understand how changes in pawn formations may be good or bad.
 
Humans do this sort of thing all the time. But computers generally calculate each line of play so far as possible within the time allotted. Because chess is a game of virtually limitless possibilities, even a beast like Deep Blue, which can look at more than 100 million positions a second, can go only so deep. When computers reach that point, they evaluate the various resulting positions and select the move leading to the best one. And because computers' primary way of evaluating chess positions is by measuring material superiority, they are notoriously materialistic. If they "understood" the game, they might act differently, but they don't understand.
 
So I was stunned by this pawn sacrifice. What could it mean? I had played a lot of computers but had never experienced anything like this. I could feel--I could smell--a new kind of intelligence across the table. While I played through the rest of the game as best I could, I was lost; it played beautiful, flawless chess the rest of the way and won easily.
 

Later, in the Kasparov-Deep Blue rematch that IBM's computer won, a move in the second game was again pivotal. There is debate over whether the move was a mistake or intentional on the part of the computer, but it flummoxed Kasparov (italics mine):

''I was not in the mood of playing at all,'' he said, adding that after Game 5 on Saturday, he had become so dispirited that he felt the match was already over. Asked why, he said: ''I'm a human being. When I see something that is well beyond my understanding, I'm afraid.''
 
...
 

At the news conference after the game, a dark-eyed and brooding champion said that his problems began after the second game, won by Deep Blue after Mr. Kasparov had resigned what was eventually shown to be a drawn position. Mr. Kasparov said he had missed the draw because the computer had played so brilliantly that he thought it would have obviated the possibility of the draw known as perpetual check.

''I do not understand how the most powerful chess machine in the world could not see simple perpetual check,'' he said. He added he was frustrated by I.B.M.'s resistance to allowing him to see the printouts of the computer's thought processes so he could understand how it made its decisions, and implied again that there was some untoward behavior by the Deep Blue team.

Asked if he was accusing I.B.M. of cheating, he said: ''I have no idea what's happening behind the curtain. Maybe it was an outstanding accomplishment by the computer. But I don't think this machine is unbeatable.''

Mr. Kasparov, who defeated a predecessor of Deep Blue a year ago, won the first game of this year's match, but it was his last triumph, a signal that the computer's pattern of thought had eluded him. He couldn't figure out what its weaknesses were, or if he did, how to exploit them.

Legend has it that a move in Game One and another in Game Two were actually just programming glitches that caused Deep Blue to make random moves that threw Kasparov off, but regardless, the theme is the same: at some point he no longer understood what the program was doing. He no longer had a working mental model, like material advantage, for his computer opponent.

This year, a new version of AlphaGo was unleashed on the world: AlphaGo Zero.

As many will remember, AlphaGo—a program that used machine learning to master Go—decimated world champion Ke Jie earlier this year. Then, the program’s creators at Google’s DeepMind let the program continue to train by playing millions of games against itself. In a paper published in Nature earlier this week, DeepMind revealed that a new version of AlphaGo (which they christened AlphaGo Zero) picked up Go from scratch, without studying any human games at all. AlphaGo Zero took a mere three days to reach the point where it was pitted against an older version of itself and won 100 games to zero.
 

(source)

That AlphaGo Zero had nothing to learn from playing the world's best humans, and that it trounced its artificial parent 100-0, is evolutionary velocity of a majesty not seen since the xenomorphs in the Alien movie franchise. It is also, in its arrogance, terrifying.

DeepMind released 55 games that a previous version of AlphaGo played against itself for Go players around the world to analyze.

Since May, experts have been painstakingly analyzing the 55 machine-versus-machine games. And their descriptions of AlphaGo’s moves often seem to keep circling back to the same several words: Amazing. Strange. Alien.
 
“They’re how I imagine games from far in the future,” Shi Yue, a top Go player from China, has told the press. A Go enthusiast named Jonathan Hop who’s been reviewing the games on YouTube calls the AlphaGo-versus-AlphaGo face-offs “Go from an alternate dimension.” From all accounts, one gets the sense that an alien civilization has dropped a cryptic guidebook in our midst: a manual that’s brilliant—or at least, the parts of it we can understand.
 
[...]
 

Some moves AlphaGo likes to make against its clone are downright incomprehensible, even to the world’s best players. (These tend to happen early on in the games—probably because that phase is already mysterious, being farthest away from any final game outcome.) One opening move in Game One has many players stumped. Says Redmond, “I think a natural reaction (and the reaction I’m mostly seeing) is that they just sort of give up, and sort of throw their hands up in the opening. Because it’s so hard to try to attach a story about what AlphaGo is doing. You have to be ready to deny a lot of the things that we’ve believed and that have worked for us.”

 

Like others, Redmond notes that the games somehow feel “alien.” “There’s some inhuman element in the way AlphaGo plays,” he says, “which makes it very difficult for us to just even sort of get into the game.”
 

Ke Jie, the Chinese Go master who was defeated by AlphaGo earlier this year, said:

“Last year, it was still quite humanlike when it played. But this year, it became like a god of Go.”
 

After his defeat, Ke posted what might be the most poetic and bracing quote of 2017 on Weibo (I first saw it in the WSJ):

“I would go as far as to say not a single human has touched the edge of the truth of Go.”
 

***

When Josh Brown died in his Tesla after driving under a semi, it kicked off a months-long investigation into who was at fault. Ultimately, the NHTSA absolved Autopilot of blame. The driver was said to have had 7 seconds to see the semi and apply the brakes but was suspected of watching a movie while the car was in Autopilot.

In this instance, it appeared enough evidence could be gathered to make such a determination. In the future, diagnosing why Autopilot or other self-driving algorithms made certain choices will likely only become more and more challenging as the algorithms rise in complexity.

At times, when I have my Tesla in Autopilot mode, the car will do something bizarre and I'll take over. For example, if I drive to work out of San Francisco, I have to exit left and merge onto the 101 using a ramp that arcs to the left almost 90 degrees. There are two lanes on that ramp, but even if I start in the far left lane and am following a car in front of me, my car always seems to try to slide over to the right lane.

Why does it do that? My only mental model is the one I know, which is my own method for driving. I look at the road, look for lane markings and other cars, and turn a steering wheel to stay in a safe zone in my lane. But thinking that my car drives using that exact process says more about my limited imagination than anything else because Autopilot doesn't drive the way humans do. This becomes evident when you look at videos showing how a self-driving car "sees" the road.

When I worked at Flipboard, we moved to a home feed that tried to select articles for users based on machine learning. That algorithm continued to be tweaked and evolved over time, trying to optimize for engagement. Some of that tweaking was done by humans, but a lot of it was done by ML.

At times, people would ask why a certain article had been selected for them. Was it because they had once read a piece on astronomy? Dwelled for a few seconds on a headline about NASA? By that point, the algorithm was so complex that it was impossible to offer an explanation that made intuitive sense to a human; there were so many features and interactions in play.
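
To give a sense of why those explanations get so hard, here is a toy sketch (entirely my own, not Flipboard's actual system): an engagement model trained over dozens of hypothetical features, where any single ranking decision is an aggregate of hundreds of trees and feature interactions.

```python
# A toy sketch (mine, not Flipboard's system) of a feed ranker: a model trained to
# predict engagement from many reader/article features, then used to order candidates.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical features: topic affinities, dwell times, recency, source, and so on.
X, y = make_classification(n_samples=5000, n_features=40, n_informative=25,
                           random_state=0)  # y = "did the reader engage?"
model = GradientBoostingClassifier(n_estimators=300, max_depth=3).fit(X, y)

candidates = X[:10]                              # articles we might show next
scores = model.predict_proba(candidates)[:, 1]   # predicted engagement probability
feed_order = np.argsort(-scores)
print(feed_order, scores[feed_order])

# "Why was this article picked?" The honest answer is a weighted vote of 300 trees
# over 40 interacting features -- not "because you once read a piece on astronomy."
```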

As more of the world comes to rely on artificial intelligence, and as AI makes great advances, we will walk to the edge of a chasm of comprehension. We've long thought that artificial intelligence might surpass us eventually by thinking like us, but better. But the more likely scenario, as recent developments have shown us, is that the most powerful AI may not think like us at all, and we, with our human brains, may never understand how it thinks. Like an ant that cannot understand a bit of what the human towering above it is thinking, we will gaze at our AI in blank incomprehension. We will gaze into the void. The limit to our ability to comprehend another intelligence is our ability to describe its workings, and that asymptote is drawn by the limits of our brain, which largely analogizes all forms of intelligence to itself in a form of unwitting intellectual narcissism.

This is part of the general trend of increasing abstraction that marks modern life, but it is different than not knowing how a laptop is made, or how to sew a shirt for oneself. We take solace in knowing that someone out there can. To admit that it's not clear to any human alive how an AI made a particular decision feels less like a ¯\_(ツ)_/¯ and more like the end of some innocence.

I suspect we'll continue to tolerate that level of abstraction when technology functions as we want it to, but we'll bang our heads in frustration when it doesn't. Like the annoyance we feel when we reach the limits of our ability to answer a young child who keeps asking us "Why?" in recursive succession, this frustration will cut deep because it will be indistinguishable from humiliation.

Evaluating mobile map designs

I saw a few links to this recent comparison by Justin O'Beirne of the designs of Apple Maps vs. Google Maps. In it was a link to previous comparisons he made about a year ago. If you're into maps and design, it's a fairly quick read with a lot of useful time series screenshots from both applications to serve as reference points for those who don't open both apps regularly.

However, the entire evaluation seems to come from a perspective at odds with how the apps are actually used. O'Beirne's focus is on evaluating these applications from a cartographic standpoint, almost as if they're successors to old wall-hanging maps or giant road atlases like the ones my dad used to plot out our family road trips when we weren't wealthy enough to fly around the U.S. 

The entire analysis is of how the maps look when the user hasn't entered any destination to navigate to (what I'll just refer to as the default map mode). Since most people use these apps as real-time navigation aids, especially while driving, the views O'Beirne dissects feel like edge cases (that's my hypothesis, of course; if someone out there has actual data on the percentage of time these apps are used for navigation versus not, I'd love to hear it, even if it's just directional, to help frame the magnitude).

For example, much of O'Beirne's ink is spent on each application's road labels, often at really zoomed-out levels of the map. I can't remember the last time I looked at any mobile mapping app at the eighth level of zoom; I've probably spent only a few minutes of my life, in total, in all of these apps at that level of the geographic hierarchy, and only to answer a trivia question or when visiting some region of the world on vacation.

What would be of greater utility to me, and what I've yet to find, is a design comparison of all the major mapping apps as navigation aids, a dissection of the UX in what I'll call their navigation modes. Such an analysis would be even more useful if it included Waze, which doesn't have the market share of Apple or Google Maps but which is popular among a certain set of drivers for its unique approach to evaluating traffic, among other things.

Such a comparison should analyze the visual comprehensibility of each app in navigation mode, which is very different from their default map views. How are roads depicted, what landmarks are shown, and how clear is the selected path when seen only in the occasional sidelong glance while driving, which is about as much visual engagement as a user can offer while operating a 3,500-pound vehicle? How does the app balance textual information with the visualization of the roads ahead, and what other POIs or real-world objects are shown? Waze, for example, shows me other Waze users in different forms depending on how many miles they've driven in the app and which visual avatars they've chosen.

Of course, the quality of the actual route would be paramount. It's difficult for a single driver to do A/B comparisons, but I still hope that someday someone will start running regular tests in which different cars, equipped with multiple phones, each logged into different apps, try to navigate to the same destination simultaneously. Over time, at some level of scale, such comparison data would be more instructive than the small sample size of the occasional self-reported anecdote.

[In the future, when we have large fleets of self-driving cars, they may produce insights that only large sample sizes can validate, like UPS's "our drivers save time by never turning left." I'd love if Google Maps, Apple Maps, or Waze published some of what they've learned about driving given their massive data sets, a la OKCupid, but most of what they've published publicly leans towards marketing drivel.]

Any analysis of navigation apps should also consider the voice prompts: how often does the map speak to you, how far in advance of the next turn are you notified, how clear are the instructions? What's the signal to noise? What are the default wording choices? Syntax? What voice options are offered? Both male and female voices? What accents?

Ultimately, what matters is getting to your destination in the safest, most efficient manner, but understanding how the applications' interfaces, underlying data, and algorithms influence those outcomes would be of value to the many people who now rely on these apps every single day to get from point A to point B. I'm looking for a Wirecutter-like battle of the navigation apps; may the best system win.

    The other explicit choice O'Beirne makes is noted in a footnote:

    We’re only looking at the default maps. (No personalization.)
     

    It is, of course, difficult to evaluate personalization of a mapping app since you can generally only see how each map is personalized for yourself. However, much of the value of Google Maps lies in its personalization, or what I suspect is personalization. Given where we are in the evolution of many products and services, analyzing them in their non-personalized states is to disregard their chief modality.

    When I use Google Maps in Manhattan, for example, I notice that the only points of interest (POIs) the map shows me at various levels of zoom seem to be places I've searched for most frequently (this is in the logged-in state, which is how I always use the app). Given Google's reputation for being a world leader in crunching large data sets, it would be surprising if they weren't selecting POI labels, even for non-personalized versions of their maps, based on what people tend to search for most frequently.

    In the old days, if you were making a map to be hung on the wall, or for a paper map or road atlas, what you chose as POIs would be fixed until the next edition of that map. You'd probably choose what felt like the most significant POIs based on reputation, ones that likely wouldn't be gone before the next update. Eiffel Tower? Sure. Some local coffee shop? Might be a Starbucks in three months, best leave that label off.

    Now, maps can be updated dynamically. There will always be those who find any level of personalization creepy, and some of it is, but I also find the lack of personalization immensely frustrating in some services. That I search for reservations in SF on OpenTable and receive several hundred hits every time, sorted in who knows what order, instead of results that cluster my favorite or most frequently booked restaurants at the top, drives me batty.

    When driving, personalization is even more valuable because it's often inconvenient or impossible to type or interact with the device for safety reasons. It's a great time saver to have Waze guess where I'm headed automatically ("Are you driving to work?" it asks me every weekday morning), and someday I just want to be able to say "give me directions to my sister's" and have it know where I'm headed.

    My quick first person assessment, despite the small sample size caveats noted earlier:

    • I know that Apple Maps, as the default on iOS, has the market share lead on iPhone by a healthy margin. Still, I'll never get past the time the app routed me to a dead end while I was on the way to a wedding, and I've not used it since except to glance at the design. It may have the most visually pleasing navigation mode aesthetic, but I don't trust their directions at the tails. Some products are judged not on their mean outcome but on their handling of the tails. For me, navigation is one of those.
    • It's not clear if Apple Maps should have a data edge over Google Maps and Waze (Google bought Waze but has kept the app separate). Most drivers use it on the iPhone because it's the default, but Google got a head start in this space and also has a fleet of vehicles on the road taking Street View photos. Eventually, Google may augment that fleet with self-driving cars.
    • I trust Google Maps directions more than those of Apple Maps. However, I miss the usability of the first version of Google Maps, which came out on iOS way back with the first iPhone. I'd heard rumors Apple built that app for Google, but I'm not sure if that's true. The current flat design of Google Maps often strands me in a state in which I have no idea how to initiate navigation. I'd like to believe I'm a fairly sophisticated user, and yet I sometimes sit there swiping and tapping in Google Maps like an idiot, trying to get it to start reading turn-by-turn directions. Drives me batty.

    I use Waze the most when driving in the Bay Area or wherever I trust that there are enough other drivers using Waze that it will offer the quickest route to my destination. That seems true in most major metropolitan areas. I can tell a lot of users in San Francisco use Waze because sometimes, when I have to drive home to the city from the Peninsula, I find myself in a line of cars exiting the highway and navigating through some random neighborhood side street, one that no one would visit unless guided by an algorithmic deity.

    I use Waze with my phone mounted to one of those phone clamps that holds the phone at eye level above my dashboard because the default Tesla navigation map is still on Google Maps and is notoriously oblivious to traffic when selecting a route and estimating an arrival time. Since I use Waze more than any other navigation app, I have more specific critiques.

    • One reason I use Waze is that it seems the quickest to respond to temporary buildups of traffic. I suspect it's because the UI has a dedicated, always-visible button for reporting such traffic. Since I'm almost always the driver, I have no idea how people are able to do such reporting, but either a lot of passengers are doing the work or lots of drivers are able to do so while their cars are stuck in gridlock. The other alternative, that drivers are filing such reports while their cars are in motion, is frightening.
    • I don't understand the other social networking aspects of Waze. They're an utter distraction. I'm not immune to the intrinsic rewards of gamification, but in the driving context, where I can't really do much more than glance at my phone, it's all just noise. I don't feel a connection to the other random Waze drivers I see from time to time in the app, all of whom are depicted as various pastel-hued cartoon sperm. In wider views of the map, all the various car avatars just add a lot of visual noise.
    • I wish I could turn off some of the extraneous voice alerts, like "Car stopped on the side of the road ahead." I'm almost always listening to a podcast in the background when driving, and the constant interruptions annoy me. There's nothing I can do about a car on the side of the road; I wish I could customize which alerts I have to hear.
    • The ads that drop down and cover almost half the screen are not just annoying but dangerous, as I have to glance over and then swipe them off the screen. That, in and of itself, is disqualifying. But beyond that, even while respecting the need for companies to make money, I can't imagine these ads generate a lot of revenue. I've never looked at one. If the ads are annoying, the occasional surveys asking me which ads/brands I've seen on Waze are doubly so. With Google's deep pockets behind Waze, there must be a way to limit ads to those moments where they're safe or clearly requested, for example when a user is researching where to get gas or a bite to eat. When a driver has hands on the wheel and is guiding a giant mass of metal at high velocity, no cognitive resources should be diverted to remembering which brands they saw on the app.
    • Waze still doesn't understand how to penalize unprotected left turns, which are almost completely unusable in Los Angeles at any volume of traffic. At rush hour it's a fatal failure, like being ambushed by a video game foe that can kill you with one shot with no advance warning. As long as it remains unfixed, I use Google Maps when in LA. I can understand why knowledge sharing between the two companies may be limited by geographic separation despite being part of the same umbrella company, but that the apps don't borrow more basic lessons from each other seems a shame.
    • I use Bluetooth to listen to podcasts on Overcast when driving, and since I downloaded iOS 11, that connection has been very flaky. Also, if I don't have the podcast on and Waze gives me a voice cue, the podcast starts playing. I've tried quitting Overcast, and the podcast still starts playing every time Waze speaks to me. I had reached a good place where Overcast would pause while Waze spoke so they wouldn't overlap, but since iOS 11 even that works inconsistently. This is just one of the bugs that iOS 11 has unleashed upon my phone; I really regret upgrading.

    Things I learned from The Defiant Ones

    Despite believing myself fairly in tune with the pop culture scene, I missed a lot of promotion for The Defiant Ones until I started seeing recommendations on social media from folks who'd watched it. I finally blitzed through the four episodes recently, and it's kind of a banger.

    I typically don't love documentaries composed of so many talking-head interviews because it feels like the default PowerPoint template of documentary filmmaking. But Iovine and Dre and all the other musicians are such compelling, scene-filling personalities that it's a treat, and often a lark, to see them play to the camera. Allen Hughes interviews all the principals individually, but as with all oral histories, he asks all of them about the same events so he can use shots from one interview as a reaction shot to a shot from someone else's interview. Or as a reaction shot to historical footage, like Puff Daddy recalling his reaction to Suge Knight's acceptance speech at the 1995 Source Awards.

    In part, I was an easy mark because so much of that is the music of my youth. I was an intern at Procter and Gamble, living with a bunch of the other interns in a corporate apartment, the summer The Chronic came out. My roommates and I listened to that album just about every day, on loop, for no other reason than to mainline its hooks.

    The Defiant Ones is also fascinating as a case study of two immensely successful people, Jimmy Iovine and Dr. Dre. It is dangerous to draw too many conclusions from a documentary like this. Survivorship and selection bias influence the narrative, and two people does not a large sample make. The mere act of narrative construction is a con game, and always will be, even when it isn't hagiography, which first-person narratives like this always veer towards. So take the following with a Himalayan salt block, because I do.

    And yet...

    If I lump the stories in this documentary with what I know of other successful people, a few things stood out to me. Call this a Malcolm Gladwellian attempt at teasing out a few lessons from anecdotal evidence.

    The first is that people who are really good at what they do stand out from others by not only recognizing immediately when something is exceptional but also articulating why, especially when no one else believes it is. Designers experience this when they show a design to someone else, maybe a peer, maybe an executive, and that audience member immediately notices something the creator is particularly proud of. Stories of Steve Jobs moments like this abound, which is why everyone who has met Jobs even once seems to speak of it as some mystical experience.

    Filmmakers all have stories of screening cuts for others and having the sharpest among them notice a particular bit of directorial intent, maybe something in the choreography of the actors, or the camera's movement, or even something in the sound design, that no one else picks up on.

    Whether that pattern recognition is innate or trained over many years, and likely both, we see it again and again in The Defiant Ones. It's Jimmy Iovine cribbing "Stop Draggin' My Heart Around" from Tom Petty for Stevie Nicks. It's Iovine hearing Trent Reznor and fighting tooth and nail to grab Nine Inch Nails for Interscope from TVT Records. Or Iovine meeting Gwen Stefani and telling her she'd be a star in six years, and having No Doubt release Tragic Kingdom exactly six years after that conversation.

    [Remember my caveats up front? Steve Gottlieb of TVT Records disputes the way the Nine Inch Nails story is framed in the documentary. I certainly don't think Iovine and Dre are the only ones in the music industry who possess this skill, but this documentary is their story, so I'll roll with these examples for those who have watched or will watch the documentary.]

    The most memorable aha moment in the documentary, for me, is when Dre hears one bit from a demo tape from among hundreds of demo tapes stacked in Iovine's garage. 

    "Back in those days, I didn't have an artist to work with. I'd go to Jimmy's house, and we'd have listening sessions. He was trying to help me figure out where I was going to go with my music. He'd take me down to his garage. There was cassette tapes everywhere. And I remember him picking up this cassette tape. He pops it in. I was like 'What the fuck, and who the fuck is that?!"
     

    Who he was was an unknown white rapper from Detroit. In the documentary, in the recreation of that seminal moment, the label on the cassette tape reads Slim Shady. I'm not sure if that's actually true to history, but it's remarkable either way. If it is, it's a wonderful bit of historical trivia; if it isn't, it's a laughably on-the-nose historical recreation.

    Again, we have this pattern, the flash of recognition, picking out this tape from all the demo tapes, and hearing what no one else heard. With things like music, or even food, the articulation of excellence isn't as critical as the recognition. As in the excerpt above from Dre's memory of that moment, it was probably just a series of expletives, perhaps a literal WTF as he recalls.

    The moment where Dre recognizes the kid's talent isn't online, but this clip from Eminem and Dre's first meeting is, and it's amazing because it contains footage from the end of their first session in the studio. The tail end of this clip reveals what happened when Dre started playing a few beats he was working on for Eminem, and it's gobsmacking because so rarely is the moment of creative conception captured on video. See for yourself.

    [Video: Eminem and Dr. Dre tell the story of when they first met and went into the studio together.]

    "Like yo. Stop. Shit's hot. That's what happened our first day, in the first few minutes of us being in the studio," remembers Dre.

    Because Eminem was a scrawny white rapper from Detroit, many resisted. He didn't look the part. That brings up the second lesson.

    "My gut told me Eminem was the artist that I'm supposed to be working with right now," Dre recalls. "But, I didn't know how many racists I had around me."

    "Everybody around me, the so-called execs and what have you, were all against it. The records I had done at the time, they didn't work, they wanted me out the building. And I come up with Eminem, this white boy."

    As in many moments in their long collaboration, Iovine and Dre persisted and profited yet again by arbitraging the biases of the herd.

    "We weren't looking for a white, controversial rapper," Iovine says. "We were looking for great."
     
    "Great can come from anywhere."
     

    He means it.

    "Lady Gaga walked into my office, Italian girl with brown hair, started telling me about Andy Warhol, and dance music, but yet industrial, and paintings. I don't know, she confused me so much that I signed her."
     

    None of the other Interscope execs thought Gaga had breakout appeal. Iovine did.

    "I was at a club with Timbaland, and I saw the room move. It felt like pop music. It felt like it could break through."
     

    Perhaps not a snap judgment, but no one would confuse Lady Gaga for Eminem. When Iovine says great can come from anywhere, his diverse roster of artists backs him up.

    How do you find alpha in an otherwise efficient market? Iovine and Dre arbitraged the biases of the market, one of which is rampant pattern matching.

    Much of this makes it sound as if identifying hit music is Iovine and Dre's talent. But plenty of evidence exists that much of cultural taste is socially constructed and is subject to path dependence.

    The truth, as always, is somewhere in the middle. However, most people underestimate how much popularity can be socially hacked, because some portion of popularity is socially constructed and has nothing to do with any inherent quality of the goods being sold. That is the third lesson the documentary reinforces.

    Derek Thompson's Hit Makers, which I will write up soon, and Michael Mauboussin's The Success Equation, both of which I loved, cite Duncan Watts and Matthew Salganik's MusicLab experiments. The key finding of those studies was that people rate songs higher simply because they're shown randomly seeded signals that those songs are already popular with other people.
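    To make the path dependence concrete, here is a toy simulation, entirely my own sketch and not the actual MusicLab design: every song is identical in quality, and each new listener simply picks a song with probability proportional to its current download count.

```python
# Toy cumulative-advantage simulation (illustrative only, not the MusicLab setup):
# identical songs, listeners who favor whatever already looks popular.
import random

def run_world(n_songs=10, n_listeners=5000, seed=None):
    rng = random.Random(seed)
    downloads = [1] * n_songs  # seed every song with a single download
    for _ in range(n_listeners):
        # social proof: choose proportionally to current popularity
        song = rng.choices(range(n_songs), weights=downloads, k=1)[0]
        downloads[song] += 1
    return downloads

for world in range(3):
    counts = run_world(seed=world)
    winner = counts.index(max(counts))
    print(f"world {world}: song {winner} wins with {max(counts)} downloads")
```

    Different seeds typically crown different winners even though every song is identical, which is the kind of path dependence the MusicLab worlds exhibited.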

    This Kevin Simler post offering an alternative explanation for how ads work is actually how many people who work in advertising understand ads to work, at least in part. Simler theorizes that ads create common knowledge, and much as Watts and Salganik's experiment reveals, a great deal of human behavior is socially constructed. In the case of the MusicLab study, it's popularity. In Simler's examples, ads cue consumers on which products are likely to be the most effective signals in a world in which status is socially constructed, in large part, through such consumer product totems.

    Iovine understands this, and nowhere is it more evident than in the latter part of the final episode of The Defiant Ones, the Beats headphones saga.

    Dre spends lots of time engineering Beats headphones for a particular sound. More on that later. I'm a moderate headphone geek, enough of one to own more than four pairs of over-ear headphones (I prefer the sound of some for specific types of music) and two headphone amps, so I appreciate what Dre understands, which is that the personality of a headphone can be a matter of personal taste.

    Iovine cuts to what's far more important in the headphone decision. Most people don't give a hoot about a headphone's measured response curve, or even about how it sounds. People wear headphones as fashion accessories, and people want to be cool.

    Iovine and Dre set up a day where they test all the leading headphones on the market. They're not impressed.

    "We realized that all headphones sound boring and looked like medical equipment. We wanted more bass in these headphones to exaggerate all of it. We wanted to put it on steroids," Iovine said.
     
    Producer Jon Landau recalls: "The Bose headphones, they were advertising noise canceling, total quiet. Jimmy says, 'Noise canceling?! Yeah, they're the headphones if you want to go to sleep on a plane. Our headphones are the where's the party headphones.'"
     

    The distribution and marketing leverage was to be found through Iovine's celebrity friendships, so he starts smiling and dialing, or perhaps more appropriately in Iovine's case, dialing and cajoling. He gives the headphones away to all his artists and asks them to wear them in their music videos, in public, anywhere a camera or a human eye is present. Anyone famous walking into Iovine's office has to don a pair and submit to a photo. The design of the Beats headphones, like the iconic white earbuds of the iPod, is brilliant: the lowercase b imprinted on each colorful molded-plastic ear cup turns every wearer into a walking billboard.

    After artists, Iovine moves on to athletes, and soon it's rare to see LeBron benching in any of his workout videos on Instagram without his Beats by Dre headphones. I almost can't picture Ronda Rousey walking into or out of the ring without her Beats headphones draped around her neck. Who can forget Michael Phelps staring down Chad Le Clos at the 2016 Olympics, his Beats headphones blasting what must surely be some angry heavy metal that would ripple the surface of the Olympic pool?

    Not all PR is good PR, but when sports organizations like FIFA, the NFL, and the Olympics ban Beats headphones, it's a dream come true for a product seeking renegade cachet.

    It works. Any self-respecting audiophile considers Beats an absolute scam from a sound quality perspective, and yet Beats dominates the premium headphone market ($99 and up).

    Not every product market sees market share driven by socially constructed popularity, but headphones are perhaps the perfect fashion accessory and cultural signal in an age when everyone can listen to music through a smartphone at any time.

    Iovine pushes the headphones so much that Eminem admits it annoyed him.

    "There would be times where we would be shooting a video until like six in the morning, and we had to do one more take with me or somebody in the video wearing some goddamn [Beats] headphones. Are you fucking kidding me?!"
     

    Iovine is a great producer, but he's also a consummate marketer.

    "The only person that does it better than him is me," says Puff Daddy.
     

    There may be a line which is shameful to cross when it comes to marketing, but who knows where that line is if you have no shame.

    "He's got good instinct, and he's shameless," says Trent Reznor about Iovine.
     

    In fairness to the documentary, Dre does talk a lot about tuning the sound of the Beats headphones, so why do audiophiles dislike it? Beats are notoriously bass heavy. Dre grew up listening to music in cars in LA, with subwoofers so powerful that people outside the car can feel their organs being jostled.

    Music, especially for young people, is raw emotion and energy. Audiophiles love to turn up their music too, but the bass-heavy sound Dre and Iovine chose amplifies the primal elements of the music, something non-audiophiles can feel. In a revealing scene, Dre demos the mix of an album by taking Iovine to a garage to listen to it in a tricked-out van. Dre knows that the music of the street is often heard, literally, on the streets, coming through some car stereo, bass pumping, car rocking. Dre isn't above understanding the social transmission of music; it's just that he understands a particular form of that virality, the one that comes through the original social network, the streets of the neighborhood. If it weren't likely to render its listeners deaf, Dre would probably want his headphones to sound like those cars that wake the neighborhood, the bass so powerful that the subwoofers seem to shake windows and bounce the car up and down.
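    For the curious, the "more bass" Iovine describes can be faked in a few lines of signal processing. This is purely my own illustrative sketch, not how Beats actually voices its hardware: low-pass the signal to isolate the lows, then mix them back in louder.

```python
# Crude bass boost (illustrative sketch only): isolate low frequencies with a
# one-pole low-pass filter, then mix them back into the signal at extra gain.
def bass_boost(samples, alpha=0.05, gain=1.5):
    """samples: floats in [-1, 1]; alpha: low-pass coefficient in (0, 1)."""
    boosted, low = [], 0.0
    for x in samples:
        low += alpha * (x - low)                 # running low-pass = the "bass"
        y = x + gain * low                       # exaggerate the lows
        boosted.append(max(-1.0, min(1.0, y)))   # hard-limit to stay in range
    return boosted
```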

    The last bit, which is a meta point I've been thinking about a lot recently, is how many more entrepreneurs The Defiant Ones will reach and teach than any single book on entrepreneurship. Video may be a lossy medium in terms of how much it leaves out in service of narrative structure, but its inherent visual, autoplay quality makes it a much lower-friction educational medium than text. We need more like this and fewer of the typical MOOC videos that replicate all the excitement of your median classroom lecture.

    When you come to the 2^100 forks in the road...

    In some simple games, it is easy to spot Nash equilibria. For example, if I prefer Chinese food and you prefer Italian, but our strongest preference is to dine together, two obvious equilibria are for both of us to go to the Chinese restaurant or both of us to go to the Italian restaurant. Even if we start out knowing only our own preferences and we can’t communicate our strategies before the game, it won’t take too many rounds of missed connections and solitary dinners before we thoroughly understand each other’s preferences and, hopefully, find our way to one or the other equilibrium.
     
    But imagine if the dinner plans involved 100 people, each of whom has decided preferences about which others he would like to dine with, and none of whom knows anyone else’s preferences. Nash proved in 1950 that even large, complicated games like this one do always have an equilibrium (at least, if the concept of a strategy is broadened to allow random choices, such as you choosing the Chinese restaurant with 60 percent probability). But Nash — who died in a car crash in 2015 — gave no recipe for how to calculate such an equilibrium.
     
    By diving into the nitty-gritty of Nash’s proof, Babichenko and Rubinstein were able to show that in general, there’s no guaranteed method for players to find even an approximate Nash equilibrium unless they tell each other virtually everything about their respective preferences. And as the number of players in a game grows, the amount of time required for all this communication quickly becomes prohibitive.
     
    For example, in the 100-player restaurant game, there are 2^100 ways the game could play out, and hence 2^100 preferences each player has to share. By comparison, the number of seconds that have elapsed since the Big Bang is only about 2^59.
     

    Interesting summary of a paper published last year which finds that for many games, there is no clear path to even an approximate Nash equilibrium. I don't know whether this is depressing or appropriate to the state of the world right now; it's probably both. Also, it's great to have mathematical confirmation of the impossibility of choosing where to eat with a large group.
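    For the two-person version of the restaurant game, the equilibria are easy to brute force. A minimal sketch, with payoffs I've made up to match the article's description (dining together matters most, plus a mild individual lean toward one cuisine):

```python
# Brute-force the pure-strategy Nash equilibria of the two-player restaurant game.
# Payoff numbers are hypothetical: +2 for dining together, +1 for your favorite cuisine.
from itertools import product

CHOICES = ["Chinese", "Italian"]

def payoff(mine, theirs, favorite):
    return (2 if mine == theirs else 0) + (1 if mine == favorite else 0)

def is_pure_nash(a, b):
    # Neither player can do better by unilaterally switching restaurants.
    a_ok = all(payoff(a, b, "Chinese") >= payoff(alt, b, "Chinese") for alt in CHOICES)
    b_ok = all(payoff(b, a, "Italian") >= payoff(alt, a, "Italian") for alt in CHOICES)
    return a_ok and b_ok

print([(a, b) for a, b in product(CHOICES, CHOICES) if is_pure_nash(a, b)])
# [('Chinese', 'Chinese'), ('Italian', 'Italian')]
```

    Both "everyone eats Chinese" and "everyone eats Italian" survive as pure equilibria; the hardness result kicks in when the same brute force has to cover 100 players and 2^100 possible outcomes.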

    Regret is a fascinating emotion. Jeff Bezos' story of leaving D.E. Shaw to start Amazon based on a regret minimization framework is now an iconic entrepreneurial myth, and in most contexts people frame regret the same way, as something to be minimized. That is, regret as a negative.

    In the Bezos example, regret was a valuable constant to help him come to an optimal decision at a critical fork in his life. Is this its primary evolutionary purpose? Is regret only valuable when we feel its suffocating grip on the human heart so we avoid it in the future? As a decision-making feedback mechanism?

    I commonly hear that people regret the things they didn't do more than the things they did. Is that true? Even in this day and age, when one indiscretion can ruin a person for life?

    In storytelling, regret serves two common narrative functions. One is as the corrosive element that reduces a character, over a lifetime of exposure, to an embittered, cynical drag on those around them. The other is as the catalyst for the protagonist to make a critical life change, the Bezos decision being an instance of the win-win variety.

    I've seen regret in both guises, and while we valorize regret as life-changing, I suspect the volume of regret that chips away at people's souls outweighs the instances where it changes their lives for the better, even as I have no way of quantifying that. Regardless, I have no contrarian take on minimizing regret for those who suffer from it.

    In that sense, this finding on the near impossibility of achieving a Nash equilibrium in complex scenarios offers some comfort. What is life, or perhaps more accurately our perception of our own lives, but a series of decisions compounded across time?

    We do a great job of coming up with analogies for how complex and varied the decision tree ahead of us is. The number of permutations of how a game of chess or Go might be played is greater than the number of atoms in the universe, we tell people. But we should do a better job of turning that same analogy backward in time. Factor in the impact of other people at all those forks in the road, across a lifetime, and the decision tree behind us is just as dense as the one ahead. At any point in time, we are at a node on a tree with so many branches behind it that it exceeds our mind's grasp. Not many of those branches are thick enough to deserve the heavy burden of regret.
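    Running the analogy backward, the arithmetic holds up even with made-up, conservative numbers of my own: a few meaningful choices a day compound into a tree far larger than the roughly 10^80 atoms in the observable universe.

```python
# Back-of-the-envelope check with illustrative numbers: three meaningful
# choices a day over thirty years.
from math import log10

choices_per_day = 3
days = 30 * 365
exponent = days * log10(choices_per_day)  # log10 of the number of possible paths
print(f"about 10^{exponent:.0f} possible paths, vs roughly 10^80 atoms in the universe")
```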

    One last tidbit from the piece which I wanted to highlight.

    But the two fields have very different mindsets, which can hamper interdisciplinary communication: Economists tend to look for simple models that capture the essence of a complex interaction, while theoretical computer scientists are often more interested in understanding what happens as the models grow increasingly complex. “I wish my colleagues in economics were more aware, more interested in what computer science is doing,” McLennan said.