Revisionist commentary

I don't know that I'm aware of enough entries in this category to even consider it one, but I'm a sucker for the union of political and film satire as embodied in alternate film commentaries.

I was reminded of it when I saw The People's History of Tatooine, which began as one of those spontaneous, emergent forms of Twitter humor that always brighten that otherwise dystopic landscape.

What if Mos Eisley wasn’t really that wretched and it was just Obi Wan being racist again?
What do you mean, “these blaster marks are too precise to be made by Sand People?” Who talks like that?
also Sand People is not the preferred nomenclature.
They have a rich cultural history that’s led them to survive and thrive under spectacularly awful conditions.
Mos Eisley may not look like much but it’s a bedroom community with decent schools and affordable housing.
You can just imagine Obi-Wan after years of being a Jedi on Coruscant being stuck in this place and just getting madder and madder.
yeah nobody cares that the blue milk is so much more artisanal on Coruscant
Obi-Wan only goes to Mos Eisley once every three months to get drunk and he basically becomes like Byron.

Years ago, I laughed at UNUSED AUDIO COMMENTARY BY HOWARD ZINN AND NOAM CHOMSKY, RECORDED SUMMER 2002 FOR THE FELLOWSHIP OF THE RING (PLATINUM SERIES EXTENDED EDITION) DVD, PART ONE (here is part 2, and here are all four of the parts of their commentary for Return of the King).

CHOMSKY: And here comes Bilbo Baggins. Now, this is, to my mind, where the story begins to reveal its deeper truths. In the books we learn that Saruman was spying on Gandalf for years. And he wondered why Gandalf was traveling so incessantly to the Shire. As Tolkien later establishes, the Shire’s surfeit of pipe-weed is one of the major reasons for Gandalf’s continued visits.
ZINN: You view the conflict as being primarily about pipe-weed, do you not?
CHOMSKY: Well, what we see here, in Hobbiton, farmers tilling crops. The thing to remember is that the crop they are tilling is, in fact, pipe-weed, an addictive drug transported and sold throughout Middle Earth for great profit.
ZINN: This is absolutely established in the books. Pipe-weed is something all the Hobbits abuse. Gandalf is smoking it constantly. You are correct when you point out that Middle Earth depends on pipe-weed in some crucial sense, but I think you may be overstating its importance. Clearly the war is not based only on the Shire’s pipe-weed. Rohan and Gondor’s unceasing hunger for war is a larger culprit, I would say.
CHOMSKY: But without the pipe-weed, Middle Earth would fall apart. Saruman is trying to break up Gandalf’s pipe-weed ring. He’s trying to divert it.
ZINN: Well, you know, it would be manifestly difficult to believe in magic rings unless everyone was high on pipe-weed. So it is in Gandalf’s interest to keep Middle Earth hooked.
CHOMSKY: How do you think these wizards build gigantic towers and mighty fortresses? Where do they get the money? Keep in mind that I do not especially regard anyone, Saruman included, as an agent for progressivism. But obviously the pipe-weed operation that exists is the dominant influence in Middle Earth. It’s not some ludicrous magical ring.

A bit more, because I can't help myself:

ZINN: Right. And here we receive our first glimpse of the supposedly dreadful Mordor, which actually looks like a fairly functioning place.
CHOMSKY: This type of city is most likely the best the Orcs can do if all they have are cliffs to grow on. It’s very impressive, in that sense.
ZINN: Especially considering the economic sanctions no doubt faced by Mordor. They must be dreadful. We see now that the Black Riders have been released, and they’re going after Frodo. The Black Riders. Of course they’re black. Everything evil is always black. And later Gandalf the Grey becomes Gandalf the White. Have you noticed that?
CHOMSKY: The most simplistic color symbolism.
ZINN: And the writing on the ring, we learn here, is Orcish — the so-called “black speech.” Orcish is evidently some spoliation of the language spoken in Rohan. This is what Tolkien says.

Somewhat related is this, The Passion of the Christ: Blooper Reel.

Christ, shackled to a stone, is being scourged by Roman soldiers. Blood runs down his gory back. His pain is palpable.
Jesus: [writhes in pain, hands shaking]
[Cell phone rings.]
Jesus: [hands shake furiously]
[Cell phone rings. Caviezel looks up, sheepish.]
Roman soldier: Jim? That you?
Jesus: Yeah.
[Cell phone rings.]
Soldier: Want me to get it?
Jesus: Yeah.
[Roman soldier gingerly reaches into Caviezel’s blood-soaked loincloth, pulls out phone and opens it, then holds the phone to Caviezel’s ear.]
Off Camera: [laughter]
Jesus: Hey, Mom.

Are there more in this genre? If so, please share!

Why Information Grows

It is hard for us humans to separate information from meaning because we cannot help interpreting messages. We infuse messages with meaning automatically, fooling ourselves to believe that the meaning of a message is carried in the message. But it is not. This is only an illusion. Meaning is derived from context and prior knowledge. Meaning is the interpretation that a knowledge agent, such as a human, gives to a message, but it is different from the physical order that carries the message, and different from the message itself. Meaning emerges when a message reaches a life-form or a machine with the ability to process information; it is not carried in the blots of ink, sound waves, beams of light, or electric pulses that transmit information.

From the book Why Information Grows by César Hidalgo. I read this book back in 2017, but it's of no less interest now.

And it is the arrow of complexity—the growth of information—that marks the history of our universe and species. Billions of years ago, soon after the Big Bang, our universe did not have the capacity to generate the order that made Boltzmann marvel and which we all take for granted. Since then, our universe has been marching toward disorder, as Boltzmann predicted, but it has also been busy producing pockets that concentrate enormous quantities of physical order, or information. Our planet is a chief example of such a pocket.

When one first encounters the second law of thermodynamics, it's easy to tumble into despair at the pointlessness of everything. With the universe fated to end in heat death eventually, what is the point of it all?

In this existential void, the presence of pockets of information and order can feel like symbols of rebellion, a raised fist spray-painted on a fragment of wall that remains from a bombed-out building. In manifestations of order we see intent, in intent we interpret meaning, and in meaning we find comfort.

Information, when understood in its broad meaning as physical order, is what our economy produces. It is the only thing we produce, whether we are biological cells or manufacturing plants. This is because information is not restricted to messages. It is inherent in all the physical objects we produce: bicycles, buildings, streetlamps, blenders, hair dryers, shoes, chandeliers, harvesting machines, and underwear are all made of information. This is not because they are made of ideas but because they embody physical order. Our world is pregnant with information. It is not an amorphous soup of atoms, but a neatly organized collection of structures, shapes, colors, and correlations. Such ordered structures are the manifestations of information, even when these chunks of physical order lack any meaning.

There are plenty of books on information theory, and viewing the universe through the lens of information and computation is increasingly popular, but Hidalgo's book is more readable than most.

To battle disorder and allow information to grow, our universe has a few tricks up its sleeve. These tricks involve out-of-equilibrium systems, the accumulation of information in solids, and the ability of matter to compute.
It is the growth of information that unifies the emergence of life with the growth of economies, and the emergence of complexity with the origins of wealth.
In twenty-six minutes Iris traveled from the ancientness of her mother’s womb to the modernity of twenty-first-century society. Birth is, in essence, time travel.

Birth as time travel is one of those metaphors that, once heard, lodges in your mind like something you always knew. When Arnold Schwarzenegger time travels back from the future to the modern day in The Terminator, he arrives naked, like a newborn.

[It is unclear why a cyborg from the future speaks with a thick Austrian accent, one of the only mysteries I have always hoped would be explained in some throwaway expository joke. My guess is that the voice was a marketing Easter Egg, like celebrity voices in Waze, and someone forgot to flip the Terminator back to its factory default voice before sending it back in time.]

Humans are special animals when it comes to information, because unlike other species, we have developed an enormous ability to encode large volumes of information outside our bodies. Naively, we can think of this information as the information we encode in books, sheet music, audio recordings, and video. Yet for longer than we have been able to write we have been embodying information in artifacts or objects, from arrows to microwave ovens, from stone axes to the physical Internet. So our ability to produce chairs, computers, tablecloths, and wineglasses is a simple answer to the eternal question: what is the difference between us, humans, and all other species? The answer is that we are able to create physical instantiations of the objects we imagine, while other species are stuck with nature’s inventory.

Another reason humans wouldn't evolve on a gaseous planet like Jupiter, besides the fact that we'd just burn up, is that without any solids we'd have no way of encoding information to pass on to future generations. Therefore, any advanced civilization in the universe would, it would seem, live in physical conditions that allow for the formation of solids, but not solids that are too rigid.

The temperature band matters. We need solids that are malleable to encode richer sets of information. Add to that the ability to compute, which we see at every scale in our world, down to the cellular level, and suddenly you have life. There is logic to why we look for specific conditions in the universe as precursors for life, and those conditions can be defined more broadly than just the presence of water, which is a downstream condition. Further upstream we just want a planet with solids, in a particular band of temperatures.

Such conditions allow living creatures to record and pass along information to the next generation. When humans finally were able to do so, they in effect conquered time. No longer did the knowledge of one generation evaporate into the sinkhole of mortality.

The car’s dollar value evaporated in the crash not because the crash destroyed the atoms that made up the Bugatti but because the crash changed the way in which these were arranged. As the parts that made the Bugatti were pulled apart and twisted, the information that was embodied in the Bugatti was largely destroyed. This is another way of saying that the $2.5 million worth of value was stored not in the car’s atoms but in the way those atoms were arranged. That arrangement is information.
So the value of the Bugatti is connected to physical order, which is information, even though people still debate what information is. According to Claude Shannon, the father of information theory, information is a measure of the minimum volume of communication required to uniquely specify a message. That is, it’s the number of bits we need to communicate an arrangement, like the arrangement of atoms that made the Bugatti.
The group of Bugattis in perfect shape, however, is relatively small, meaning that in the set of all possible rearrangements of atoms—like people moving in a stadium—very few of these involve a Bugatti in perfect condition. The group of Bugatti wrecks, on the other hand, is a configuration with a higher multiplicity of states (higher entropy), and hence a configuration that embodies less information (even though each of these states requires more bits to be communicated). Yet the largest group of all, the one that is equivalent to people sitting randomly in the stadium, is the one describing Bugattis in their “natural” state. This is the state where iron is a mineral ore and aluminum is embedded in bauxite. The destruction of the Bugatti, therefore, is the destruction of information. The creation of the Bugatti, on the other hand, is the embodiment of information.
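Hidalgo's stadium analogy can be made concrete with a toy count of microstates. This is my own sketch, not from the book: imagine n "atoms" that can each sit in one of two positions, and let a macrostate be the count of atoms displaced from their assigned spot. The "perfect Bugatti" corresponds to one precise arrangement; the "wreck" to the huge family of half-scrambled ones.

```python
from math import comb, log2

n = 100  # number of "atoms" (or people in the stadium)

# "Bugatti in perfect shape": essentially one precise arrangement.
ordered_states = comb(n, 0)          # = 1

# "Wreck": roughly half the atoms displaced, the macrostate with the
# highest multiplicity (choose which 50 of the 100 are out of place).
wrecked_states = comb(n, n // 2)

# Boltzmann-style entropy, in bits, is log2 of the multiplicity:
# the rarer the macrostate, the more physical order it embodies.
print(log2(ordered_states))   # 0.0 bits of entropy: maximally ordered
print(log2(wrecked_states))   # ~96.3 bits: high entropy, little order
```

The rarity of the ordered configuration, not the atoms themselves, is where the information (and the $2.5 million) lives.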

One can separate out the intrinsic value of an item, defined above as the rarity of the state of the configuration of that item, from the extrinsic value of an item, as defined by qualities such as symbolic or emotional ones, like nostalgia.

In Pulp Fiction, Bruce Willis risks life and limb to recover a watch given to him by his father. There's no evidence it's a particularly rare watch; he could likely buy another just like it. But its symbolic value to him is extrinsic to the item yet tethered to it, the way a genie is trapped in a magic lamp (and that special meaning is conveyed in the now-immortal speech by Christopher Walken).

Even the most rational people I know own something that's not physically rare but emotionally rich, a talisman or totem that they use to summon whatever power it holds, whether it be nostalgia or regret or some other enchantment known only to themselves.

What Shannon teaches us is that the amount of information that is embodied in a tweet is equal to the minimum number of yes-or-no questions that Brian needs to ask to guess Abby’s tweet with 100 percent accuracy. But how many questions is that?
Shannon’s theory tells us that we need 700 bits, or yes-or-no questions, to communicate a tweet written using a thirty-two-character alphabet. Shannon’s theory is also the basis of modern communication systems.

One mathematical reason for the rising usage of emoji on Twitter and in other forms of online communication may be that emoji increase the amount of information that can be encoded in 140 (and now 280) characters.
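The arithmetic behind Hidalgo's 700-bit figure is simple: each character drawn uniformly from an alphabet of size A carries log2(A) bits, so a message of length L needs roughly L × log2(A) bits. A quick sketch (the 1,024-symbol alphabet is my own illustrative assumption, not a figure from the book):

```python
from math import log2

def message_bits(length, alphabet_size):
    """Bits needed to specify a message of `length` characters,
    each drawn uniformly from an alphabet of `alphabet_size`."""
    return length * log2(alphabet_size)

# Hidalgo's example: a 140-character tweet over a 32-character alphabet.
print(message_bits(140, 32))    # 700.0 bits

# A richer alphabet, say one padded out with emoji, packs more bits
# into the same character count: one candidate reason emoji thrive
# under a character limit.
print(message_bits(140, 1024))  # 1400.0 bits
```

The character limit caps length, not bits, so a larger symbol set is the only lever left for cramming in more information.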

You'll recall from earlier that the third of the three conditions that allow information to grow is the ability of matter to compute.

To illustrate the prebiotic nature of the ability of matter to process information, we need to consider a more fundamental system. Here is where the chemical systems that fascinated Prigogine come in handy. Consider a set of chemical reactions that takes a set of compounds {I} and transforms them into a set of outputs {O} via a set of intermediate compounds {M}. Now consider feeding this system with a steady flow of {I}. If the flow of {I} is small, then the system will settle into a steady state where the intermediate inputs {M} will be produced and consumed in such a way that their numbers do not fluctuate much. The system will reach a state of equilibrium. In most chemical systems, however, once we crank up the flow of {I} this equilibrium will become unstable, meaning that the steady state of the system will be replaced by two or more stable steady states that are different from the original state of equilibrium. When these new steady states emerge, the system will need to “choose” among them, meaning that it will have to move to one or the other, breaking the symmetry of the system and developing a history that is marked by those choices. If we crank up the inflow of the input compounds {I} even further, these new steady states will become unstable and additional new steady states will emerge. This multiplication of steady states can lead these chemical reactions to highly organized states, such as those exhibited by molecular clocks, which are chemical oscillators, compounds that change periodically from one type to another. But does such a simple chemical system have the ability to process information? Now consider that we can push the system to one of these steady states by changing the concentration of inputs {I}. Such a system will be “computing,” since it will be generating outputs that are conditional on the inputs it is ingesting. It would be a chemical transistor. In an awfully crude way this chemical system models a primitive metabolism.
In an even cruder way, it is a model of a cell differentiating from one cell type to another—the cell types can be viewed abstractly as the dynamic steady states of these systems, as the complex systems biologist Stuart Kauffman suggested decades ago. Highly interacting out-of-equilibrium systems, whether they are trees reacting to the change of seasons or chemical systems processing information about the inputs they receive, teach us that matter can compute. These systems tell us that computation precedes the origins of life just as much as information does. The chemical changes encoded by these systems are modifying the information encoded in these chemical compounds, and therefore they represent a fundamental form of computation. Life is a consequence of the ability of matter to compute.
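A minimal numerical sketch of the symmetry breaking Hidalgo describes. This is my own toy model, not one from the book: the textbook pitchfork bifurcation dx/dt = I·x − x³, where I plays the role of the input flow. For I ≤ 0 the only steady state is x = 0; crank I past zero and x = 0 turns unstable, two new steady states appear at ±√I, and a tiny initial perturbation decides which one the system "chooses", a one-bit output conditional on its inputs and history.

```python
def settle(I, x0, dt=0.01, steps=20000):
    """Integrate dx/dt = I*x - x**3 with forward Euler and return
    the steady state the system settles into."""
    x = x0
    for _ in range(steps):
        x += dt * (I * x - x ** 3)
    return x

# Above the bifurcation (I > 0), the sign of a tiny perturbation
# selects one of two symmetric steady states at +/- sqrt(I):
print(round(settle(I=0.5, x0=+1e-6), 3))   # 0.707
print(round(settle(I=0.5, x0=-1e-6), 3))   # -0.707

# Below it (I < 0), there is only one steady state, at zero:
print(round(settle(I=-0.5, x0=0.3), 3))    # 0.0
```

Reading the final state as the "output," the system computes a function of its input concentration and its history, which is all a transistor does.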

What's lovely about all of these conditions that allow information to grow is their seeming relevance to individuals and groups of individuals, like corporations or societies or markets.

Humans are concentrated bundles of information with compute power, and when we push ourselves out of equilibrium, we accumulate information. When we crank up our inputs and force ourselves out of our own equilibrium, as we do when we become students, we grow as we restore ourselves to steady state. Whenever anyone complains that they're in a rut, I always counsel them to force themselves out of equilibrium.


That covers much of the first half of the book, all fascinating. However, the part of the book that's of broader interest to a business audience is Hidalgo's discussion of the economy as a creator of information.

It's easiest to understand the information creation capacity of an economy by examining its outputs, and the simplest outputs to understand are physical products.

Thinking about products as crystals of imagination tells us that products do not just embody information but also imagination. This is information that we have generated through mental computations and then disembodied by creating an object that mimics the one we had in our head. Edible apples existed before we had a name for them, a price for them, or a market for them. They were present in the world. As a concept, apples were simply imported into our minds. On the other hand, iPhones and iPads are mental exports rather than imports, since they are products that were begotten in our minds before they became part of our world. So the main difference between apples and Apples resides in the source of their physical order rather than in their embodiment of physical order. Both products are packets of information, but only one of them is a crystal of imagination.

Like many navel gazers in the tech industry, I'm guilty of stereotyping companies. Apple's strength is integrated hardware and software, Google is the king of machine learning and crunching large data sets, Facebook is the social network to end all social networks, and Amazon is the everything platform.

However, if you haven't worked at or been inside any of those companies, it's fairest to judge them as black boxes: inputs disappear into them and emerge as various outputs, usually products and services like gadgets or websites or applications. Everything else is a mild form of fan fiction.

By analyzing a company's outputs, one can deduce a great deal about its capabilities. Hidalgo does the same but at the country level.

The idea of crystallized imagination tells us that a country’s export structure carries information about more than just its abundance of capital and labor. A country’s export structure is a fingerprint that tells us about the ability of people in that country to create tangible instantiations of imaginary objects, such as automobiles, espresso machines, subway cars, and motorcycles, and of course about the myriad of specific factors that are needed to create these sophisticated products. In fact, the composition of a country’s exports informs us about the knowledge and knowhow that are embodied in that country’s population.

A country that can export a product like an iPhone generally has greater generative power than one that can only export raw materials like bananas. The telltale clues to the economic potential of a country lie not in its imports but its exports.

So what has any of this to do with Chile? The only connection between Chile and the history of electricity comes from the fact that the Atacama Desert is full of copper atoms, which, just like most Chileans, were utterly unaware of the electric dreams that powered the passion of Faraday and Tesla. As the inventions that made these atoms valuable were created, Chile retained the right to hold many of these atoms hostage. Now Chile can make a living out of them. This brings us back to the narrative of exploitation we described earlier. The idea of crystallized imagination should make it clear that Chile is the one exploiting the imagination of Faraday, Tesla, and others, since it was the inventors’ imagination that endowed copper atoms with economic value. But Chile is not the only country that exploits foreign creativity this way. Oil producers like Venezuela and Russia exploit the imagination of Henry Ford, Rudolf Diesel, Gottlieb Daimler, Nicolas Carnot, James Watt, and James Joule by being involved in the commerce of a dark gelatinous goo that was virtually useless until combustion engines were invented. Making a strong distinction between the generation of value and the appropriation of monetary compensation helps us understand the difference between wealth and economic development. In fact, the world has many countries that are rich but still have underdeveloped economies. This is a distinction that we will explore in detail in Part IV. But making this distinction, which comes directly from the idea of crystallized imagination, helps us see that economic development is based not on the ability of a pocket of the economy to consume but on the ability of people to turn their dreams into reality. Economic development is not the ability to buy but the ability to make.

At a corporate level, I can recall an age when Sony was the king of consumer electronics the world over. I first coveted a Walkman, then later a Discman. Our family spent its formative years huddled around a giant (at the time) Sony Trinitron TV, and we were the envy of all my friends for owning one. I looked forward to any trip to Japan for a chance to walk the electronics districts to purchase the coolest gadgets on the planet, and for years I owned a Minidisc player model that you couldn't find in the U.S.

And then the world shifted, and the gadget which subsumed all other gadgets was the computer, and as it shrank in size while growing in computational power, the way we interacted with such devices increasingly became software-based. In that competition, the vector which mattered more than anything became software design, a skill Sony had not mastered.

The company that understood both software and hardware design better than any company in the world happened to be located in Silicon Valley, not Japan, and, after a long Wintel interregnum, caused by a number of business factors covered comprehensively elsewhere, Apple's unique skills found themselves in a universe they could really dent. And dent they did.

Thanks especially to the market opportunity created by the smartphone, which it seized with the iPhone, Apple not only surpassed Sony and moved the balance of power in consumer technology across the Pacific Ocean to American shores but became the most valuable company in the entire world.


Not all information is easily embodied. For example, for a while I puzzled over what I'll call the Din Tai Fung Paradox.

Din Tai Fung is a restaurant chain, and I visited the original outlet in Taipei decades ago with my mother. They're known for their Shanghainese soup dumplings, made with a very delicate wrap that somehow never breaks and dumps its precious cargo of pork broth until the moment at which you prod it with your chopsticks just so. Some will argue whether Din Tai Fung is all that and a bucket of chicken, but at a minimum I find the menu to be satisfying comfort food done consistently, in a setting that is usually cleaner and better kept than your average chain restaurant outlet. You'll find superior deals from a street vendor and more elaborate preparation at a higher-end restaurant, but Din Tai Fung industrializes and scales a Chinese staple. We don't pay enough attention to scale.

The mystery is why Din Tai Fung has opened so few outlets; they've only dropped locations in a handful of cities in about ten countries in the world, and every Din Tai Fung is packed solid from open to close, with the type of ever-present line of humans snaking outside the front door that you so rarely see at any restaurant, let alone a chain.

For a few months, a new outlet was rumored to be opening in San Francisco soon, and among my friends it was as momentous a rumor as if a new Star Wars teaser trailer had dropped. Ultimately, one opened in the Bay Area, but in Santa Clara instead of San Francisco. 

Which leads to a further mystery: why haven't any competing chains opened up to make the same items to fill the market void? I would never open a restaurant, but my family knows I'd make an exception if I were granted the opportunity to open a branch of Din Tai Fung anywhere. I bring it up every family gathering, when there's a lull in the conversation. Forget cryptocurrency, I want to mint me some Din Tai Fung coin.

At every Din Tai Fung I've been to, they have a glass window so you can look into the kitchen to see the soup dumplings being wrapped, always by kitchen staff wearing white uniforms, almost like lab assistants, an impression magnified by the branches that require face masks. It's rumored that the branches in Asia try to hire the tallest, most attractive men to man the soup dumpling assembly line, but it sounds about as true as a lot of things my aunts and uncles tell me, which is to say it's more credible than I'd care to admit.

The hermetic vibe behind the glass is as far from the vendor selling goods from a street cart as possible; some find street food charming, but if you're taking this food to a global audience it needs to be sanitized or sterilized, the same way movies for the Chinese market strip out any storylines that might offend. It's not just the front of the house that's immaculate; the show behind the glass display says they have nothing to hide. It's the equivalent of the blackjack dealer at a casino clapping and turning their hands one way and the other before moving to the next table.

More interesting to me is that Din Tai Fung doesn't even bother to hide the process behind its staple dish. The evidence is on thousands of smartphones by this point; everyone seems to stop to take a photo or video of the assembly line while waiting for their table.

And yet any Chinese food fan knows it's notoriously hard to find a good soup dumpling. In this age where recipes for almost anything are available online for free, why can't you find a good soup dumpling in most major cities in the world? Or, for that matter, a good burrito, or any dish you love? Why are these crystals of imagination so unevenly distributed when the recipes for making them are so broadly available?

The answer, as any home chef who has tried to make a dish from some highfalutin cookbook knows, is simple: you can have the most precise ingredient list and directions and still struggle to make anything approaching what you ate, whether it came from a $400 tasting menu or a mainstream cookbook. Cooking is not nearly as deterministic as the term recipe implies.

Slight variations in environment, weather, ingredients, and cookware can lead to massive differences in the final product. Your oven may say 400 degrees, but the actual temperature inside, at the precise spot where you've placed your baking dish, may be different. That celery you use for your mirepoix today may not be as fresh as the celery you used last week. The air pressure where you're cooking on a particular day may differ from that where the recipe was written, and the bacteria in the air may vary as well. Great chefs appear on Top Chef and flail making dishes they've made hundreds of times in their own restaurant kitchens because every bit of environmental variation matters.

We may glamorize the image of the genius, heroic chef, working magic to create a delicious and beautifully plated dish that a waiter places before us with a balletic flourish, but the true value creation in a restaurant comes from translating that moment of genius into a rote, repeatable cycle. The popularity of sous vide as a cooking technique, even at high-end restaurants, comes down to its repeatable precision and accuracy. Ask any chef and they'll tell you the value of a line cook who can cook dozens of proteins to the right level of doneness every time, given the high cost of fish and meat.

In addition to all those conditionals, much of cooking skill comes down to learned muscle memory and pattern recognition that can only be encoded in a human being through repeated trial and error. I tried to learn some of my favorites among my mother's and grandmother's dishes by writing down recipes they dictated to me, but much was lost in translation. Like so much maternal magic, it could only be learned, truly, at their side, with an apron on, watching, imitating, botching one dish after another, until some of it seeped into my bones.

In a memorable segment from the documentary Jiro Dreams of Sushi, apprentice Daisuke Nakazawa is assigned the job of making egg sushi, or tamago. He believes it will be simple, but again and again, Jiro rejects his work. Nakazawa ends up making over 200 rejected samples until finally, one day, Jiro approves. Nakazawa cries in relief and joy.


Hardware and software are not like cooking. When knowledge and instructions can be encoded in bits, a level of precision is possible that is effectively, for the purposes of this discussion, deterministic. Manufacturing a hundred million iPhones is like food production, but not the type done in high end or home kitchens. Instead, it is more like producing a hundred million Oreos.

There is one country in the world where that many iPhones can be manufactured for the cost that allows Apple to reap its insane profits: China. I can't think of any other country, not India or Mexico or the United States, nor all of Europe together, that can make that many iPhones for that price to meet the market demand year after year. Some countries have the labor but not the skills, others have the skills but not enough labor, and others just can't do the work as cheaply as China can.

Recall that the potential of an economy can be judged by the complexity of its exports. Based on that, it's difficult to imagine an economy outside of the U.S. with more potential than China. Some of the most complex products in the world, and the iPhone deserves to be on that list, are made in China.

I've backed many a Kickstarter hardware project, and without fail, every one has been made in China, usually Shenzhen. Inevitably, when the products are delayed, the project's creators will send an update with some photos of a few of them in China, at some plant, examining some part that will get the project back on track, or with their arms around a few Chinese plant managers giving a thumbs up sign.

Kickstarter often feels like an industrial design, software design, and marketing layer grafted on top of the manufacturing capabilities of Shenzhen. It is an early warning indicator of China's economic potential, and of the gap that remains in realizing it.

Here is another. Foxconn assembles iPhones for Apple, and for its efforts it makes anywhere from $8 to $30 per iPhone, depending on which article you believe. Whatever the exact figure, we know it is in that neighborhood.

Apple, in contrast, makes hundreds of dollars per iPhone. They earn that premium, many multiples of what Foxconn earns, by virtue of being the ones who designed every aspect of the phone, from the software to the hardware. China can supply labor and even sometimes components, but the crystal of imagination that is the iPhone, perhaps the most valuable such crystal in the history of the world, comes almost entirely from the employees of Apple. Foxconn is one link in a long supply chain, and not the one made of gold.

However, even the capability of making an iPhone for less than the cost of a lunch in San Francisco is a skill, one China has demonstrated again and again. Many a country wishes it could say the same. Were China ever to gain some of the software and industrial design skills of a company like Apple, it would be even more of an economic powerhouse than it is now.

That's a massive conditional. Such skills can't be learned by mere handwaving, or even by sheer industriousness. After all, Sony would have returned to its former glory, and Samsung would be even more dominant globally, if software design skills were so easily learned.

Someday someone will write a history of software design and how Apple, more than any other technology company, came to build up that capability. I'll be among its most eager readers, because it's an untold story that holds the key to one of the greatest value creation stories in the history of business.

...our world is one in which knowledge and knowhow are “heavier” than the atoms we use to embody their practical uses. Information can be moved around easily in the products that contain it, whether these are objects, books, or webpages, but knowledge and knowhow are trapped in the bodies of people and the networks that these people form. Knowledge and knowhow are so “heavy” that when it comes to a simple product such as a cellphone battery, it is infinitely easier to bring the lithium atoms that lie dormant in the Atacama Desert to Korea than to bring the knowledge of lithium batteries that resides in Korean scientists to the bodies of the miners who populate the Atacaman cities of Antofagasta and Calama. Our world is marked by great international differences in countries’ ability to crystallize imagination. These differences emerge because countries differ in the knowledge and knowhow that are embodied in their populations, and because accumulating knowledge and knowhow in people is difficult. But why is it hard for us to accumulate the knowledge and knowhow we need to transform our dreams into reality?

If knowledge were so easy to transfer, I'd be a three-star Michelin Chef because someone gifted me a copy of the Eleven Madison Park cookbook.

Getting knowledge inside a human’s nervous system is not easy because learning is both experiential and social. To say that learning is social means that people learn from people: children learn from their parents and employees learn from their coworkers (I hope). The social nature of learning makes the accumulation of knowledge and knowhow geographically biased. People learn from people, and it is easier for people to learn from others who are experienced in the tasks they want to learn than from people with no relevant experience in that task. For instance, it is difficult to become an air traffic controller without learning the trade from other air traffic controllers, just as it is difficult to become a surgeon without having ever been an intern or a resident at a hospital. By the same token, it is hard to accumulate the knowhow needed to manufacture rubber tires or an electric circuit without interacting with people who have made tires or circuits. Ultimately, the experiential and social nature of learning not only limits the knowledge and knowhow that individuals can achieve but also biases the accumulation of knowledge and knowhow toward what is already available in the places where these individuals reside. This implies that the accumulation of knowledge and knowhow is geographically biased.

What governs the information production capacity of a country? Hidalgo coins two terms to analyze this problem. One is the personbyte.

We can simplify this discussion by defining the maximum amount of knowledge and knowhow that a human nervous system can accumulate as a fundamental unit of measurement. We call this unit a personbyte, and define it as the maximum knowledge and knowhow carrying capacity of a human.

The other term is firmbyte.

The limited proliferation of megafactories like the Rouge implies that there must be mechanisms that limit the size of the networks we call firms and make it preferable to disaggregate production into networks of firms. This also suggests the existence of a second quantization limit, which we will call the firmbyte. It is analogous to the personbyte, but instead of requiring the distribution of knowledge and knowhow among people, it requires them to be distributed among a network of firms.

Hidalgo then delves a bit into Coase's transaction cost theory of the firm. Traditionally, Coase's theory is used as a way to explain why firms are fundamentally limited in their size, the idea being that at some size, external transactions become cheaper than internal coordination costs and so it's more efficient to just transact externally rather than produce internally.

I'm not interested in examining that topic now. Instead, let's assume that firms all do have some asymptote in size beyond which Coase's anchor becomes too heavy. The interesting implication is that given the existence of a ceiling on the size of the firmbyte, if some chunk of knowledge exceeds that capacity then it can only be carried by a network of firms.
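The argument reduces to toy arithmetic: a product whose required knowhow exceeds one personbyte needs a team, and one that exceeds one firmbyte can only be carried by a network of firms. The capacities and product sizes below are invented units for illustration, not figures from Hidalgo's book.

```python
# Toy arithmetic for Hidalgo's two quantization limits. The unit sizes
# and per-product knowhow figures are made up; only the ceiling logic
# matters: exceed a personbyte and you need a team, exceed a firmbyte
# and you need a network of firms.
import math

PERSONBYTE = 1.0   # max knowhow one person can carry (by definition)
FIRMBYTE = 50.0    # assumed max knowhow one firm can coordinate

def carriers_needed(product_knowhow):
    """Return the minimum (people, firms) needed to embody a product's knowhow."""
    people = math.ceil(product_knowhow / PERSONBYTE)
    firms = math.ceil(product_knowhow / FIRMBYTE)
    return people, firms

for name, knowhow in [("artisanal soup dumpling", 0.8),
                      ("Kickstarter gadget", 12.0),
                      ("smartphone", 400.0)]:
    people, firms = carriers_needed(knowhow)
    print(f"{name}: {people} person(s) across {firms} firm(s)")
```

A sub-personbyte product fits in one head, a mid-sized one needs a team inside one firm, and anything past the firmbyte ceiling forces the supply-chain structure the iPhone exemplifies.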

It's long been said that the center of the technology universe shifted from Boston's Route 128 to Silicon Valley because California banned non-competes (here's one study). Hidalgo's theory of the finite knowledge-carrying capacity of a network of humans and firms explains how this works. The free movement of employees in Silicon Valley allows the region's knowledge-carrying capacity to increase at the expense of any single firm's benefit. Per Coase, the cost of information movement in Silicon Valley, as embodied by an employee carrying a personbyte from one firm to the next, is lower than it was in the Route 128 corridor.

Let's telescope back out to the country level. What applies at the regional or industry level holds at the country level. A country's knowledge carrying capacity, and thus its information production power, is influenced in part by the size of networks it can form.

In his 1995 book Trust, he [Francis Fukuyama] argues that the ability of a society to form large networks is largely a reflection of that society’s level of trust. Fukuyama makes a strong distinction between what he calls “familial” societies, like those of southern Europe and Latin America, and “high-trust” societies, like those of Germany, the United States, and Japan.
Familial societies are societies where people don’t trust strangers but do trust deeply the individuals in their own families (the Italian Mafia being a cartoon example of a familial society). In familial societies family networks are the dominant form of social organization where economic activity is embedded, and are therefore societies where businesses are more likely to be ventures among relatives. By contrast, in high-trust societies people don’t have a strong preference for trusting their kin and are more likely to develop firms that are professionally run. Familial societies and high-trust societies differ not only in the composition of the networks they form—as in kin and non-kin—but also in the size of the networks they can form. This is because the professionally run businesses that evolve in high-trust societies are more likely to result in networks of all sizes, including large ones. In contrast, familial societies are characterized by a large number of small businesses and a few dominant families controlling a few large conglomerates.
Yet, as we have argued before, the size of networks matters, since it helps determine the economic activities that take place in a location. Larger networks are needed to produce products of higher complexity and, in turn, for societies to achieve higher levels of prosperity. So according to Fukuyama, the presence of industries of different sizes indicates the presence of trust. In his own words: “Industrial structure tells an intriguing story about a country’s culture. Societies that have very strong families but relatively weak bonds of trust among people unrelated to one another will tend to be dominated by small, family-owned and managed business. On the other hand, countries that have vigorous private nonprofit organizations like schools, hospitals, churches, and charities, are also likely to develop strong private economic institutions that go beyond the family.”

In Tyler Cowen's conversation with economist Luigi Zingales, the latter hints at the limitations of familial economies in humorous fashion:

One friend of mine was saying that the demise of the Italian firm family structure is the demise of the Italian family. In essence, when you used to have seven kids, one out of seven in the family was smart. You could find him. You could transfer the business within the family with a little bit of meritocracy and selection.
When you’re down to one or two kids, the chance that one is an idiot is pretty large. The result is that you can’t really transfer the business within the family. The biggest problem of Italy is actually fertility, in my view, because we don’t have enough kids. If you don’t have enough kids, you don’t have enough people to transfer. You don’t have enough young people to be dynamic.
The Italian culture has a lot of defects, but the entrepreneurship culture was there, has been there, and it still is there, but we don’t have enough young people.

Low fertility's impact on economies is an issue globally, for example in Japan, but low trust outside of family is an even broader constraint on the knowledge carrying capacity of an economy. If you can't form as large a firm as another country, you can't compete in some businesses and the information producing capability of your economy has a lower ceiling.

If you run a company, you're no doubt familiar with the efficiency gains that arise when different employees and departments operate with high trust. Links form easily given an assumption of low risk, and knowledge moves more quickly, fluidly. Networks then facilitate trust in a virtuous cycle, an example being the military as an integrating institution in a multi-ethnic society.

Trust based on family has its own advantages, but for now I'm focused on an economy's ceiling, and networks that throw off the shackles of family-based firms can scale more. China not only has the population to supply a workforce that can assemble 100 million iPhones in a year, it has an economy that has moved beyond any roots in family-based trust.

Hidalgo's theory also explains why we don't see geographic leakage in industry know-how. Why aren't there Silicon Valleys everywhere?

The personbyte theory can also help us explain why large chunks of knowledge and knowhow are hard to accumulate and transfer, and why knowledge and knowhow are organized in the hierarchical pattern that is expressed in the nestedness of the industry-location data. This is because large chunks of knowledge and knowhow need large networks of people to be embodied in, and transferring or duplicating large networks is not as easy as transferring a small group of people. As a result, industry-location networks are nested, and countries move to products that are close by in the product space.

When the knowledge required to create something like an iPhone or a Hollywood film requires the interaction of multiple people, with all their accumulated knowledge, seizing it for yourself isn't as easy as poaching one employee or sprinting off with a burning branch to give fire to mankind like Prometheus. Thus we understand why, beyond its weather, LA has such a grip on filmmaking for the global market, why other handset manufacturers can't just reverse engineer an iPhone, and why, despite having hundreds of millions of users of iMessage, Apple isn't a credible threat in social networking.

When I study the Chinese tech market, I see an incredibly high ceiling. In fact, the Chinese consumer tech market is more dynamic now than its counterpart in Silicon Valley. Once, China was belittled for simply copying all the U.S. tech companies. It's true, there is a Chinese Bizarro instance of every successful U.S. tech company: a Chinese Google, Facebook, Amazon, Twitter, Instagram, YouTube, and so on.

Thanks to that complex interaction of culture and technology, however, China now creates companies with no real American equivalents, and that extends beyond WeChat. China also has many more dense cities than America, and density creates its own unique consumer technology opportunities. You'll run out of fingers and toes before you come to a Chinese city as small as New York City, and that matters when so many social products rely on metropolitan density as dry kindling.

The competition between tech companies in the U.S. draws scandalized chatter from the peanut gallery, but the pace at which something like Snapchat Stories was copied in the U.S. would be seen as laughably slow in China. Not only are features of competitors routinely copied within a week or two in China, employees are poached all the time in what is closer to a true approximation of a free labor market than even Silicon Valley. Knowledge moves quickly, freely.

Three things, in my observation, hold Chinese companies back from capturing more share in the international market, outside Chinese borders. Two are related: an internationally appealing software design aesthetic, and its industrial design counterpart.

It's true that most people who find Chinese software UIs busy and crowded can't read Chinese, and thus miss their localized appeal to the Chinese market. As eye-tracking studies have shown (example), Chinese users scan pages differently, and why shouldn't they, considering the fundamental differences between an alphabetic language like English and a logographic one like Chinese?

Still, most of the international market can't read Chinese. In my past work with UI designers in China, I found it took more prompting to arrive at something broadly intuitive for, say, an American market.

The same goes for industrial design, where, akin to the denser informational aesthetic of Chinese software, a somewhat more maximalist impulse takes hold. It's still quite common to walk into an Asian electronics superstore and see display signage listing dozens of bullet points of features to sell a product. Contrast that with the almost non-existent signage in an Apple store, the extreme opposite.

A more tangible example is something like the user interface of everyone's favorite cooking gadget, the Instant Pot. I received one as a gift last year, and no doubt, I think it's a real value at $80 or so for the base model. For how harried we all feel, a pressure cooker is way more useful a kitchen gadget than most.

However, this is the instrument panel on the front of the Instant Pot.

In practice, it's even more confusing than it appears at first glance. I won't delve into it here, but with a simple design pass, the entire UI could be made much less intimidating and much more intuitive. The functionality of a pressure cooker could be exposed through far simpler instrumentation.

These two skill gaps in software and industrial design leave open a continued Kickstarter arbitrage opportunity: slap more internationally appealing software, industrial design, and marketing on top of Shenzhen's manufacturing capabilities.

The last thing holding back more Chinese startups, in my experience, is a shortage in the professional management class. I know, I know, MBAs get a bad rap in the domestic market, but having CEOs with engineering backgrounds at so many Chinese tech companies comes with its own drawbacks.

This management gap may be related to the style of org structure and management which others have mentioned to me as less conducive to certain types of innovation, though it's harder for me to assess without having worked inside a Chinese company.

None of this needs to matter, since the Chinese market is so massive. Chinese startups can succeed wildly without ever making a peep outside their home territory. Besides, how a design aesthetic and process seeps into a country's soul remains a mystery to me, but my guess is it's about as slow-moving as trying to produce a high-quality soup dumpling in a new market.

Still, I love to muse on the potential of China. In fact, there is one Chinese company that best exemplifies the potential of the country's tech market on the international stage. Last summer, a friend of mine who had worked at this company heard I was in the market for a drone and referred me to a friend who was selling an extra, lightly used Mavic Pro which he'd purchased after he thought he'd lost his original.

I don't know the first thing about flying drones, but it took me all of fifteen minutes or so to get the thing up and flying around in the air, capturing 4K video. It is a fantastic feat of engineering, probably still the single drone I'd recommend to anyone looking to get into drone photography (though I recommend getting a bundle with some extra batteries and a carrying case).

DJI had a few advantages in surging to its undisputed leadership position in the global drone market. First, this is a product category in which making a product that actually works well is a harder engineering task than in most; many drones just don't fly that well. Being an engineering-led company is a strength here, and as long as the industrial design is optimized for flight, it doesn't really matter if your product isn't the sleekest. You won't care what it looks like when it's several hundred feet up in the air.

Second, from a software design perspective, drone UI design can piggyback on flight UI templates that have been worked out over the years. One reason I was up and running so quickly with my Mavic Pro is that the flight sticks imitate video game flight controls. The UI isn't quite as simple as I'd like, but I was fluent much more quickly than the hefty page count of the instruction manual implied.

Estimates of DJI's market share vary, but they are all well north of 50%, and most of its competitors have either left the market entirely or are struggling to stay aloft, so to speak. Here is a vertically integrated Chinese company that most definitely makes more per unit than the cheap-SF-lunch margin Foxconn earns on each iPhone it assembles.

Now, making drones and building smartphones or writing apps are not the same skills. Drones, as exciting as they are, still aren't the type of thing I'd recommend except to photography enthusiasts. And China is a long way from dominating the consumer electronics market internationally with a massive portfolio of products from domestic, vertically integrated companies.

But the ceiling at least exists; it's not theoretical. That's more than any other country outside the U.S. can say, and whether China can reach that ceiling is one of the questions that will determine the relative economic power of China and the United States in this century.


That China can export drones more easily than it can import, say, the software and industrial design know-how of a company like Apple is, at a higher level, a fundamental question of how we pass along knowledge of any sort. Why are we not better at transferring know-how to industrial workers who are out of a job? Why hasn't the internet produced a global leveling of industrial know-how at the country level?

Hidalgo notes:

At a finer scale, economies still lack the intimate connection between knowhow and information that is embodied in DNA and which allows biological organisms to pack knowhow so tightly. A book on engineering, art, or music can help develop an engineer, artist, or musician, but it can hardly do so with the elegance, grace, and efficiency with which a giraffe embryo unpacks its DNA to build a giraffe. The ability of economies to pack and unpack knowhow by physically embodying instructions and templates as written information is much more limited than the equivalent ability in biological organisms.

We are nowhere near our maximum throughput for passing on knowledge to our fellow man, let alone across the membranes between companies and economies. In The Matrix, with a few seconds of fluttering eyelids, Keanu Reeves downloads an entire martial art into his brain.

Give a man a kung fu, you make him Neo. Teach a man to kung fu, you make him John Wick.

That is the dream. Ask any parent in the midst of trying to get a three-year-old to eat dinner without throwing half of it on the ground, and they'll nod in agreement. What is our version of nature's DNA-and-cell school of knowledge compression and decompression?

One of the reality TV shows I wish existed would pit a variety of masters in their field against each other, each trying to take absolute novices from a standing start as far as possible in a finite period of time. Instead of Top Chef, in which the contestants are already successful chefs, I want three master chefs each to train a handful of complete cooking dunces over several months, with the winning student and their teacher sharing the pot.

Each season could feature a different skill. In another, maybe the world's top three piano teachers have to train people who've never played the piano in their lives to sight-read Chopin. Bill Belichick and Nick Saban coach two youth league football teams to see which wins a season finale scrimmage. I'm sure some of you will write to tell me some version of this show already exists, and I've seen some that come close, but almost all of them spend far too little time on the actual instruction methodology and process, and that's where all the mystery and interest lies.

In future posts, I'll delve into some of the limitations I've observed in how we pass information among people, companies, and economies, and from one generation to the next. For now, I recommend picking up Hidalgo's book, and I hope to hear from you about some of the ways you've found to help grow information around you in more efficient ways.

My first podcast appearance

A few months ago David Perell emailed and asked if I'd like to be on his podcast, The North Star. He mentioned some of the other people he'd had on, so many of whom I admire, and I thought he had emailed the wrong person. But no, he had done his research and knew a lot about my preoccupations and obsessions.

David passed through San Francisco before this past holiday break on his way to Australia, and we chatted at my apartment for a night, much of which is captured in this podcast. I've had a small handful of invitations to be on podcasts, but it was always trickier when I was working given the varied concerns of corporate PR, and I always wanted to do my first podcast in person rather than remotely.

If you enjoy my work here at my site, perhaps you'd be game to sample me in another medium? We cover some of the topics I've covered here before, but we spend a bit more time on my personal history, anchoring those topics along my professional timeline. I had a lot of fun and hope to perhaps drop in on another podcast or two in 2018.

Beware the lessons of growing up Galapagos

In All the old rules about movie stardom are broken, part of Slate's 2017 Movie Club year end review, Amy Nicholson writes:

Lugging my $10 masterpiece back to the hotel, I thought about how most of the famous faces who represent the movies have been dead for 50 years. Marilyn’s smile sells shot glasses, clocks, calendars, posters, and shirts in stores from Sunset Boulevard to Buenos Aires, Tijuana to Taiwan. What modern actor could earn a seat at her table? The biggest stars of my lifetime—Julia Roberts, Brad Pitt, Nicolas Cage, Sandra Bullock—never graduated past magazine covers to souvenir magnets.

If Hollywood played by its old rules, I, Tonya’s Margot Robbie and Call Me by Your Name’s Armie Hammer should be huge stars. They’re funny, smart, self-aware, charismatic, and freakishly attractive. Yet, they feel like underdogs, and I’m trying to figure out why. Robbie has made intelligent choices. Her scene-stealing introduction as Leonardo DiCaprio’s trophy wife in Wolf of Wall Street. Her classic romantic caper with Will Smith in the underseen trifle, Focus. She even survived Suicide Squad with her dignity intact. In I, Tonya, she can’t outskate being miscast as Tonya Harding, but bless her heart for trying. As for Hammer, Kameron, your review of Call Me by Your Name called him, “royally handsome,” which seems right. He’s as ridiculously perfect as a cartoon prince, and I loved how Luca Guadagnino made a joke of how outlandish the 6-foot-5 blond looks in the Italian countryside. Whether he’s unfurling himself from a tiny Fiat or stopping conversation with his gangly dance moves, he can’t blend in—and good on him and Guadagnino for embracing it.

But even if Robbie and Hammer each claim an Oscar nomination this year, I suspect they’ll stay stalled out in this strange time when great actors are simply supporting players in a superhero franchise. I’m fascinated by Robbie and Hammer because they’re like fossils of some alpha carnivore that should have thrived. Does anyone else feel like the tectonic plates under Hollywood have shifted and we’re now staring at the evidence that everything we know is extinct? It’s not just that the old rules have changed—no new rules have replaced them. No one seems to know what works.

Nicholson goes on to cite Will Smith, who once had huge hits seemingly with every movie he made and who is now on a long cold streak.

I'm wary of all conclusions drawn about media in the scarcity age, including the idea that people went to see movies because of movie stars. It's not that Will Smith isn't charismatic. He is. But I suspect Will Smith was in a lot of hits in the age of scarcity in large part because there weren't a lot of other entertainment options vying for people's attention when Independence Day or something of its ilk came out, like clockwork, to launch the summer blockbuster season.

The same goes for the general idea that any one star was ever the chief engine of a film's box office. If it was never actually true that people went to a movie just to see one star, we can stop holding the modern generation of movie stars to an impossible standard.

The same mistake, I think, is being made about declining NFL ratings. Owners blame players kneeling for the national anthem, but here's my theory: in an age of infinite content, NFL games measure up poorly as entertainment, especially for a generation that grew up with smartphones and no cable TV and thus little exposure to American football. If I weren't in two fantasy football leagues with friends and coworkers, I would not have watched a single game this season, and that's a Leftovers-scale flash-forward twist for a kid who once recorded the Superbowl Shuffle to cassette tape off a local radio broadcast just to practice the lyrics.

Disregard any historical romantic notions and examine the typical NFL football game: it is mostly dead time (watch a cut-down version of a game using Sunday Ticket and you'll see that only about 30 minutes of a 3 to 3.5 hour broadcast involves actual game action). The majority of plays are of only incremental consequence, the skill and strategy on display are opaque to most viewers, and the action is explained poorly by a bunch of middle-aged white men who know little about how to sell the romance of the game to a football neophyte. Several times each week, you might see a player hit so hard that he lies on the ground motionless, or with his hands quivering, foreshadowing a lifetime of pain, memory loss, and depression brought on by irreversible brain damage. If you pitched that show concept on its structural merits alone, you'd be laughed out of the room in Hollywood.

Cultural products must regenerate themselves for each successive age and generation or risk becoming what opera and the symphony are today. I had season tickets to the LA Phil when I lived in Los Angeles, and one year I brought a friend to the season opener. A reporter actually stopped us on the way out to ask why we were there, so mysterious was the sight of two attendees not old enough to have been contemporaries of that night's composer (Mahler).

Yes, football has been around for decades, but most of those passed in an age of entertainment scarcity. During that time the NFL capitalized on being the only game in town on Sundays, capturing an audience that passed the game and its liturgies on to their children. Football resembles a religion or any other cultural social network; humans being tribal creatures, we find products that satisfy that need, and what are professional sports leagues but alliances of clans who band together for the network effects of ritual tribal warfare?

Because of its long incubation in an era of low entertainment competition, the NFL built up massive distribution power and enormous financial coffers. That it is a cultural product transmitted by one generation to the next through multiple channels means it's not entirely fair to analyze it independent of its history; cultural products have some path dependence.

Nevertheless, even if you grant it all its tailwinds, I don't trust a bunch of rich old white male owners who grew up in such favorable monopolistic conditions to both understand and adapt in time to rescue the NFL from continued decline in cultural relevance. They are like tortoises who grew up in the Galapagos Islands, shielded on all sides from predators by the ocean, who one day see the moat dry up, connecting them all of a sudden to other continents where an infinite variety of fast-moving predators dwell. I'm not sure the average NFL owner could unlock an iPhone X, let alone understand the way its product moves through modern cultural highways.

Other major sports leagues are in the same boat, though most aren't as oblivious as the NFL. The NBA has an open-minded commissioner in Adam Silver and some younger owners who made their money in technology and have at least one foot in modernity. As a sport, the NBA has some structural advantages (for example, the league has fewer players, their faces are visible on the court, and many are active on social media in an authentic way), but the league also helps itself by allowing highlights of games to be clipped and shared on social media and by encouraging its players to cultivate public personas that act as additional narrative fodder for audiences.

I remember sitting in a meeting with some NFL representatives as they outlined a long list of their restrictions for how their televised games could be remixed and shared by fans on social media. Basically, they wanted almost none of it and would pursue take-downs through all the major social media companies.

Make no mistake, one possible successful strategy in this age of abundant media is to double down on scarcity. It's often the optimal strategy for extracting maximum revenue from a motivated customer segment. Taylor Swift and other such unicorns can release an album only on CD for a window, maximizing financial return from superfans before putting it on streaming services, straight from the old media windowing playbook.

However, you'd better be damn sure your product is unique and compelling to dial up that tactic because the far greater risk in the age of abundance is that you put up walls around your content and set up a bouncer at the door and no one shows up because there are dozens of free clubs all over town with no cover charge.

Sports have long had one massive advantage in production costs over scripted entertainment like TV and movies: their narrative engine is a random number generator (RNG). If you want to produce the next hot streaming series, you have to pay millions of dollars to showrunners and writers to generate some narrative.

In sports, the narrative is embedded in the rules of the game. Some players will compete, and someone will win. It's the same script replayed every night, but the RNG produces infinite variations that then spin off infinite variations of the same narratives for why a game turned out one way or the other, just as someone has to make up a story every day to explain why the stock market went up or down. At last check, RNG hadn't found representation with CAA or WME or UTA and thus its services remain free.

Unfortunately for major sports, this advantage is now a weakness as sports narrative is much more brittle than its entertainment counterparts. Narrative is a hedge against disaggregation and unbundling, and that is a critical moat in this age of social media and the internet.

One way to measure entertainment value on this dimension is to ask whether you can read a summary of a narrative and enjoy it almost as much as consuming the original in its native medium. My classic test of this is for movies and TV shows. If you can enjoy a movie just as much by reading the Wikipedia plot summary as by watching it, or if you can enjoy a TV show almost as much by reading a recap as by bingeing it on your sofa, then it wasn't really that great a movie or TV show to begin with.

Instead of watching the entire last season of Game of Thrones when it returns in 2019, I offer you the alternative of just reading textual recaps to your heart's content online. Is that as enticing an alternative as actually watching all six or seven episodes? You'll ingest all the plot details either way, but for the vast majority of fans this would be a gut-wrenching downgrade.

My other test of narrative value is a variant of the previous compression test. Can you enjoy something just as much by just watching a tiny fraction of the best moments? If so, the narrative is brittle. If you can watch just the last scene of a movie and get most or all the pleasure of watching the whole thing, the narrative didn't earn your company for the journey.

Much more of sports fails this second test than many sports fans realize. I can watch highlights of most games on ESPN or HouseofHighlights on Instagram and extract most of the entertainment marrow and cultural capital of knowing what happened without having to sit through three hours of mostly commercials and dead time. That a game can be unbundled so easily into individual plays and retain most of its value to me might be seen as a good thing in the age of social media, but it's not ideal for the sports leagues if those excerpts are mostly viewed outside paywalls.

This is the bind for major sports leagues. On the one hand, you can try to keep all your content inside the paywall. On the other hand, doing so probably means you continue hemorrhaging cultural share. This is the eternal dilemma for all media companies in the age of infinite content.

Two nights ago, I watched a clip of multiple angles of Tua Tagovailoa ripping a laser beam of a pass to win the National Championship for Alabama. I didn't watch it live, or on ESPN. I watched it on HouseofHighlights on Instagram, where, instead of some anchor on SportsCenter basically telling me what I could see with my own eyes, the video spun around after a moment to reveal the stunned face of a fan who had just witnessed the pass live. Reaction videos are a new sort of genre in which a person in the video acts as the emoji reaction caption from within the video itself, speaking a visual language that most young people of the YouTube/Snapchat generation are already fluent in but which traditional media doesn't notice, let alone grok.

This disaggregation problem extends to ESPN, still the 400-pound gorilla in the sports media jungle (reminder: there are no 800-pound gorillas). The network suspended Jemele Hill for tweeting something negative about Trump, using the same playbook as the NFL, which threatened players with suspension for kneeling during the national anthem. Both believed these actions on the part of their talent were harming the value of their product.

The irony is that if both ESPN and the NFL had let these things play out naturally, I suspect at worst it would have been neutral, and at best it might have increased their ratings. For the NFL, the ties to modern movements for social justice might have kept the league and its games in the national conversation and made it tangentially relevant to the next generation. The most culturally relevant bit of SportsCenter today may just be the SportsCenter Top 10, as athletes who make a stunning play routinely tell reporters they are excited to see if they'll be featured on that evening's roundup of the top 10 plays.

Unfortunately, many athletes already see an appearance on HouseofHighlights as the social media alternative to appearing in the SportsCenter Top 10. If you follow top athletes on Instagram, you can see which of them like posts on HouseofHighlights. LeBron James routinely does, as do many other stars. Since many of those athletes follow each other on Instagram, that feature produces common knowledge. It's not just that Donovan Mitchell knows that LeBron James liked a HouseofHighlights clip of him dunking; it's that Mitchell knows that James knows that Mitchell knows, and so on.

For ESPN, hewing to the idea that only dispassionately presented highlights and respectfully broadcast games are key to its value is a risky bet. Not that it hasn't generated a ton of wealth from doing so, and not that TV broadcast rights to major sports aren't still extremely valuable, but those are much more fixed commodities, available to the highest bidder, and their value is close to its peak, if not past it. This can't be a complete surprise within the four walls of their corporate offices given how much salary and air time they devote to blowhards like Stephen A. Smith and Skip Bayless, but their hesitance to lean into cultivating more original voices will haunt them in the long run. The average caption on an Instagram clip of a major sports league highlight is about twice as likely to read as fresh and contextually humorous to a young person as any amount of generic sportscaster hooey spouted on ESPN.

This vulnerability extends to their online presences. I still visit ESPN on the web and on my mobile devices to get my sports news roundup each day, but sometime in the past few years, the designs of all these presences shifted dramatically. Gone is the hierarchical layout with different-sized headlines and groupings of stories. In its place is a long center gutter of updates from a variety of sports leagues, in modern news feed style.

One can see why they went this way: it made ESPN more current, allowing them to push the latest stories to the top of the page to compete with the more current updates people get from Twitter and other social media sites. On a smartphone in particular, with its limited screen space, it's not easy to block content into multiple sections on one page.

However, the moment you copy someone else's design, you've shifted the terms of the debate in their favor. In a previous era, ESPN's visually distinct information hierarchy set itself up as the authority on what stories mattered. In the new design, what matters skews towards what's the last thing to happen. It's all flow.

To some extent, in our hyper-personalized world, the era of any media entity deciding which stories matter more than others was always going to decline from what might now be seen as a temporary heyday. I care more about Chicago sports teams and Stanford given my background, so having those elements given more prominence was a notable improvement in the site's newly personalized design. Still, what is lost is that sense of authority, that ESPN sets the terms of the debate. Humans remain social animals, and we take cues about what matters from each other, including from our media entities. ESPN has ceded more and more of the work of determining our sports Schelling points to other entities.

While this may sound grim, the major sports, their respective leagues, and ESPN all have a fairly solid near-term window. For one thing, sports is still the highest-volume, highest-popularity real-time entertainment. As such, it remains a linchpin of many entertainment packages, including cable bundles, and so we'll see various media companies pouring money into it until it can't hold things together anymore. We may even see prices bid higher for some time, as often happens with assets being milked during their last, fleeting window of cultural scarcity.

A second and less discussed factor is that most young tech CEOs don't know the first thing about sports. They, like a sizable part of Silicon Valley (the group that tweets #sportsball whenever Twitter is inundated with reactions to some notable sports event), grew up with other interests. Without that intuitive sense of sports' place in culture, they aren't as attuned to the opportunities in the category.

This provides the leagues opportunities to swindle the tech companies for a while longer, an example being the rights to stream Thursday Night Football, which a series of tech companies from Yahoo to Twitter to Amazon have (probably) overpaid for the last few seasons. As Patrick Stewart said in L.A. Story, "You think with a statement like this you can have the duck?!" The chef says, "He can have the chicken!" Thursday Night Football is zee chicken of the NFL broadcast portfolio, but the restaurant is still called L'Idiot.

This happened to tech companies when they tried to add film and television to their portfolios, too. They routinely paid fortunes for the rights to back seasons of shows that were no longer relevant. When I was at Hulu, I could only shake my head when I heard the asking price for all the back seasons of Seinfeld. Years later, long after I'd left, Hulu paid multiples of that. The cultural decay curve for content in this age of abundance is accelerating by the day, and there is no equivalent of botox to ward it off.

Given market feedback, however, such temporary arbitrage never lasts long. The days of the NFL strong-arming its partners to overpay for the most meager of rights are coming to an end. The thing about setting up a moat around your content is that the moment your cultural value crosses its peak, the moat becomes a set of prison bars. The flywheel loop can turn just as furiously counter-clockwise as clockwise.

And one of these days, a tech company will look at ESPN's homepage and notice how much it looks like its own. If it just put a bit more structure around it, could it satisfy that sports itch for a captive audience that already checks in multiple times a day?

It seems implausible today, but look at what happened in film and television. For the longest time, many tech companies were guilty of exactly what Hollywood accused them of: not understanding how film and television are made and marketed, how that industry creates demand for its product. Like all engineering-led cultures, Silicon Valley suspected Hollywood of not being data-driven enough, and many suspected that upstream process failures were responsible for failed releases. Half a film's budget is spent on prints and marketing? What a waste! (Engineers despise marketing.)

Forget that most of these people in tech had never been on a film set, sat inside a writers' room, or seen the volumes of market research done before any film's release. It's all just content; let's just crowdsource some alternatives. Or, if we produce some premium content, what's needed is earlier crowdsourced feedback. Hundreds of millions of dollars were wasted before Silicon Valley realized it didn't know what it was doing.

Fortunately, all it cost them was some money and some time, something most of the incumbents have a surplus of. Now they write checks to creatives in Hollywood and leave them alone to do what they do very well already. Machine learning improves with data even when the algorithms are off, and so do most tech companies.

I am a lifelong lover of media in all its forms, and sports in particular was central to how I assimilated into America. It has long served as cultural connective tissue between me and friends, family, and strangers. But if I had an easy way to short all the major sports leagues over the next decade, I would. Nostalgia serves many purposes, but its most dangerous one is wrapping us in a memory of a time when we were still relevant.

Drawing invisible boundaries in conversational interfaces

Anyone who has worked on textual conversation interfaces, like chatbots, will tell you that the challenge is dealing with the long tail of crazy things people will type. People love to abuse chatbots. Something about text-based conversation UIs invites Turing tests. Every game player remembers the moment they first abandoned their assigned mission in Grand Theft Auto to start driving around the city crashing into cars and running over pedestrians, just to exercise their freedom and explore what happens when they escape the plot mechanic tree.

However, this type of user roaming or trolling happens much less with voice interfaces. Sure, the first time a user tries Siri or Alexa or whatever Google's voice assistant is called (it really needs a name, IMO, to avoid inheriting everything the word "Google" stands for), they may ask something ridiculous or snarky. However, that type of rogue input tends to trail off quickly, whereas it doesn't in textual conversation UIs.

I suspect some form of the uncanny valley is at work, and I blame the affordances of text interfaces. Most text conversation UIs are visually indistinguishable from a messaging UI used to communicate primarily with other human beings, so they invite the user to probe their intelligence boundaries. Unfortunately, the seamless polish of the UI isn't matched by the capabilities of today's chatbots, most of which are just dumb trees.
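"Dumb trees" can be made concrete with a minimal sketch, in Python, of how many text chatbots are wired under the hood: keyword-matched branches plus a generic fallback. Every intent, keyword, and reply below is hypothetical, my own invention for illustration; the point is that the entire long tail of off-script input collapses into a single fallback branch, which is exactly the seam users love to probe.

```python
# A minimal sketch of a "dumb tree" chatbot: hard-coded keyword
# branches plus one generic fallback. All intents and replies here
# are hypothetical, purely for illustration.

RESPONSE_TREE = {
    "hours":  "We're open 9am-5pm, Monday through Friday.",
    "refund": "To request a refund, reply with your order number.",
    "agent":  "Connecting you to a support agent...",
}

FALLBACK = "Sorry, I didn't understand. Try asking about 'hours' or 'refund'."

def reply(user_input: str) -> str:
    text = user_input.lower()
    # First keyword that appears in the input wins.
    for keyword, response in RESPONSE_TREE.items():
        if keyword in text:
            return response
    # The long tail -- jokes, Turing tests, trolling -- all collapses
    # into this one branch.
    return FALLBACK

print(reply("What are your hours?"))  # hits the "hours" branch
print(reply("Are you sentient?"))     # falls through to FALLBACK
```

A real system would swap the substring matching for an intent classifier, but the tree-plus-fallback shape, and its brittleness, stays the same.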

On the other hand, none of the voice assistants to date comes close to replicating the natural way a human speaks. These voice assistants may have a more human timbre, but the stiff elocution, the mispronunciations, and the frequent mistakes in comprehension all quickly inform the user that what they are dealing with is something of quite limited intelligence. The affordances draw palpable, if invisible, boundaries in the user's mind, and they quickly realize the low ROI of trying anything other than what is likely to be in the hard-coded response tree. In fact, I'd argue that the small jokes these UIs insert, such as canned answers to "what is the meaning of life?", may actually set these assistants up to disappoint people even more, by encouraging more such questions the assistant isn't ready to answer. (I found it amusing when Alexa answered my question "Is Jon Snow dead?" two seasons ago, but I was disappointed when it still gave the same stale answer a season later, after the show itself had resolved the question months before.)

The same invisible boundaries take hold immediately when speaking to one of those automated voice customer service menus. You know at once to address them as you would an idiot who is also hard of hearing, with the goal of completing the interaction as quickly as possible, or of diverting to a human customer service rep at the earliest possible moment.

[I read on Twitter that one shortcut to get to a human when speaking to an automated voice response system is to curse, that the use of profanity is often a built-in trigger to turn you over to an operator. This is both amusing and clever design, but it also feels like an odd admission of guilt on the part of the system designer.]
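If that rumor is true, the mechanism would be almost trivially simple. Here's a hedged sketch, with a hypothetical blocklist and routing labels of my own invention, not any real IVR vendor's API, of what such an escalation rule might look like:

```python
# Hypothetical sketch of an IVR escalation rule: profanity (or
# repeated comprehension failures) routes the caller straight to
# a human operator. Blocklist and labels are invented for
# illustration, not drawn from any real system.

PROFANITY = {"damn", "hell"}   # stand-ins for a real blocklist
MAX_FAILURES = 3               # assumed frustration threshold

def route(transcript: str, failure_count: int) -> str:
    words = set(transcript.lower().split())
    if words & PROFANITY:
        return "operator"      # the rumored cursing shortcut
    if failure_count >= MAX_FAILURES:
        return "operator"      # backstop after repeated misfires
    return "menu"              # stay in the automated tree

print(route("what the hell", 0))     # escalates to a human
print(route("check my balance", 0))  # stays in the menu
```

The admission of guilt is visible right in the logic: the designer anticipated the system failing badly enough to make someone swear, and planned the exit accordingly.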

It is not easy, given the simplicity of textual UIs, to lower the user's expectations. However, given where the technology is for now, it may be necessary to erect such guardrails. Perhaps the font for the assistant should be some fixed-width typeface, to distinguish it from a human. Maybe some mechanical sound effects could convey the robotic nature of the machine writing the words, and perhaps the syntax should be less human in some ways, to lower expectations.

One of the huge problems with voice assistants, after all, is that their failures, when they occur, feel catastrophic from the user's perspective. I may try a search on Google that doesn't return the results I want, but at least something comes back, and I'm usually sympathetic to the idea that what I want may not exist in an easily queryable form on the internet. Voice assistant errors occur much less frequently than before, but when they do, it feels as if you're speaking to a careless design, and I mean careless in every sense of the word, from poorly crafted (why didn't the developer account for this obvious query?) to uncaring (as in emotionally cold).

Couples go to counseling over feeling as if they aren't being heard by each other. Some technologies can get away with promising more than they can deliver, but when it comes to tech built around conversation, with all the expectations that very human mode of communication has accrued over the years, it's a dangerous game. In a map of the human brain, the neighborhoods of "you don't understand" and "you don't care" share a few exit ramps.

10 more browser tabs

Still trying to clear out browser tabs, though it's going about as well as my brief flirtation with inbox zero. At some point, I just decided inbox zero was a waste of time, solving a problem that didn't exist, but browser tab proliferation is a problem I'm much more complicit in.

1. Why the coming-of-age narrative is a conformist lie

From a more sociological perspective, the American self-creation myth is, inherently, a capitalist one. The French philosopher Michel Foucault theorised that meditating and journalling could help to bring a person inside herself by allowing her, at least temporarily, to escape the world and her relationship to it. But the sociologist Paul du Gay, writing on this subject in 1996, argued that few people treat the self as Foucault proposed. Most people, he said, craft outward-looking ‘enterprising selves’ by which they set out to acquire cultural capital in order to move upwards in the world, gain access to certain social circles, certain jobs, and so on. We decorate ourselves and cultivate interests that reflect our social aspirations. In this way, the self becomes the ultimate capitalist machine, a Pierre Bourdieu-esque nightmare that willingly exploits itself.
‘Growing up’ as it is defined today – that is, as entering society, once and for all – might work against what is morally justifiable. If you are a part of a flawed, immoral and unjust society (as one could argue we all are) then to truly mature is to see this as a problem and to act on it – not to reaffirm it by becoming a part of it. Classically, most coming-of-age tales follow white, male protagonists because their integration into society is expected and largely unproblematic. Social integration for racial, sexual and gender minorities is a more difficult process, not least because minorities define themselves against the norm: they don’t ‘find themselves’ and integrate into the social context in which they live. A traditional coming-of-age story featuring a queer, black girl will fail on its own terms; for how would her discovering her identity allow her to enter a society that insists on marginalising identities like hers? This might seem obvious, but it very starkly underscores the folly of insisting on seeing social integration as the young person’s top priority. Life is a wave of events. As such, you don’t come of age; you just age. Adulthood, if one must define it, is only a function of time, in which case, to come of age is merely to live long enough to do so.

I've written about this before, but almost always, the worst type of film festival movie is about a young white male protagonist coming of age. Often he's quiet and introverted, but he has a sensitive soul. As my first-year film school professor said, these protagonists are inert; they just "feel things." Think Wes Bentley in American Beauty, filming a plastic bag dancing in the wind for fifteen minutes with a camcorder, then showing it to a girl as if it's Citizen Kane.

If they have any scars or wounds, they are compensated for with extreme gifts. Think Ansel Elgort in Baby Driver; cursed with tinnitus since childhood, he listens to music on a retro iPod (let's squeeze some nostalgic product placement in here, what the hell, we're also going to give him a deaf black foster father to stack the moral cards in his favor, might as well go all the way) and is, that's right, the best getaway driver in the business.

Despite having about as much personality as a damp elephant turd, their beautiful souls are both recognized and extracted by a trope which this genre of film invented just for this purpose, the manic pixie dream girl.

[Nathan Rabin, who coined the term manic pixie dream girl, has since disavowed it as sometimes misogynist, and it can be applied too broadly, a hammer seeking nails, but that doesn't undo the reality that the largely white male writing blocs, from guilds to writers' rooms, aren't great at writing women or people of color with deep inner lives.]

This is tangential to the broader point, that the coming-of-age story as a genre is, in and of itself, a lie. It reminds me of the distinction between Finite and Infinite Games, the classic book from James Carse. The Hollywood film has always promised a finite game, and thus it's a story that must have an ending. Coming-of-age is an infinite game, or at least until death, and so we should all be skeptical of its close-ended narrative.

(h/t Michael Dempsey)

2. Finite and Infinite Games and Confederate

This isn't a browser tab, really, but while I'm on the topic of Carse's Finite and Infinite Games, a book that provides a framework with which so much of the world can be bifurcated, and while I'm thinking about the white-male-dominated Hollywood profession, I can't help but think of the TV project Confederate, from the showrunners of Game of Thrones.

“White people” is seen by many whites as a pejorative because it lowers them to a racial class, whereas before they were simply the default. Unlike women, people of color, and the union of the two, they have not spent their entire lives being named as a race, every single day, in almost every piece of culture.

The All Lives Matter retort to Black Lives Matter pretends that we're all playing the same finite game, when almost everyone losing that game knows it is not true. Blacks do not feel as if they “won” the Civil War; every day they live with the consequences and the shadow of America's founding racism, and every day they continue to play a game that is rigged against them. That is why Ta-Nehisi Coates writes that the premise of Confederate is a lie, and that only the victors of this finite game of America would want to relitigate the Civil War in some alt-history television show for HBO. It's as if a New England Patriots fan asked an Atlanta Falcons fan to watch last year's Super Bowl again, with Armie Hammer playing Tom Brady.

“Give me your tired, your poor, your huddled masses” is a promise that the United States is an infinite game, an experiment that struggles constantly toward bettering itself, evening the playing field, such that even someone starting poor and huddled might one day make a better life and escape their beginning state. That is why Stephen Miller and other white nationalists spitting on that inscription on the Statue of Liberty is so offensive, so dangerous.

On society, Carse writes:

The prizes won by its citizens can be protected only if the society as a whole remains powerful in relation to other societies. Those who desire the permanence of their prizes will work to sustain the permanence of the whole. Patriotism in one or several of its many forms (chauvinism, racism, sexism, nationalism, regionalism) is an ingredient in all societal play. 
Because power is inherently patriotic, it is characteristic of finite players to seek a growth of power in a society as a way of increasing the power of a society.

Colin Kaepernick refusing to stand for the national anthem is seen as unpatriotic by many in America, including the wealthy white owners of NFL teams. That is not surprising: racism is a form of patriotism, per Carse, and part and parcel of American society when defined as a finite game.

Donald Trump and his large adult sons are proof of just how powerful the inheritance of title and money is in America, and the irony that they were elected by those who feel that successive rounds of finite games have started to be rigged against them is not lost on anyone, not even, I suspect, them. One could argue they should take a lesson from those oppressed for far longer as to how a turn to nihilism works out in such situations.

Those attacking Affirmative Action want to close off the American experiment and turn it into a series of supposedly level finite games because they have accumulated a healthy lead in this game and wish to preserve it in every form.

White nationalists like Trump treat America not just as a finite game, but as a zero-sum finite game. The idea of immigrants being additive to America, to its potential, its output, is to treat America as an infinite game, open-ended. The truth lies, as usual, between the poles, but closer to the latter.

Beware the prophet who comes with stories of zero-sum games, or as Jim Collins once wrote, beware the "tyranny of the or." One of my definitions of leadership is the ability to turn zero-sum into positive sum games.

3. Curb Your Enthusiasm is Running Out of People to Offend

Speaking of fatigue with white male protagonists:

But if Larry David’s casual cruelty mirrors the times more than ever, the show might still fit awkwardly in the current moment. Watching the première of Season 9 on Sunday night, I kept thinking of a popular line from George Costanza, David’s avatar on “Seinfeld”: “You know, we’re living in a society!” Larry, in this first episode of the season, seems to have abandoned society altogether. In the opening shot, the camera sails over a tony swath of L.A., with no people and only a few cars visible amid the manicured lawns and terra-cotta roofs. It descends on Larry’s palatial, ivy-walled house, where he showers alone, singing Mary Poppins’s “A Spoonful of Sugar” and bludgeoning a bottle of soap. (Its dispenser pump is broken—grounds for execution under the David regime.) He’s the master of his domain, yes, but only by default: no one else is around.
“Curb” has always felt insulated, and a lot of its best jokes are borne of the fact that Larry’s immense wealth has warped his world view over the years. (On the most recent season he had no compunction about spending a princely sum on Girl Scout Cookies, only to rescind the order out of spite.) But the beginning of Season 9 offers new degrees of isolation. Like a tech bro ensconced in a hoodie and headphones, Larry seems to have removed himself almost entirely from public life. Both “Curb” and “Seinfeld” like to press the limits of etiquette and social mores, but the latter often tested these on subway cars and buses, in parks or on the street. Much of “Curb,” by contrast, unfolds in a faceless Los Angeles of air-conditioned mansions, organic restaurants, and schmoozy fund-raisers, a long chain of private spaces. The only time Larry encounters a true stranger, it’s in the liminal zone between his car and the lobby of Jeff’s office. She’s a barber on her way to see Jeff at work—even haircuts happen behind closed doors now.

Groundhog Day, one of the great movies, perhaps my favorite Christmas movie of all time, has long been regarded as a great Buddhist parable:

Groundhog Day is a movie about a bad-enough man—selfish, vain, and insecure—who becomes wise and good through timeless recurrence.

If that is so, then Curb Your Enthusiasm is its dark doppelganger, a parable about the dark secret at the heart of American society: that no person, no matter how selfish, vain, and petty, can suffer the downfall necessary to achieve enlightenment, if he is white and a man.

In this case, he is a successful white man in Hollywood, Larry David, and each episode of Curb Your Enthusiasm is his own personal Groundhog Day. Whereas Bill Murray wakes up each morning to Sonny and Cher, trapped in Punxsutawney, Pennsylvania, around small-town people he dislikes, in a job he feels superior to, Larry David wakes up each morning in his Los Angeles mansion, with rewards seemingly proportionate only to the depths of his pettiness and ill humor. Every episode, he treats the friends and family around him with thinly disguised disdain, and yet the next episode, he wakes up in the mansion again.

Whereas Bill Murray eventually realizes the way to break out of his loop is to use it for self-improvement, Larry David seems to be striving to fall from grace by acting increasingly terrible and yet finds himself back in the gentle embrace of his high thread count sheets every morning.

Curb Your Enthusiasm has its moments of brilliance in its minute dissection of the sometimes illogical and perhaps fragile bonds of societal goodwill, and its episode structure is often exceedingly clever, but I can't help watching it now as nothing more than an acerbic piece of performance art, with all the self-absorption that implies.

Larry David recently complained about the concept of first world problems, which is humorous, as it's difficult to think of any single person who has done as precise a job educating the world on what they are.

[What about Harvey Weinstein and Louis C.K., you might ask? Aren't they Hollywood royalty toppled from lofty, seemingly untouchable perches? The story of how those happened will be the subject of another post, because the mechanics are so illuminating.]

4. Nathan for You

I am through season 2 of Nathan for You, a Comedy Central show that just wrapped its fourth and final season. We have devalued the term LOL with overuse, but no show has made me literally laugh out loud, by myself on the sofa, the way this one has, though I've grinned in pleasure at certain precise bits of stylistic parody in American Vandal.

Nathan Fielder plays a comedic version of himself. In the opening credits, he proclaims:

My name is Nathan Fielder, and I graduated from one of Canada's top business schools with really good grades [NOTE: as he says this, we see a pan over his transcript, showing largely B's and C's]. Now I'm using my knowledge to help struggling small business owners make it in this competitive world.

If you cringed while watching a show like Borat or Ali G, if you wince a bit when one of the correspondents on The Daily Show went to interview some stooge, you might believe Nathan For You isn't, well, for you. However, the show continues to surprise me.

For one thing, it's a deeply useful reminder of how difficult it is for physical retailers, especially mom-and-pop entrepreneurs, to generate foot traffic. That they go along with Fielder's schemes is almost tragic, but all the more instructive.

For another, while almost every entrepreneur is the straight person to Fielder's clown, I find myself heartened by how rarely one of them just turns him away outright. You can see the struggle on each of their faces, as he presents his idea and then stares at them for an uncomfortably long silence, waiting for them to respond. He never breaks character. Should they just laugh at him, or throw him out in disgust? It almost never happens, though one private investigator does chastise Fielder for being a complete loser.

On Curb Your Enthusiasm, Larry David's friends openly call him out for his misanthropy, yet they never abandon him. On Nathan For You, small business owners almost never adopt Fielder's ideas at the end of the trial. However, they almost never call him out as ridiculous. Instead, they try the idea with a healthy dose of good nature at least once, or at least enough to capture an episode's worth of material.

In this age of people screaming at each other over social media, I found this reminder of the inherent decency of people in face-to-face situations comforting. Sure, some people are unpleasant both online and in person, and some people are pleasant in person and white supremacists in private.

But some people try to see the best in each other, give others the benefit of the doubt, and on such bonds a civil society is maintained. That this piece of high concept art could not fence in the humanity and real emotion of all the people participating, not even that of Fielder, is a bit of pleasure in this age of eye-rolling cynicism.

[Of course, these small business owners are aware a camera is on them, so the Heisenberg Principle of reality television applies. That a show like this, which depends on its subjects not knowing about the show, lasted four full seasons is a good reminder of how little-watched most cultural products are in this age of infinite content.]

BONUS CONTENT NO ONE ASKED FOR: Here is my Nathan for You idea: you know how headline stand-up comedians don't come on stage to perform until several lesser-known and usually much lousier comics are trotted out to warm up the crowd? How, if you attend the live studio taping of a late-night talk show like The Daily Show or The Tonight Show, some cheesy comic comes out beforehand to get your laugh muscles loose, your vocal cords primed? And when the headliner finally arrives, it comes as sweet relief?

What if there were an online dating service that provided such a warm-up buffoon for you? That is, when you go on a date, before meeting your date, first the service sends in a stand-in who is dull, awkward, a turn off in every way possible? But a few minutes into what seems to be a disastrous date, you suddenly show up and rescue the proceedings?

It sounds ridiculous, but this is just the sort of idea that Nathan for You would seem to go for. I haven't watched seasons 3 and 4 yet, so if he does end up trying this idea in one of those later episodes, please don't spoil it for me. I won't even be mad that my idea was not an original one; I'll be so happy to see actual footage of it in the field.

5. The aspect ratio of 2.00:1 is everywhere

I first read the case for 2.00:1 as an aspect ratio when legendary cinematographer Vittorio Storaro advocated for it several years ago. He anticipated a world where most movies would have a longer life viewed on screens at home than in movie theaters, and 2.00:1, or Univisium, sits roughly halfway between the typical 16:9 HDTV aspect ratio (about 1.78:1) and Panavision's 2.35:1.

So many movies and shows use 2.00:1 now, and I really prefer it to 16:9 for most work.
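As a rough sanity check of the "halfway" claim, here is a small sketch computing where 2.00:1 falls between the two ratios; the numbers are just the ratios named above, nothing more.

```python
# Quick check of how 2.00:1 sits between 16:9 and 2.35:1.
hdtv = 16 / 9          # ~1.778, standard HDTV
panavision = 2.35      # classic anamorphic widescreen
univisium = 2.0        # Storaro's proposed ratio

arithmetic_mean = (hdtv + panavision) / 2
geometric_mean = (hdtv * panavision) ** 0.5

print(f"arithmetic mean: {arithmetic_mean:.3f}")  # ~2.064
print(f"geometric mean:  {geometric_mean:.3f}")   # ~2.044
```

Either way you average, 2.00 lands close to the middle, which is presumably why Storaro settled on such a clean round number.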

6. Tuning AIs through captchas

Most everyone has probably encountered the newly popular captcha that displays a grid of photos and asks you to identify which ones contain a storefront. I experienced it recently signing up for HQ Trivia. This breed of captcha succeeds the wave that showed short strings of distorted text or numbers and asked you to type what you saw, helping to train AIs learning to read them. There are variants of the storefront captcha: some ask you to identify vehicles, others street signs, but the speculation is that Google uses these to train the "vision" of its self-driving cars.

AI feels like magic when it works, but underrated is the slow slog of taking an AI from stupid to competent. It's no different from training a human. In the meantime, I'm looking forward to being presented with the captcha that shows two photos, one of a really obese man, the other of five school children, with this question above them: "If you had to run over and kill the people in one of these photos, which would you choose?"

7. It's Mikaela Shiffrin profile season, with this one in Outside and this in the New Yorker

I read Elizabeth Weil's profile of Shiffrin in Outside first:

But the naps: Mikaela not only loves them, she’s fiercely committed to them. Recovery is the most important part of training! And sleep is the most important part of recovery! And to be a champion, you need a steadfast loyalty to even the tiniest and most mundane points. Mikaela will nap on the side of the hill. She will nap at the start of the race. She will wake up in the morning, she tells me after the gym, at her house, while eating some pre-nap pasta, “and the first thought I’ll have is: I cannot wait for my nap today. I don’t care what else happens. I can’t wait to get back in bed.”
Mikaela also will not stay up late, and sometimes she won’t do things in the after­noon, and occasionally this leads to more people flipping out. Most of the time, she trains apart from the rest of the U.S. Ski Team and lives at home with her parents in Vail (during the nine weeks a year she’s not traveling). In the summers, she spends a few weeks in Park City, Utah, training with her teammates at the U.S. Ski and Snowboard Center of Excellence. The dynamic there is, uh, complicated. “Some sports,” Mikaela says, “you see some athletes just walking around the gym, not really doing anything, eating food. They’re first to the lunchroom, never lifting weights.”

By chance, I happened to be reading The Little Book of Talent: 52 Tips for Improving Your Skills by Daniel Coyle, and had just read tips that sounded very similar to what was mentioned here.

More echoes of Coyle's book in The New Yorker profile:

My presumption was that her excellence was innate. One sometimes thinks of prodigies as embodiments of peculiar genius, uncorrupted by convention, impossible to replicate or reëngineer. But this is not the case with Shiffrin. She’s as stark an example of nurture over nature, of work over talent, as anyone in the world of sports. Her parents committed early on to an incremental process, and clung stubbornly to it. And so Shiffrin became something besides a World Cup hot shot and a quadrennial idol. She became a case study. Most parents, unwittingly or not, present their way of raising kids as the best way, even when the results are mixed, as such results usually are. The Shiffrins are not shy about projecting their example onto the world, but it’s hard to argue with their findings. “The kids with raw athletic talent rarely make it,” Jeff Shiffrin, Mikaela’s father, told me. “What was it Churchill said? Kites fly higher against a headwind.”

So it wasn't a real surprise to finally read this:

The Shiffrins were disciples of the ten-thousand-hours concept; the 2009 Daniel Coyle book “The Talent Code” was scripture. They studied the training methods of the Austrians, Alpine skiing’s priesthood. The Shiffrins wanted to wring as much training as possible out of every minute of the day and every vertical foot of the course. They favored deliberate practice over competition. They considered race days an onerous waste: all the travel, the waiting around, and the emotional stress for two quick runs. They insisted that Shiffrin practice honing her turns even when just skiing from the bottom of the racecourse to the chairlift. Most racers bomb straight down, their nonchalance a badge of honor.

Coyle's book, which I love for its succinct style (it could almost be a tweetstorm if Twitter had slightly longer character limits; each tip averages one or two paragraphs), is the book I recommend to all parents who want their kids to be really great at something, and not just sports.

Much of the book is about the importance of practice, and what types of practice are particularly efficient and effective.

Jeff Shiffrin said, “One of the things I learned from the Austrians is: every turn you make, do it right. Don’t get lazy, don’t goof off. Don’t waste any time. If you do, you’ll be retired from racing by the time you get to ten thousand hours.”
“Here’s the thing,” Mikaela told me one day. “You can’t get ten thousand hours of skiing. You spend so much time on the chairlift. My coach did a calculation of how many hours I’ve been on snow. We’d been overestimating. I think we came up with something like eleven total hours of skiing on snow a year. It’s like seven minutes a day. Still, at the age of twenty-two, I’ve probably had more time on snow than most. I always practice, even on the cat tracks or in those interstitial periods. My dad says, ‘Even when you’re just stopping, be sure to do it right, maintaining a good position, with counter-rotational force.’ These are the kinds of things my dad says, and I’m, like, ‘Shut up.’ But if you say it’s seven minutes a day, then consider that thirty seconds that all the others spend just straight-lining from the bottom of the racecourse to the bottom of the lift: I use that part to work on my turns. I’m getting extra minutes. If I don’t, my mom or my coaches will stop me and say something.”

Bill Simmons recently hosted Steve Kerr for a mailbag podcast, and part one is fun for hearing Kerr tell stories about Michael Jordan. Like so many greats, Jordan understood that the contest is won in the sweat leading up to the contest, and his legendary competitiveness elevated every practice and scrimmage into gladiatorial combat. As Kerr noted, Jordan was single-handedly a cure for complacency for the Bulls.

He famously broke down some teammates with such intensity in practice that they were driven from the league entirely (remember Rodney McCray?). Everyone knows he once punched Steve Kerr and left him with a shiner during a heated practice. The Dream Team scrimmage during the lead-up to the 1992 Olympics, in which the coaches made Michael Jordan one captain, Magic Johnson the other, is perhaps the single sporting event I most wish had taken place in the age of smartphones and social media.

What struck me about the Shiffrin profiles, something Coyle notes about the greats, is how unusually solitary their lives are, spent in deliberate practice on their own, apart from teammates. It's obviously amplified for individual sports like tennis and skiing and golf, but even in team sports, the great ones have their own routines. Not only is it lonely at the top, it's often lonely on the way there.

8. The secret tricks hidden inside restaurant menus

Perhaps because I live in the Bay Area, it feels as if the current obsession is with the dark design patterns and effects of social apps. But in the scheme of things, many other fields whose work we interact with daily have many more years of experience designing for human nature. In many ways, the people designing social media have a naive and incomplete view of human nature, but the distribution power of ubiquitous smartphones and network effects has elevated them to the forefront of the conversation.

Take a place like Las Vegas. Its entire existence is testament to the fact that the house always wins, yet it could not exist if it could not convince the next sucker to sit down at the table and see the next hand. The decades of research into how best to part a sucker from his wallet make the volume of research among social media companies look like a joke, even if the latter isn't trivial.

I have a sense that social media companies are similar to where restaurants are with menu design. Every time I sit down at a new restaurant, I love examining the menus and puzzling over all the choices with fellow diners, as if having to sit with me over a meal isn't punishment enough. When the waiter comes and I ask for an overview of the menu, and recommendations, I'm wondering what dishes the entire experience is meant to nudge me to order.

I'm awaiting the advent of digital and eventually holographic or AR menus to see what experiments we'll see. When will we have menus that are personalized? Based on what you've enjoyed here and at other restaurants, we think you'll love this dish. When will we see menus that use algorithmic sorting—these are the most ordered dishes all-time, this week, today? People who ordered this also ordered that? When will we see editorial endorsements? "Pete Wells said of this dish in his NYTimes review..."

Not all movies are worth deep study because not all movies are directed with intent. The same applies to menus, but today, enough menus are put through a deliberate design process that it's usually a worthwhile exercise to put them under the magnifying glass. I would love to read some blog that just analyzes various restaurant menus, so if someone starts one, please let me know.

9. Threat of bots and cheating looms as HQ Trivia reaches new popularity heights

When I first checked out HQ Trivia, an iOS live video streaming trivia competition for cash prizes, the number of concurrent viewers playing, displayed on the upper left of the screen, numbered in the hundreds. Now the most popular of games, which occur twice a day, attract over 250K players. In this age where we've seen empires built on exploiting the efficiencies to be gained from shifting so much of social intimacy to asynchronous channels, it's fun to be reminded of the unique fun of synchronous entertainment.

What intrigues me is not how HQ Trivia will make money. The free-to-play game industry is one of the most savvy when it comes to extracting revenue, and even something like podcasts points the way to monetizing popular media with sponsorships, product placement, etc.

What's far more interesting is where the shoulder on the S-curve is. Trivia is a game of skill, and with that comes two longstanding issues. I've answered, at most, 9 questions in a row, and it takes 12 consecutive right answers to win a share of the cash pot. I'm like most people in probably never being able to win any cash.
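To make the long odds concrete, here is a sketch of the arithmetic, with per-question accuracies that are my own illustrative assumptions, not anything from HQ Trivia: if you answer each question correctly with probability p, and the questions were independent (they aren't; the later ones are deliberately harder), your chance of surviving all 12 is p raised to the twelfth power.

```python
# Chance of answering 12 consecutive questions correctly, assuming a
# flat per-question accuracy p and independence between questions.
# Both assumptions are generous; real questions get harder as you go.
for p in (0.7, 0.8, 0.9, 0.95):
    print(f"p = {p:.2f} -> chance of 12 straight ~ {p ** 12:.3f}")
```

Even a player who gets 90 percent of questions right survives a full game less than a third of the time, which is the whole point: the pot feels reachable while remaining out of reach for almost everyone.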

This is an issue faced by Daily Fantasy Sports, where the word "fantasy" is the most important word. Very soon after they became popular, DFS were overrun by sharks submitting hundreds or thousands of lineups with the aid of computer programs, and some of those sharks worked for the companies themselves. The "fantasy" being sold is that the average person has a chance of winning.

As noted above in my comment about Las Vegas, it's not impossible to sell people on that dream. The most beautiful of cons is one the mark willingly participates in. People participate in negative expected value activities all the time, like the lottery, and carnival games, and often they're aware they'll lose. Some people just participate for the fun of it, and a free-to-play trivia game costs a player nothing other than some time, even if the expected value is close to zero.

A few people have asked me whether that live player count is real, and I'm actually more intrigued by the idea it isn't. Fake it til you make it is one of the most popular refrains of not just Silicon Valley but entrepreneurs everywhere. What if HQ Trivia just posted a phony live player count of 1 million tomorrow? Would their growth accelerate even more than it has recently? What about 10 million? When does the marginal return to every additional player in that count go negative because people feel that there is so much competition it's not worth it? Or is the promise of possibly winning money besides the point? What if the pot scaled commensurate to the number of players; would it become like the lottery? Massive pots but long odds?

The other problem, linked to the element of skill, is cheating. As noted in the article linked above, and in this piece about the spike in Google searches for answers during each of the twice-a-day games, cheating is always a concern in games, especially as the monetary rewards increase. I played the first game when HQ Trivia had a $7,500 cash pot, and the winners each pocketed something like $575 and change. Not a bad payout for something like 10 minutes of fun.

Online poker, daily fantasy sports, all are in constant battle with bots and computer-generated entries. Even sports books at casinos have to wage battle with sharks who try to get around betting caps by sending in all sorts of confederates to put down wagers on their behalf.

I suspect both of these issues will be dampeners on the game's prospects, but more so the issue of skill. I already find myself passing on games when I'm not with others who also play or who I can rope into playing with me. That may be the game's real value, inspiring communal bonding twice a day among people in the same room.

People like to quip that pornography is the tip of the spear when it comes to driving adoption of new technologies, but I'm partial to trivia. It is so elemental and pure a game, with such comically self-explanatory rules, that it is one of the elemental forms or genres of gaming, just like HQ Trivia host Scott Rogowsky is some paragon of a game-show host, mixing just the right balance of cheesiness and snarkiness and effusiveness needed to convince all the players that any additional irony would be unseemly.

10. Raising a teenage daughter

Speaking of Elizabeth Weil, who wrote the Shiffrin profile for Outside, here's another of her pieces, a profile of her daughter Hannah. The twist is that the piece includes annotations by Hannah after the fact.

It is a delight. The form is perfect for revealing the dimensions of their relationship, and that of mothers and teenage daughters everywhere. In the interplay of their words, we sense truer contours of their love, shaped, as they are, by two sets of hands.

[Note, Esquire has long published annotated profiles, you can Google for them, but they are now all locked behind a paywall]

This format makes me question how many more profiles would benefit from allowing the subject of a piece to annotate after the fact. It reveals so much about the limitations of understanding between two people, the unwitting and witting lies at the heart of journalism, and what Janet Malcolm meant, when she wrote, in the classic opening paragraph of her book The Journalist and the Murderer, "Every journalist who is not too stupid or too full of himself to notice what is going on knows that what he does is morally indefensible."

Helpful tip for data series labels in Excel

I've never received as many emails from readers as I did for my most recent post on line graphs, analytics, Amazon, Excel, and Tufte, among other things. It turns out that countless consultants, bankers, and analysts still wake up in a cold sweat at the recollection of spending hours formatting graphs in Excel. I opened every last Excel spreadsheet you sent me; apparently we all have one or another lying around as a souvenir of our shared trauma.

[It's fun to hear directly from my readers; I encourage more of it. When so much of online discourse is random strangers performing drive-by violence, or chucking Molotov cocktails at you, it's somewhat old-fashioned and comforting to receive a friendly email. It reminds me a lot of the early days of the Internet, a more idyllic time, when Utopian dreams of the power of this technology hadn't been crushed by the darkness in the souls of mankind.]

Many readers just shared their stories of Excel frustration, but a few offered helpful tips. One class of these was just a recommendation to try alternative programs for charting, like Tableau, ggplot, or D3. I have not had time to play with any of these, though I remember working with Tableau briefly at a company that had purchased a license. I'm not familiar enough with any of these to offer any meaningful comparison to Excel. 

I still hope that someone who works on Excel will just upgrade their default charting options because of the sheer number of Microsoft Office users. Most companies don't license Tableau, and many may not have the inclination or time to learn to use ggplot and D3, though the latter, in particular, seems capable of generating some beautiful visualizations. In contrast, I can't recall any company I worked at that didn't offer me a Microsoft Office license if I wanted one.

A few readers mentioned they wrote macros to automate some or most of the formatting tricks I set out in my post. Maybe at some point this will become an open source tool, which I'd love. If I spot one I'll share it here. Excel users are a tight knit community.

A couple readers offered a similar solution to my problem with generating dynamic data series labels, but my favorite, the most comprehensive option, came from a reader named Jeffrey, a finance associate.

It's a clever little hack. Here's how it works:

  1. Fill in your data table in Excel as normal, with data series in rows, time periods (or whatever the x-axis will be) in columns.
  2. Add one duplicate row for each data series below those data series in your table, and name each of those the same as the data series above, in order. For example, if your data series are U.S., Japan, and Australia, you now add three more rows, named U.S., Japan, and Australia.
  3. For these dummy data series, leave all the data cells blank except the last one, which in my example was the year 2014. For each of those three cells, use a formula so that the cell equals the last data point of the corresponding real data series above. So, for the dummy Australia series, my formula for 2014 just points to the value in the 2014 cell of the actual Australia data series.
  4. Now create the chart for your data table, including both the actual and dummy data series. In the resulting line graph, you now have six data series instead of three, but the three dummy data series just have a single data point, the last one, which overlaps the corresponding last point in the actual data series.
  5. Do all the visual hacks I mentioned in my previous post. When it comes to your three dummy data series, instead of displaying the actual data label, select "Series Name" in the Format Data Labels menu.

That's it! Now you have data labels that will move with any change in the last data point of your three data series.
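Steps 2 and 3 above can be sketched in a few lines of code. This is purely illustrative: the `build_dummy_rows` helper is my own invention, and it assumes the real series start in row 2 and that the last data column is F (the 2014 column in the example); adjust both to your actual layout.

```python
# Sketch of steps 2-3: for each real data series, build a dummy row whose
# only populated cell is a formula pointing at the last data point of the
# matching real series. Assumes series start at row 2, last column is F.

def build_dummy_rows(series_names, first_data_row=2, last_col="F"):
    """Return (name, formula) pairs for the dummy label series."""
    rows = []
    for offset, name in enumerate(series_names):
        source_row = first_data_row + offset  # row of the real series
        formula = f"={last_col}{source_row}"  # e.g. "=F2" for the first series
        rows.append((name, formula))
    return rows

print(build_dummy_rows(["U.S.", "Japan", "Australia"]))
# [('U.S.', '=F2'), ('Japan', '=F3'), ('Australia', '=F4')]
```

Paste those formulas into the last column of the dummy rows and the single-point series will always track the real series' final values, which is what lets the "Series Name" labels move with the data.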

It's a hack, so it's not perfect. You still have to update your table each time you add to it, moving the dummy pointers one cell to the right. And selecting the data series when it exactly overlaps the actual data series can be tricky. But the thing is, all my old ways of setting up the charts were hacks, too, and this one saves the work of moving data series labels whenever the scale of the graph is changed.

It would be easier if Excel just offered a way to display the data labels and the series name for every data series. Maybe someday? The communication arc of the universe is long, and I like to think it bends towards greater efficiency.

Thanks Jeffrey! Also, thanks to my old coworker Dave who, more than any of my other readers, sends me precise copy-edits to my posts. I wish I were a better editor of my own work, but familiarity breeds myopia.

Remove the legend to become one

When I started my first job at Amazon, as the first analyst in the strategic planning department, I inherited the work of producing the Analytics Package. I capitalize the term because it was both a serious tool for making our business legible, and because the job of its production each month ruled my life for over a year.

Back in 1997, analytics wasn't even a real word. I know because I tried to look up the term, hoping to clarify just what I was meant to be doing, and I couldn't find it, not in the dictionary, not on the internet. You can age yourself by the volume of search results the average search engine returned when you first began using the internet in earnest. I remember when pockets of wisdom were hidden in eclectic newsgroups, when Yahoo organized a directory of the web by hand, and later when many Google searches returned very little, if not nothing. Back then, if Russians wanted to hack an election, they might have planted some stories somewhere in rec.arts.comics and radicalized a few nerds, but that's about it.

Though I couldn't find a definition of the word, it wasn't difficult to guess what it was. Some noun form of analysis. More than that, the Analytics Package itself was self-describing. Literally. It came with a single page cover letter, always with a short preamble describing its purpose, and then jumped into a textual summary of the information within, almost like a research paper abstract, or a Letter to Shareholders. I like to think Jeff Bezos' famous company policy, instituted many years later, banning PowerPoint in favor of written essays, had some origins in the Analytics Package cover letter way back when. The animating idea was the same: if you can't explain something in writing to another human, do you really understand it yourself?

My interview loop at Amazon ended with an hour with the head of recruiting at the time, Ryan Sawyer. After having gone through a gauntlet of interviews that included almost all the senior executives, and including people like Jeff Bezos and Joy Covey, some of the most brilliant people I've ever met in my life, I thought perhaps the requisite HR interview would be a letup. But then Ryan asked me to explain the most complex thing I understood in a way he'd understand. It would be good preparation for my job.

What was within the Analytics Package that required a written explanation? Graphs. Page after page of graphs, on every aspect of Amazon's business. Revenue. Editorial. Marketing. Operations. Customer Service. Headcount. G&A. Customer sentiment. Market penetration. Lifetime value of a customer. Inventory turns. Usually four graphs to a page, laid out landscape.

The word Package might seem redundant if Analytics is itself a noun. But if you saw one of these, you knew why it was called a Package. When I started at Amazon in 1997, the Analytics Package was maybe thirty to forty pages of graphs. When I moved over to product management, over a year later, it was pushing a hundred pages, and I was working on a supplemental report on customer order trends in addition. Analytics might refer to a deliverable or the practice of analysis, but the Analytics Package was like the phone book, or the Restoration Hardware catalog, in its heft.

This was back in the days before entire companies focused on building internal dashboards and analytical tools, so the Analytics Package was done with what we might today consider comparable to twigs and dirt in sophistication. I entered the data by hand into Excel tables, generated and laid out the charts in Excel, and printed the paper copies.

One of the worst parts of the whole endeavor was getting the page numbers in the entire package correct. Behind the Analytics Package was a whole folder of linked spreadsheets. Since different charts came from different workbooks, I had to print out an entire Analytics Package, get the ordering correct, then insert page numbers by hand in some obscure print settings menu. Needless to say, ensuring page breaks landed where you wanted them was like defusing a bomb.

Nowadays, companies hang flat-screen TVs on the walls, all of them running 24/7 to display a variety of charts. Most everyone ignores them. The spirit is right, to be transparent all the time, but the understanding of human nature is not. We ignore things that are shown to us all the time. However, if once a month a huge packet of charts dropped on your desk, with a cover letter summarizing the results, and if the CEO and your peers received the same package the same day, and that piece of work included charts on how your part of the business was running, you damn well paid attention, like any person turning to the index of a book on their company to see if they were mentioned. Ritual matters.

The package went to senior managers around the company. At first that was defined by your official level in the hierarchy, though, as most such things go, it became a source of monthly contention as to who to add to the distribution. One might suspect this went to my head, owning the distribution list, but in fact I only cared because I had to print and collate the physical copies every month.

I rarely use copy machines these days, but that year of my life I used them more than I will all the days that came before and all the days still to come, and so I can say with some confidence that they are among the least reliable machines ever made by mankind.

It was a game, one whose only goal was to minimize pain. A hundred copies of a hundred page document. The machine will break down at some point. A sheet will jam somewhere. The ink cartridge will go dry. How many collated copies do you risk printing at once? Too few and you have to go through the setup process again. Too many and you risk a mid-job error, which then might cascade into a series of ever more complex tasks, like trying to collate just the pages still remaining and then merging them with the pages that were already completed. [If you wondered why I had to insert page numbers by hand, it wasn't just for ease of referencing particular graphs in discussion; it was also so I could figure out which pages were missing from which copies when the copy machine crapped out.]

You could try just resuming the task after clearing the paper jam, but in practice it never really worked. I learned that copy machine jams on jobs of this magnitude were, for all practical purposes, failures from which the machine could not recover.

I became a shaman to all the copy machines in our headquarters at the Columbia building. I knew which ones were capable of this heavy duty task, how reliable each one was. Each machine's reliability fluctuated through some elusive alchemy of time and usage and date of the last service visit. Since I generally worked late into every night, I'd save the mass copy tasks for the end of my day, when I had the run of all the building's copy machines.

Sometimes I could sense a paper jam coming just by the sound of the machine's internal rollers and gears. An unhealthy machine would wheeze, like a smoker, and sometimes I'd put my hands on a machine as it performed its service for me, like a healer laying hands on a sick patient. I would call myself a copy machine whisperer, but when I addressed them it was always a slew of expletives, never whispered. Late in my tenure as analyst, I got budget to hire a temp to help with the actual printing of the monthly Analytics Package, and we keep in touch to this day, bonded by having endured that Sisyphean labor.

My other source of grief was another tool of deep fragility: linked spreadsheets in Excel 97. I am, to this day, an advocate for Excel, the best tool in the Microsoft Office suite, and still, if you're doing serious work, the top spreadsheet on the planet. However, I'll never forget the nightmare of linked workbooks in Excel 97, an idea which sounded so promising in theory and worked so inconsistently in practice.

Why not just use one giant workbook? Various departments had to submit data for different graphs, and back then it was a complete mess to have multiple people work in the same Excel spreadsheet simultaneously. Figuring out whose changes stuck, that whole process of diffs, was untenable. So I created Excel workbooks for all the different departments. Some of the data I'd collect myself and enter by hand, while some departments had younger employees with the time and wherewithal to enter and maintain the data for their organization.

Even with that process, much could go wrong. While I tried to create guardrails to preserve the formulas linking all the workbooks, everything from locked cells to bold and colorful formatting to indicate editable cells, no spreadsheet survives engagement with a casual user. Someone might insert a column here or a row there, or delete a formula by mistake. One month, a user might rename a sheet, or decide to add a summary column by quarter where none had existed before. Suddenly a slew of #REF! errors would show up in cells all over the place, or, if you were unlucky, the figures would remain, but they'd be wrong and you wouldn't realize it.

Thus some part of every month was spent going through each spreadsheet and fixing all the links and pointers, reconnecting charts that were searching for a table that was no longer there, or, more insidiously, that were pointing to the wrong area of the right table.

Even after all that was done, though, sometimes the cells would not calculate correctly. This should have been deterministic; that's the whole idea of a spreadsheet, that the only error should be user error. A cell in my master workbook would point at a cell in another workbook. They should match in value. Yet, when I opened both workbooks up, one would display 1,345 while the other would display 1,298. The key to force a recalculation of every cell was F9. I'd press it repeatedly. Sometimes that would do it. Sometimes it wouldn't. Sometimes I'd try Ctrl+Alt+Shift+F9. Sometimes I'd pray.

One of the only times I cried at work was late one night, a short time after my mom had passed away from cancer, my left leg in a cast from an ACL/MCL rupture, when I could not understand why my workbooks weren't checking out, and I lost the will, for a moment, to wrestle it and the universe into submission. This wasn't a circular reference, which I knew could be fixed once I pursued it to the ends of the earth, or at least the bounds of the workbook. No, this inherent fragility in linked workbooks in Excel 97 was a random flaw in a godless program, and I felt I was likely the person in the entire universe most fated to suffer its arbitrary punishment.

I wanted to leave the office, but I was too tired to go far on my crutches. No one was around that section of the office at that hour. I turned off the computer, turned out the lights, and put my head down on my desk for a while until the moment passed. Then I booted the PC back up, opened the two workbooks, and looked at the two cells in question. They still differed. I pressed F9. They matched.

Most months, after I had finished collating all the copies of the Analytics Package, clipping each with a small, then later medium, and finally a large binder clip, I'd deliver most copies by hand, dropping them on each recipient's empty desk late at night. It was a welcome break to get up from my desk and stroll through the offices, maybe stop to chat with whoever was burning the midnight oil. I felt like a paper boy on his route, and often we'd be up at the same hour.

For all the painful memories that cling to the Analytics Package, I consider it one of the formative experiences of my career. In producing it, I felt the entire organism of our business laid bare before me, its complexity and inner workings made legible. The same way I imagine programmers visualizing data moving through tables in three dimensional space, I could trace the entire ripple out from a customer's desire to purchase a book, how a dollar of cash flowed through the entire anatomy of our business. I knew the salary of every employee, and could sense the cost of their time on each order as the book worked its way from a distributor to our warehouse, from a shelf to a conveyor belt, into a box, then into a delivery truck. I could predict, like a blackjack player counting cards in the shoe, what percentage of every hundred orders would generate a customer contact, and what share of those contacts would concern which types of issues.

I knew, if we gained a customer one month, how many of their friends and family would become new customers the next month, through word of mouth. I knew, if a hundred customers made their first order in January of 1998, what percentage of them would order again in February, and March, and so on, and what the average basket size of each order would be. As we grew, and as we gained some leverage, I could see the impact on our cash flow from negotiating longer payable days with publishers and distributors, and I'd see our gross margins inch upwards every time we negotiated better discounts off of list prices.

What comfort to live in the realm of frequent transactions and normal distributions, a realm where the law of large numbers was the rule of law. Observing the consistency and predictability of human purchases of books (and later CDs and DVDs) each month was like spotting some crystal structure in Nature under a microscope. I don't envy companies like Snapchat or Twitter or Pinterest, social networks that have gone public or will likely have to someday, trying to manage investor expectations when their businesses are so large and yet still so volatile, their revenue streams even more so. It is fun to grow with the exponential trajectory of a social network, but not fun if you're Twitter trying to explain every quarter why you missed numbers again, and less fun when you have to pretend to know what will happen to revenue one quarter out, let alone two or three.

At Amazon, I could see our revenue next quarter to within a few percentage points of accuracy, and beyond. The only decision was how much to tell Wall Street we anticipated our revenue being. Back then, we always underpromised on revenue; we knew we'd overdeliver, the only question was how much we should do so and still maintain a credible sense of surprise on the next earnings call.

The depth of our knowledge of our own business continues to exceed that of any company I've worked at since. Much of the credit goes to Jeff for demanding that level of detail. No one can set a standard for accountability like the person at the top. Much credit goes to Joy and my manager Keith for making the Analytics Package one of the strategic planning department's central tasks. That Keith pushed me into the arms of Tufte changed everything. And still more credit belongs to all the people who helped gather obscure bits of data from all parts of the business, from my colleagues in accounting to those in every department in the company, many of whom built their own models for their own areas, maintaining and iterating them with a regular cadence because they knew every month I'd come knocking and asking questions.

I'm convinced that because Joy knew every part of our business as well as or better than almost anyone running them, she was one of those rare CFOs who can play offense in addition to defense. Almost every other CFO I've met hews close to the stereotype: always reining in spending, urging more fiscal conservatism, casting a skeptical eye on any bold financial transactions. Joy could do that better than the next CFO, but when appropriate she would urge us to spend more with a zeal that matched Jeff's. She, like many visionary CEOs, knew that sometimes the best defense is a good offense, especially when it comes to internet markets, with their pockets of winner-take-all contests, first mover advantages, and network effects.

It still surprises me how many companies don't help their employees understand the numeric workings of their business. One goes through orientation and hears about culture, travel policies, where the supply cabinet is, maybe some discussion of mission statements. All valuable, of course. But when was the last time any orientation featured any graphs on the business? Is it that we don't trust the numeracy of our employees? Do we fear that level of radical transparency will overwhelm them? Or perhaps it's a mechanism of control, a sort of "don't worry your little mind about the numbers" and just focus on your piece of the puzzle?

Knowing the numbers isn't enough in and of itself, but as books like Moneyball make clear, doing so can reveal hidden truths, unknown vectors of value (for example, in the case of Billy Beane and the Oakland A's, on base percentage). To this day, people still commonly talk about Amazon not being able to turn a profit for so many years as if it were some Ponzi scheme. Late one night in 1997, a few days after I had started, and about my third or fourth time reading the most recent edition of the Analytics Package cover to cover, I knew our hidden truth: all the naysaying about Amazon's profitless business model was a lie. Every dollar of our profit we didn't reinvest into the business, and every dollar we didn't raise from investors to add to that investment, would just be kneecapping ourselves. The only governor of our potential was the breadth of our ambition.


What does this have to do with line graphs? A month or two into my job, my manager sent me to a seminar that passed through Seattle. It was a full day course centered around the wisdom in one book, taught by the author. The book was The Visual Display of Quantitative Information, a cult bestseller, the type of long tail book that, in the age before Amazon, might have remained some niche reference work, and the author was Edward Tufte. It's difficult to conjure, on demand, a full list of the most important books I've read, but this is one.

My manager sent me to the seminar so I could apply the principles of that book to the charts in the Analytics Package. My copy of the book sits on my shelf at home, and it's the book I recommend most to work colleagues.

In contrast to this post, which has buried the lede so far you may never find it, Tufte's book opens with a concise summary of its key principles.

Excellence in statistical graphics consists of complex ideas communicated with clarity, precision, and efficiency. Graphical displays should

  • show the data
  • induce the viewer to think about the substance rather than about methodology, graphic design, the technology of graphic production, or something else
  • avoid distorting what the data have to say
  • present many numbers in a small space
  • make large data sets coherent
  • encourage the eye to compare different pieces of data
  • reveal the data at several levels of detail, from a broad overview to the fine structure
  • serve a reasonably clear purpose: description, exploration, tabulation, or decoration
  • be closely integrated with the statistical and verbal descriptions of a data set.

Graphics reveal data. Indeed graphics can be more precise and revealing than conventional statistical computations.

That's it. The rest of the book is just one beautiful elaboration after another of those first principles. The world in one page.

Of all the graphs, the line graph is the greatest. Its most iconic form, the one I used the most in the Analytics Package, has time as the x-axis and the dimension to be measured as the y-axis. Data trended across time.

One data point is one data point. Two data points, trended across time, tell a story. [I'm joking, please don't tell a story using just two data points] The line graph tells us where we've been, and it points to where things are going. In contemplating why the line points up or down, or why it is flat, one grapples with the fundamental mechanism of what's under study.

It wasn't until I'd produced the Analytics Package graphs for several months that my manager granted me the responsibility of writing the cover letter. It was a momentous day, but the actual task of writing the summary of the state of the business wasn't hard. By looking at each graph and investigating why each had changed the way it had from last month to this, I had all the key points worth writing up. Building the graphs was more than half the battle.

So many of the principles in Tufte's book made their way into the Analytics Package. For example, where relevant, each page showed a series of small multiples, with the same scale on X and Y axes, back in the age before small multiples were a thing in spreadsheet programs.
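Spreadsheet programs now offer something like this natively, but the idea is simple enough to sketch outside of Excel. Here's a minimal small-multiples example in Python with matplotlib (my stand-in for the original Excel workflow, with invented data); the essential property is that every panel is drawn on identical x and y scales.

```python
# Small multiples: one panel per data series, all on identical scales,
# so the eye can compare panels directly. Data below is invented.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

years = list(range(1995, 2015))
series = {
    "A": [100 + 12 * i for i in range(20)],
    "B": [80 + 9 * i for i in range(20)],
    "C": [60 + 15 * i for i in range(20)],
    "D": [90 + 5 * i for i in range(20)],
}

# sharex/sharey are what make these true small multiples: every panel
# is forced onto the same axis scales.
fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(8, 5))
for ax, (name, values) in zip(axes.flat, series.items()):
    ax.plot(years, values, color="black")
    ax.set_title(name, fontsize=9)
fig.savefig("small_multiples.png")
```

With shared scales in place, a line that looks steeper in one panel really is steeper than its neighbors, which is the whole point.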

Nowhere was Tufte's influence more felt than in our line graphs. How good can a line graph be? After all, in its components, a line graph is really simple. That's a strength, not a weakness. The advice here is simple, so simple, in fact, one might think all of it is common practice already. It isn't. When I see line graphs shared online, even those from some of the smartest people I follow, almost all of them adhere to very little of what I'm going to counsel.

Perhaps Tufte isn't well read enough, his ideas not taught in institutions like business schools that require their students to use Excel. That is all true, but I prefer a simpler explanation: users are lazy, the Excel line graph defaults are poor, and Excel is the most popular charting tool on the planet.

By way of illustration, let's take a data set, build a line graph in Excel, and walk through some of what I had to do when making the Analytics Package each month.

I couldn't find the raw data behind most charts shared online, and I didn't want to use any proprietary data. My friend Dan Wang pointed me at the Google Public Data Explorer, a lot of which seems to be built off the World Bank Data Catalog, from which I pulled some raw data, just to save me the time of making up figures.

I used health expenditure per capita (current US$). I picked eight countries and used the data for the full range of years available, spanning 1995-2014. I chose a mix of countries I've visited or lived in, plus some that people have spoken to me about in reference to their healthcare systems, but the important point here is that limiting the data series on a line graph matters if the graph is going to be readable. How many data series to include depends on what you want to study and how closely the lines cluster, how large the spread is. Sometimes it's hard to anticipate until you produce the graph, but suffice it to say that generating a graph just to make one is silly if the result is illegible.

Here's what the latest version of Excel on my Mac produced when I pressed the line graph button after highlighting the data. (Oddly enough, I found a Recommended Charts dropdown button, and the three graphs it recommended were three bar graphs of various forms, definitely not the right choice here; one of many places where Excel's default logic is poor.) I didn't do anything to this graph, just saved it directly as is, at the exact size and formatting Excel selected.

Not great. Applying Richard Thaler and Cass Sunstein's philosophy from Nudge, if we just improved the defaults in Excel and Powerpoint, graphic excellence the world over would improve by leaps and bounds. If someone out there works on the charting features in Excel and Powerpoint, hear my cries! The power to elevate your common man is in your hands. Please read Tufte.

As an aside, after the Tufte seminar, I walked up to him and asked what software he used for the graphics in his book. His response? Adobe Illustrator. To produce the results he wanted, he, and presumably his assistants, laid out every pixel by hand. Not that helpful for me in producing the Analytics Package monthly on top of my other job duties, but a comment on the charting quality in Excel that rings true even today.

Let's take my chart above and start editing it for the better, as I did back in my Analytics Package days. Let's start with some obvious problems:

  • The legend is nearly the same height as the graph itself
  • A lot of the lines are really close to each other
  • The figures in the left column could be made more readable with thousands comma separators
  • The chart needs a title

I expanded the graph inside the worksheet to make it easier to see (for some reason it was about the size of four postage stamps in the sheet) and fixed the problems above. Here's that modified version.

Excel should add comma separators for thousands by default. The graph is somewhat better, but the labels are still really small, even if you click and expand the photo above to full size. In addition to adjusting the scale of labels and title, however, what else can we do to improve matters?

I began this post just wanting to share the following simple point, the easiest way to upgrade your Excel line graph:

Remove the legend.

That alone will make your line graph so much better that if it's the only thing you remember for the rest of your life, a generation of audiences will thank you.

The problem with a legend is that it asks the user to bounce their eyes back and forth from the graph to the legend, over and over, trying to hold what is usually some color coding system in their short-term memory.

Look at the chart above. Every time I have to see which line is which country, I have to look down to the legend and then back to the graph. If I decide to compare any two data series, I have to look back down and memorize two colors, then look back at the chart. Forget even trying to do it for three countries, or for all of them, which is the whole point of the line graph. In forcing the viewer to interpret your legend, your line graph has already dampened much of its explanatory efficiency.

If you have just two data series, a legend isn't egregious, but it is still inferior to removing the legend. Of course, removing the legend isn't enough.

Remove the legend and label the data series directly on the plot.

Unfortunately, here is where your work gets harder, because, much to my disbelief, there is no option in Excel to label data series in place in a line graph. The only automated option is to use the legend.

If I am wrong, and I would love nothing more than to be wrong, please let me know, but I tried going through every one of Excel's various chart menus, which can be brought up by right clicking on different hot spots on the chart, and couldn't find this option. That Excel forces you to right click on so many obscure hot spots just to bring up various options is bad enough. That you can't find this option at all among the dozens of other useless options is a travesty.

The only fix is to create data series labels by hand. You can find an Insert Text Box option somewhere in the Excel menus and ribbons and bars, so I'll make one for each data series and put them roughly in place so you know which data series is which. Next, the moment of truth.

Select the legend. And delete it.

Undo, then delete it again, just to feel the rush as your chart now expands in size to fill the white space left behind. Feels good.

Next, shrink the actual plot area of the graph by selecting it and opening up some margin on the right side of the graph for placing your labels if there isn't enough. Since people read time series left to right, and since the most recent data is on the far right, you'll want your labels to be there, where the viewers' eyes will flow naturally.

Don't move the labels into exact position just yet. First adjust the sizing of the axis labels and the scale of the graph. Unfortunately, since these text boxes are floating and not attached to the data series, every time the scale of your chart changes, you have to reposition all the data series labels by hand. So do that last.

I had not used this latest version of Excel before, and the charting options seem even more complex than I remember. To change the format of the labels on the x- and y-axis, you right click each axis and select Format Axis. I changed the y-axis text format to currency. But to change the size of the labels on each axis, you have to right click each and then select Font. That those are in separate menus is part of the Excel experience.

In expanding the font size of the x-axis, I decided it was too crowded so I went with every other year. I left aligned the data series labels and tried to position them as precisely as possible by eye. I seem to remember Excel used to allow selecting text boxes and moving them one pixel at a time with the arrow keys, but it didn't work for me, so you may have to find the object alignment dropdown somewhere and select align left for all your labels.
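Outside Excel, those manual steps collapse into a few lines of code. Here's a sketch in Python with matplotlib (a library choice of mine, with invented figures) of the core moves: no legend, each series labeled just to the right of its last data point, an open margin on the right, and a y-axis starting at zero.

```python
# Direct labeling instead of a legend: annotate each line at its final
# data point, where the eye arrives after reading left to right.
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

years = list(range(1995, 2015))
data = {  # invented figures, loosely shaped like per-capita expenditures
    "United States": [3000 + 280 * i for i in range(20)],
    "Germany": [2200 + 120 * i for i in range(20)],
    "Japan": [1800 + 100 * i for i in range(20)],
}

fig, ax = plt.subplots(figsize=(8, 5))
for country, values in data.items():
    ax.plot(years, values)
    # Label sits a few points right of the final value; note that we
    # never call ax.legend() at all.
    ax.annotate(country, xy=(years[-1], values[-1]),
                xytext=(5, 0), textcoords="offset points", va="center")

ax.set_xlim(years[0], years[-1] + 4)  # right margin for the labels
ax.set_ylim(bottom=0)                 # start the y-axis at zero
fig.savefig("direct_labels.png")
```

Unlike Excel's floating text boxes, these labels are anchored to the data, so they stay put when the scale of the chart changes.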

Here's the next iteration of the chart.

You can click on it to see it larger. Already, we're better off than the Excel auto-generated chart by quite a margin. If this were the default, I'd be fairly happy. But there's room for improvement.

The use of color can be helpful, especially with lines that are closely stacked, but what about the color blind? If we were to stick with the coloring scheme, I might change the data series labels to match the color of each line. Since the labels are added by hand, though, you'd have to change each one manually to match the color scheme Excel had selected, and even then it wouldn't fix the issue for color blind viewers. [I didn't have the patience to do this for illustrative purposes, but you can see how matching the coloring of labels to the lines helps if you view this data in Google Data Explorer.]

In The Visual Display of Quantitative Information, Tufte uses very little color. When producing the Analytics Package, I was working with black and white printers and copy machines. Color was a no go, even if it provides an added dimension for your graphics, as for elevation on maps.

While color has the advantage of making it easier to distinguish between two lines which are close to each other, it introduces all sorts of mental associations that are difficult to anticipate and which may just be a distraction. When making a chart like, say, one of the U.S. Presidential Election, using blue for Democrats and red for Republicans is a good idea since the color scheme is widely agreed upon. When distinguishing between departments in your company, or product lines, arbitrary color choices can be noise, or worse, a source of contention (NSFW language warning).

The safer alternative is to use different line styles, regardless of whether your final deliverable is capable of displaying color. Depending on how many data series you have to chart, that may or may not be an option. I looked at the data series line format options, which are labeled Dash Type in this version of Excel, and found a total of eight options, just enough for my example chart. It takes some work to assign the options for maximum legibility; you should decide which country receives which style based on maximum contrast between lines that cluster.

After a random pass at that, the monochrome line graph looked like this.

No issues for color blind users, but we're stretching the limits of line styles past where I'm comfortable. To me, it's somewhat easier with the colored lines above to trace different countries across time versus each other, though this monochrome version isn't terrible. Still, this chart reminds me, in many ways, of the monochromatic look of my old Amazon Analytics Package, though it is missing data labels (wouldn't fit here) and has horizontal gridlines (mine never did).
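The monochrome approach is also easy to sketch in code. Assuming matplotlib again (and made-up data), cycling through named dash styles plays the role of Excel's Dash Type menu:

```python
# Monochrome line graph: distinguish series by line style, not color,
# and still label each line directly. Figures below are invented.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from itertools import cycle

years = list(range(1995, 2015))
data = {
    "A": [100 + 20 * i for i in range(20)],
    "B": [100 + 15 * i for i in range(20)],
    "C": [100 + 10 * i for i in range(20)],
    "D": [100 + 5 * i for i in range(20)],
}

# Cycle through distinct dash styles so every line is black yet
# distinguishable on a black and white printout.
styles = cycle(["solid", "dashed", "dashdot", "dotted"])
fig, ax = plt.subplots()
for name, values in data.items():
    ax.plot(years, values, color="black", linestyle=next(styles))
    ax.annotate(name, xy=(years[-1], values[-1]),
                xytext=(4, 0), textcoords="offset points", va="center")
ax.set_ylim(bottom=0)
fig.savefig("monochrome.png")
```

Four distinct styles stay comfortably legible; stretching to eight, as in the example chart, pushes the limit.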

We're running into some of these tradeoffs because of the sheer number of data series in play. Eight is not just enough, it is probably too many. Past some number of data series, it's often easier and cleaner to display these as a series of small multiples. It all depends on the goal and what you're trying to communicate.

At some point, no set of principles is one size fits all, and as the communicator you have to make some subjective judgments. For example, at Amazon, I knew that Joy wanted to see the data values marked on the graph, whenever they could be displayed. She was that detail-oriented. Once I included data values, gridlines were repetitive, and y-axis labels could be reduced in number as well.

Tufte advocates reducing non-data-ink, within reason, and gridlines are often just that. If data values won't fit onto a line graph, I sometimes include gridlines to allow for easy calculation of the relative ratio of one value to another (simply count the gridlines between the values), but that's an edge case.

For sharp changes, like an anomalous reversal in the slope of a line graph, I often inserted a note directly on the graph, to anticipate and head off any viewer questions. For example, in the graph above, if fewer data series were included, but Greece remained, one might wish to explain the decline in health expenditures starting in 2008 by adding a note in the plot area near that data point, noting the beginning of the Greek financial crisis (I don't know if that's the actual cause, but whatever the reason or theory, I'd place it there).
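Where the charting tool supports it, such a note can be pinned to the anomalous data point itself rather than floated as a loose text box. A matplotlib sketch with invented Greece-like figures (the crisis attribution is illustrative, as above):

```python
# Annotating an anomaly directly on the plot, with an arrow pointing
# at the data point where the trend reverses. Figures are invented.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

years = list(range(2000, 2015))
# Rises through 2008, then declines.
greece = ([1500 + 150 * i for i in range(9)] +
          [2700 - 150 * i for i in range(1, 7)])

fig, ax = plt.subplots()
ax.plot(years, greece, color="black")
# The note is anchored to the 2008 data point, so it travels with the
# data if the axis scale changes.
ax.annotate("2008: financial crisis begins",
            xy=(2008, greece[years.index(2008)]),
            xytext=(2000.5, 2600),
            arrowprops=dict(arrowstyle="->"))
ax.set_ylim(bottom=0)
fig.savefig("annotated.png")
```

The annotation anticipates the first question any viewer would ask, even when you aren't in the room to answer it.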

If we had company targets for a specific metric, I'd note those on the chart(s) in question as a labeled asymptote. You can never remind people of goals often enough.

Just as an example, here's another version of that chart, with fewer data series, data labels, no gridlines, fewer y-axis labels. Also, since the lines aren't clustered together, we no longer need different line styles adding visual noise.

At that size, the data values aren't really readable, but if I were making a chart for Joy or Jeff, I'd definitely add the labels because I knew they'd want that level of detail. At Amazon, I also typically limited our charts to rolling four or eight quarters, so we'd never have as many data points as on the graph above. Again, at some point you have to determine your audience and your goals and modify your chart to match.

Like a movie, work on a chart is a continuous process. I could generate a couple more iterations on the chart above for different purposes, but you get the idea. At some point you have to print it. Just as you'd add the end credits to a film, the last touch here would be to put a source for the data below the graph, so people can follow up on the raw data themselves.

Before I set off on this exercise, I didn't know much about health care expenditures per capita around the world, except that the United States is the world leader by a wide margin. The graph reveals that, and by what magnitude. Look at China by comparison. What explains China's low expenditures? I might hypothesize a number of reasons, including obvious ones like the huge population there, but it would take further investigation, and perhaps more charts. One reason the Analytics Package grew in time was that some charts beget further charts.

Why did Greece's expenditures per capita go into decline starting in 2008? Was it the financial crisis? Why has Japan reversed its upward trajectory starting in 2012? Should we include some other countries for comparison, and how might we choose the most illuminating set?

Every month that first year at Amazon, I'd spend most of my waking hours gathering figures and confirming their accuracy, producing these graphs, and then puzzling over the stories behind their contours. The process of making line graphs was prelude to understanding.

To accelerate that understanding, upgrade your line graphs to be efficient and truthful. Some broadly applicable principles should guide you to the right neighborhood. To recap:

  • Don't include a legend; instead, label data series directly in the plot area. Usually labels to the right of the most recent data point are best. Some people argue that a legend is okay if you have more than one data series. My belief is that they're never needed on any well-constructed line graph.
  • Use thousands comma separators to make large figures easier to read
  • Related to that, never include more precision than is needed in data labels. For example, Excel often chooses two decimal places for currency formats, but most line graphs don't need that, and often you can round to thousands or millions to reduce data label size. If you're measuring figures in the billions or trillions, we don't need to see all those zeroes; in fact, they make the labels harder to read.
  • Format axis labels to match the format of the figures being measured; if it's US dollars, for example, format the labels as currency.
  • Look at the spacing of axis labels and increase the interval if they are too crowded. As Tufte counsels, always reduce non-data-ink as much as possible without losing communicative power.
  • Start your y-axis at zero (assuming you don't have negative values)
  • Try not to have too many data series; five to eight seems the usual limit, depending on how closely the lines cluster. On rare occasion, it's fine to exceed this; sometimes the sheer volume of data series is the point, to show a bunch of lines clustered. These are edge cases for a reason, however.
  • If you have too many data series, consider using small multiples if the situation warrants, for example if the y-axes can match in scale across all the multiples.
  • Respect color blind users and those who may not be able to see your charts with color, for example on a black and white printout, and have options for distinguishing data series beyond color, like line styles. At Amazon, as I dealt with so many figures, I always formatted negative numbers to be red and enclosed in parentheses for those who wouldn't see the figures in color.
  • Include explanations for anomalous events directly on the graph; you may not always be there in person to explain your chart if it travels to other audiences.
  • Always note, usually below the graph, the source for the data.

Some other suggestions which are sometimes applicable:

  • Display actual data values on the graph if people are just going to ask what the figures are anyway, and if they fit cleanly. If you include data labels, gridlines may not be needed. In fact, they may not be needed even if you don't include data labels.
  • Include targets for figures as asymptotes to help audiences see if you're on track to reach them.
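Both suggestions above are simple in code. Here's a matplotlib sketch (invented revenue figures and an invented target) with data values printed at each point and the target drawn as a labeled horizontal line, so gridlines can be dropped:

```python
# Data labels at each point plus a target line (the "asymptote"), so
# the audience sees at a glance whether the metric is on track.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

months = list(range(1, 13))
revenue = [40 + 5 * m for m in months]  # invented figures
target = 100                            # invented goal

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(months, revenue, color="black", marker="o", markersize=3)
for m, r in zip(months, revenue):
    # With values printed on the graph, gridlines become non-data-ink.
    ax.annotate(str(r), xy=(m, r), xytext=(0, 6),
                textcoords="offset points", ha="center", fontsize=7)

# The target as a dashed horizontal line with its own label.
ax.axhline(target, color="black", linestyle="dashed", linewidth=0.8)
ax.annotate(f"target: {target}", xy=(months[0], target), xytext=(0, 4),
            textcoords="offset points", fontsize=8)

ax.set_ylim(bottom=0)
ax.grid(False)  # data labels make gridlines redundant
fig.savefig("target.png")
```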

Why is The Visual Display of Quantitative Information such a formative text in my life? If it were merely a groundbreaking book on graphic excellence, it would remain one of my trusted references, sitting next to Garner's Modern American Usage, always within arm's reach. It wouldn't be a book I would push on those who never make graphs and charts.

The reason the book influenced me so deeply is that it is actually a book about the pursuit of truth through knowledge. It is ostensibly about producing better charts; what stays with you is the principles for general clarity of thought. Reading the book, chiseling away at my line graphs late nights, talking to people all over the company to understand what might explain each of them, gave me a path towards explaining the past and predicting the future. Ask anyone about any work of art they love, whether it's a book or a movie or an album, and it's never just about what it's about. I haven't read Zen and the Art of Motorcycle Maintenance; I'm guessing it wasn't written just for motorcycle enthusiasts.

A good line graph is a fusion of right and left brain, of literacy and numeracy. Just numbers alone aren't enough to explain the truth, but accurate numbers, represented truthfully, are a check on our anecdotal excesses, confirmation biases, tribal affiliations.

I'm reminded of Tufte's book whenever I brush against tendrils of many movements experiencing a moment online: rationalism, the Nate Silver/538 school of statistics-backed journalism, infographics, UX/UI/graphic design, pop economics, big history. And, much to my dismay, I'm reminded of the book most every time I see a line graph that could use some visual editing. Most people are lazy, most people use the defaults, and the defaults of the most popular charting application on the planet, Excel, are poor.

[Some out there may ask about Apple's Numbers. I tried it a bit, and while it's aesthetically cleaner than Excel, it's such a weak spreadsheet overall that I couldn't make the switch. I dropped Powerpoint for Keynote, though both have some advantages. Neither, unfortunately, includes a great charting tool, though they are simpler in function than the one in Excel. Google Sheets is, like Numbers, a really weak spreadsheet, and it's just plain ugly. If someone out there knows of a superior charting tool, one that doesn't require making charts in Illustrator like Tufte does, please let me know.] 

I love this exchange early on in Batman Begins between Liam Neeson's R'as Al Ghul (though he was undercover as Henri Ducard at the time) and Christian Bale's Bruce Wayne.

Bruce Wayne: You're vigilantes.
Henri Ducard: No, no, no. A vigilante is just a man lost in the scramble for his own gratification. He can be destroyed, or locked up. But if you make yourself more than just a man, if you devote yourself to an ideal, and if they can't stop you, then you become something else entirely.
Bruce Wayne: Which is?
Henri Ducard: Legend, Mr. Wayne.

It is absurdly self-serious in the way that nerds love their icons to be treated by mainstream pop culture, and I love it for its broad applicability. I've been known to drop some version of it in meetings all the time, my own Rickroll, but no one seems to find it amusing.

In this case, the passage needs some tweaking. But please do still read it with Liam Neeson's trademark gravitas.

A line graph is just another ugly chart lost in the scramble for its own gratification in a slide deck no one wants to read. It can be disregarded, forgotten. But if you make your graph more than just the default Excel format, if you devote yourself to Tufte's ideals, then your graph becomes something else entirely.

Which is?

A line graph without a legend. Remove the legend, Mr. Wayne, and become a legend.