Why Information Grows

It is hard for us humans to separate information from meaning because we cannot help interpreting messages. We infuse messages with meaning automatically, fooling ourselves to believe that the meaning of a message is carried in the message. But it is not. This is only an illusion. Meaning is derived from context and prior knowledge. Meaning is the interpretation that a knowledge agent, such as a human, gives to a message, but it is different from the physical order that carries the message, and different from the message itself. Meaning emerges when a message reaches a life-form or a machine with the ability to process information; it is not carried in the blots of ink, sound waves, beams of light, or electric pulses that transmit information.
 

From the book Why Information Grows by Cesar Hidalgo. I read this book back in 2017, but it's of no less interest now.

And it is the arrow of complexity—the growth of information—that marks the history of our universe and species. Billions of years ago, soon after the Big Bang, our universe did not have the capacity to generate the order that made Boltzmann marvel and which we all take for granted. Since then, our universe has been marching toward disorder, as Boltzmann predicted, but it has also been busy producing pockets that concentrate enormous quantities of physical order, or information. Our planet is a chief example of such a pocket.
 

When one first encounters the second law of thermodynamics, it's easy to tumble into despair at the pointlessness of everything. With the universe eventually fated to wind down into heat death, what is the point of it all?

In this existential void, the presence of pockets of information and order can feel like symbols of rebellion, a raised fist spray painted on a fragment of wall that remains from a bombed-out building. In manifestations of order we see intent, in intent we interpret meaning, and in meaning we find comfort.

Information, when understood in its broad meaning as physical order, is what our economy produces. It is the only thing we produce, whether we are biological cells or manufacturing plants. This is because information is not restricted to messages. It is inherent in all the physical objects we produce: bicycles, buildings, streetlamps, blenders, hair dryers, shoes, chandeliers, harvesting machines, and underwear are all made of information. This is not because they are made of ideas but because they embody physical order. Our world is pregnant with information. It is not an amorphous soup of atoms, but a neatly organized collection of structures, shapes, colors, and correlations. Such ordered structures are the manifestations of information, even when these chunks of physical order lack any meaning.
 

There are plenty of books on information theory, and viewing the universe through the lens of information and computation is increasingly popular, but Hidalgo's book is more readable than most.

To battle disorder and allow information to grow, our universe has a few tricks up its sleeve. These tricks involve out-of-equilibrium systems, the accumulation of information in solids, and the ability of matter to compute.
 
It is the growth of information that unifies the emergence of life with the growth of economies, and the emergence of complexity with the origins of wealth.
 
In twenty-six minutes Iris traveled from the ancientness of her mother’s womb to the modernity of twenty-first-century society. Birth is, in essence, time travel.
 

Birth as time travel is one of those metaphors that, once heard, lodges in your mind like something you always knew. When Arnold Schwarzenegger time travels back from the future to the modern day in The Terminator, he arrives naked, like a newborn.

[It is unclear why a cyborg from the future speaks with a thick Austrian accent, one of the few mysteries I have always hoped would be explained in some throwaway expository joke. My guess is that the voice was a marketing Easter egg, like celebrity voices in Waze, and someone forgot to flip the Terminator back to its factory default voice before sending it back in time.]

Humans are special animals when it comes to information, because unlike other species, we have developed an enormous ability to encode large volumes of information outside our bodies. Naively, we can think of this information as the information we encode in books, sheet music, audio recordings, and video. Yet for longer than we have been able to write we have been embodying information in artifacts or objects, from arrows to microwave ovens, from stone axes to the physical Internet. So our ability to produce chairs, computers, tablecloths, and wineglasses is a simple answer to the eternal question: what is the difference between us, humans, and all other species? The answer is that we are able to create physical instantiations of the objects we imagine, while other species are stuck with nature’s inventory.
 

Another reason humans wouldn't evolve on a gaseous planet like Jupiter, besides the fact that we'd just burn up, is that without any solids we'd have no way of encoding information to pass on to future generations. Therefore, any advanced civilization in the universe would, it would seem, live in physical conditions that allow for the formation of solids, but not solids that are too rigid.

The temperature band matters. We need solids that are malleable enough to encode richer sets of information. Add to that the ability to compute, which we see at every scale in our world, down to the cellular level, and suddenly you have life. There is a logic to why we look for specific conditions in the universe as precursors for life, and the search can be defined more broadly than just looking for water, which is a downstream condition. Further upstream, we just want a planet with solids, in a particular band of temperatures.

Such conditions allow living creatures to record and pass along information to the next generation. When humans finally were able to do so, they in effect conquered time. No longer did the knowledge of one generation evaporate into the sinkhole of mortality.

The car’s dollar value evaporated in the crash not because the crash destroyed the atoms that made up the Bugatti but because the crash changed the way in which these were arranged. As the parts that made the Bugatti were pulled apart and twisted, the information that was embodied in the Bugatti was largely destroyed. This is another way of saying that the $2.5 million worth of value was stored not in the car’s atoms but in the way those atoms were arranged. That arrangement is information.
 
...
 
So the value of the Bugatti is connected to physical order, which is information, even though people still debate what information is. According to Claude Shannon, the father of information theory, information is a measure of the minimum volume of communication required to uniquely specify a message. That is, it’s the number of bits we need to communicate an arrangement, like the arrangement of atoms that made the Bugatti.
 
...
 
The group of Bugattis in perfect shape, however, is relatively small, meaning that in the set of all possible rearrangement of atoms—like people moving in a stadium—very few of these involve a Bugatti in perfect condition. The group of Bugatti wrecks, on the other hand, is a configuration with a higher multiplicity of states (higher entropy), and hence a configuration that embodies less information (even though each of these states requires more bits to be communicated). Yet the largest group of all, the one that is equivalent to people sitting randomly in the stadium, is the one describing Bugattis in their “natural” state. This is the state where iron is a mineral ore and aluminum is embedded in bauxite. The destruction of the Bugatti, therefore, is the destruction of information. The creation of the Bugatti, on the other hand, is the embodiment of information.
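
To make the multiplicity argument concrete, here is a toy sketch in Python. It is my own illustration, not Hidalgo's: I treat a car as 100 parts, each either in its designed place or out of place, so a macrostate is just a count of misplaced parts and its multiplicity is the number of specific arrangements consistent with that count. The part count and the two-state encoding are assumptions chosen only to keep the arithmetic simple.

```python
from math import comb, log2

# Toy model: a "car" with 100 parts, each either in its designed
# place or out of place. A macrostate is the number of misplaced
# parts; its multiplicity is how many microstates match it.
N_PARTS = 100

for label, misplaced in [("showroom Bugatti", 0),
                         ("crashed Bugatti", 30),
                         ("atoms back in the ore", 50)]:
    multiplicity = comb(N_PARTS, misplaced)
    entropy_bits = log2(multiplicity)  # Boltzmann-style entropy, expressed in bits
    print(f"{label:>22}: multiplicity {multiplicity:.2e}, entropy ~{entropy_bits:.0f} bits")
```

The showroom state is the rare one: there is essentially only one way for every part to sit exactly where the designers put it, which is why the intact car embodies information while the wreck, with its vastly larger multiplicity, embodies less.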
 

One can separate the intrinsic value of an item, defined above as the rarity of its configuration, from its extrinsic value, which derives from symbolic or emotional qualities like nostalgia.

In Pulp Fiction, Bruce Willis risks life and limb to recover a watch given to him by his father. There's no evidence it's a particularly rare watch; he could likely buy another just like it. But its symbolic value to him is extrinsic to the item yet tethered to it the way a genie is trapped in a magic lamp (and that special meaning is conveyed in the now immortal speech by Christopher Walken).

Even the most rational people I know own something that's not physically rare but emotionally rich, a talisman or totem that they use to summon whatever power it holds, whether it be nostalgia or regret or some other enchantment known only to themselves.

What Shannon teaches us is that the amount of information that is embodied in a tweet is equal to the minimum number of yes-or-no questions that Brian needs to ask to guess Abby’s tweet with 100 percent accuracy. But how many questions is that?
 
Shannon’s theory tells us that we need 700 bits, or yes-or-no questions, to communicate a tweet written using a thirty-two-character alphabet. Shannon’s theory is also the basis of modern communication systems.
 

One mathematical reason for the rising use of emoji on Twitter and in other forms of online communication may be that they increase the amount of information that can be encoded in 140 (and now 280) characters.
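
Shannon's count is just message length times the log of the alphabet size. Here is a minimal sketch in Python using the book's 140-character, 32-symbol example; the emoji-enlarged alphabet size below is a made-up number, purely for illustration.

```python
from math import log2

def tweet_bits(length: int, alphabet_size: int) -> float:
    """Minimum bits to uniquely specify a message, assuming each
    character is drawn independently from a uniform alphabet."""
    return length * log2(alphabet_size)

print(tweet_bits(140, 32))         # 700.0 -- the book's figure: 5 bits per character
print(tweet_bits(140, 32 + 4096))  # ~1682 -- a hypothetical emoji-enlarged alphabet
```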

You'll recall from earlier that the third of the three conditions that allow information to grow is the ability of matter to compute.

To illustrate the prebiotic nature of the ability of matter to process information, we need to consider a more fundamental system. Here is where the chemical systems that fascinated Prigogine come in handy. Consider a set of chemical reactions that takes a set of compounds {I} and transforms them into a set of outputs {O} via a set of intermediate compounds {M}. Now consider feeding this system with a steady flow of {I}. If the flow of {I} is small, then the system will settle into a steady state where the intermediate inputs {M} will be produced and consumed in such a way that their numbers do not fluctuate much. The system will reach a state of equilibrium. In most chemical systems, however, once we crank up the flow of {I} this equilibrium will become unstable, meaning that the steady state of the system will be replaced by two or more stable steady states that are different from the original state of equilibrium.13 When these new steady states emerge, the system will need to “choose” among them, meaning that it will have to move to one or the other, breaking the symmetry of the system and developing a history that is marked by those choices. If we crank up the inflow of the input compounds {I} even further, these new steady states will become unstable and additional new steady states will emerge. This multiplication of steady states can lead these chemical reactions to highly organized states, such as those exhibited by molecular clocks, which are chemical oscillators, compounds that change periodically from one type to another. But does such a simple chemical system have the ability to process information? Now consider that we can push the system to one of these steady states by changing the concentration of inputs {I}. Such a system will be “computing,” since it will be generating outputs that are conditional on the inputs it is ingesting. It would be a chemical transistor. In an awfully crude way this chemical system models a primitive metabolism. In an even cruder way, it is a model of a cell differentiating from one cell type to another—the cell types can be viewed abstractly as the dynamic steady states of these systems, as the complex systems biologist Stuart Kauffman suggested decades ago. Highly interacting out-of-equilibrium systems, whether they are trees reacting to the change of seasons or chemical systems processing information about the inputs they receive, teach us that matter can compute. These systems tell us that computation precedes the origins of life just as much as information does. The chemical changes encoded by these systems are modifying the information encoded in these chemical compounds, and therefore they represent a fundamental form of computation. Life is a consequence of the ability of matter to compute.
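
Here is a deliberately crude sketch of the bifurcation idea in Python; it is my own toy, not Hidalgo's or Prigogine's chemistry, and the function and parameter names are mine. A single variable has one stable steady state when the drive is low and two once the drive is cranked up, so the state it settles into is conditional on its input, which is the "chemical transistor" behavior described above. The cubic form of the dynamics is an assumption chosen only because it is about the simplest system that behaves this way.

```python
def settle(drive: float, nudge: float, dt: float = 0.01, steps: int = 20_000) -> float:
    """Euler-integrate dx/dt = drive*x - x**3 from a small initial nudge
    and return the steady state the system relaxes into."""
    x = nudge
    for _ in range(steps):
        x += dt * (drive * x - x**3)
    return x

# Low drive: a single steady state at 0, whatever the input nudge.
print(settle(drive=-1.0, nudge=+0.1), settle(drive=-1.0, nudge=-0.1))  # ~0.0, ~0.0

# High drive: the old state is unstable, two new ones appear, and the
# system "chooses" between them based on the sign of its input.
print(settle(drive=+1.0, nudge=+0.1), settle(drive=+1.0, nudge=-0.1))  # ~+1.0, ~-1.0
```

Whether this counts as "computing" is exactly the point: the output is a function of the input, even though nothing here is alive.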
 

What's lovely about all of these conditions that allow information to grow is their seeming relevance to individuals and groups of individuals, like corporations or societies or markets.

Humans are concentrated bundles of information with compute power, and when we push ourselves out of equilibrium, we accumulate information. When we crank up our inputs and force ourselves out of our own equilibrium, as we do when we become students, we grow as we restore ourselves to steady state. Whenever anyone complains that they're in a rut, I always counsel them to force themselves out of equilibrium.

***

That covers much of the first half of the book, all fascinating. However, the part of the book that's of broader interest to a business audience is Hidalgo's discussion of the economy as a creator of information.

It's easiest to understand the information creation capacity of an economy by examining its outputs, and the simplest outputs to understand are physical products.

Thinking about products as crystals of imagination tells us that products do not just embody information but also imagination. This is information that we have generated through mental computations and then disembodied by creating an object that mimics the one we had in our head. Edible apples existed before we had a name for them, a price for them, or a market for them. They were present in the world. As a concept, apples were simply imported into our minds. On the other hand, iPhones and iPads are mental exports rather than imports, since they are products that were begotten in our minds before they became part of our world. So the main difference between apples and Apples resides in the source of their physical order rather than in their embodiment of physical order. Both products are packets of information, but only one of them is a crystal of imagination.
 

Like many navel gazers in the tech industry, I'm guilty of stereotyping companies. Apple's strength is integrated hardware and software, Google is the king of machine learning and crunching large data sets, Facebook is the social network to end all social networks, and Amazon is the everything platform.

However, if you haven't worked at or been inside any of those companies, it's fairest to judge them as black boxes: inputs disappear into them and come back out as various outputs, usually products and services like gadgets or websites or applications. Everything else is a mild form of fan fiction.

By analyzing a company's outputs, one can deduce a great deal about its capabilities. Hidalgo does the same but at the country level.

The idea of crystallized imagination tells us that a country’s export structure carries information about more than just its abundance of capital and labor. A country’s export structure is a fingerprint that tells us about the ability of people in that country to create tangible instantiations of imaginary objects, such as automobiles, espresso machines, subway cars, and motorcycles, and of course about the myriad of specific factors that are needed to create these sophisticated products. In fact, the composition of a country’s exports informs us about the knowledge and knowhow that are embodied in that country’s population.
 

A country that can export a product like an iPhone generally has greater generative power than one that can only export raw materials like bananas. The telltale clues to the economic potential of a country lie not in its imports but in its exports.

So what has any of this to do with Chile? The only connection between Chile and the history of electricity comes from the fact that the Atacama Desert is full of copper atoms, which, just like most Chileans, were utterly unaware of the electric dreams that powered the passion of Faraday and Tesla. As the inventions that made these atoms valuable were created, Chile retained the right to hold many of these atoms hostage. Now Chile can make a living out of them. This brings us back to the narrative of exploitation we described earlier. The idea of crystallized imagination should make it clear that Chile is the one exploiting the imagination of Faraday, Tesla, and others, since it was the inventors’ imagination that endowed copper atoms with economic value. But Chile is not the only country that exploits foreign creativity this way. Oil producers like Venezuela and Russia exploit the imagination of Henry Ford, Rudolf Diesel, Gottlieb Daimler, Nicolas Carnot, James Watt, and James Joule by being involved in the commerce of a dark gelatinous goo that was virtually useless until combustion engines were invented. Making a strong distinction between the generation of value and the appropriation of monetary compensation helps us understand the difference between wealth and economic development. In fact, the world has many countries that are rich but still have underdeveloped economies. This is a distinction that we will explore in detail in Part IV. But making this distinction, which comes directly from the idea of crystallized imagination, helps us see that economic development is based not on the ability of a pocket of the economy to consume but on the ability of people to turn their dreams into reality. Economic development is not the ability to buy but the ability to make.
 

At a corporate level, I can recall an age when Sony was the king of consumer electronics the world over. I first coveted a Walkman, then later a Discman. Our family spent its formative years huddled around a giant (at the time) Sony Trinitron TV, and we were the envy of all my friends for owning one. I looked forward to any trip to Japan for a chance to walk the electronics districts to purchase the coolest gadgets on the planet, and for years I owned a Minidisc player model that you couldn't find in the U.S.

And then the world shifted, and the gadget which subsumed all other gadgets was the computer, and as it shrank in size while growing in computational power, the way we interacted with such devices increasingly became software-based. In that competition, the vector which mattered more than anything became software design, a skill Sony had not mastered.

The company that understood both software and hardware design better than any company in the world happened to be located in Silicon Valley, not Japan, and, after a long Wintel interregnum, caused by a number of business factors covered comprehensively elsewhere, Apple's unique skills found themselves in a universe they could really dent. And dent they did.

Thanks especially to the market opportunity created by the smartphone, which it seized with the iPhone, Apple not only surpassed Sony and moved the balance of power in consumer technology across the Pacific Ocean to American shores but became the most valuable company in the entire world.

***

Not all information is easily embodied. For example, for a while I puzzled over what I'll call the Din Tai Fung Paradox.

Din Tai Fung is a restaurant chain, and I visited the original outlet in Taipei decades ago with my mother. They're known for their Shanghainese soup dumplings, made with a very delicate wrap that somehow never breaks and dumps its precious cargo of pork broth until the moment you prod it with your chopsticks just so. Some will argue whether Din Tai Fung is all that and a bucket of chicken, but at a minimum I find the menu to be satisfying comfort food done consistently, in a setting that is usually cleaner and better kept than your average chain restaurant outlet. You'll find better deals from a street vendor and more elaborate preparation at a higher-end restaurant, but Din Tai Fung industrializes and scales a Chinese staple. We don't pay enough attention to scale.

The mystery is why Din Tai Fung has opened so few outlets; they've dropped locations in only a handful of cities across about ten countries, and every Din Tai Fung is packed solid from open to close, with the kind of ever-present line of humans snaking outside the front door that you so rarely see at any restaurant, let alone a chain.

For a few months, a new outlet was rumored to be opening in San Francisco soon, and among my friends it was as momentous a rumor as if a new Star Wars teaser trailer had dropped. Ultimately, one opened in the Bay Area, but in Santa Clara instead of San Francisco. 

Which leads to a further mystery: why haven't any competing chains opened up to make the same items to fill the market void? I would never open a restaurant, but my family knows I'd make an exception if I were granted the opportunity to open a branch of Din Tai Fung anywhere. I bring it up every family gathering, when there's a lull in the conversation. Forget cryptocurrency, I want to mint me some Din Tai Fung coin.

At every Din Tai Fung I've been to, they have a glass window so you can look into the kitchen to see the soup dumplings being wrapped, always by kitchen staff wearing white uniforms, almost like lab assistants, an impression magnified by the branches that require face masks. It's rumored that the branches in Asia try to hire the tallest, most attractive men to man the soup dumpling assembly line, but it sounds about as true as a lot of things my aunts and uncles tell me, which is to say it's more credible than I'd care to admit.

The hermetic vibe behind the glass is as far from a vendor selling goods from a street cart as possible; some find street food charming, but if you're taking this food to a global audience it needs to be sanitized or sterilized, the same way movies for the Chinese market strip out any storylines that might offend. It's not just the front of the house that's immaculate; the show behind the glass says they have nothing to hide. It's the equivalent of the blackjack dealer at a casino clapping and turning their hands one way and the other before moving to the next table.

More interesting to me is that Din Tai Fung doesn't even bother to hide the process behind its staple dish; the evidence is on thousands of smartphones by this point, since everyone seems to stop to take a photo or video of the assembly line while waiting for their table.

And yet any Chinese food fan knows it's notoriously hard to find a good soup dumpling. In this age where recipes for almost anything are available online for free, why can't you find a good soup dumpling in most major cities in the world? Or, for that matter, a good burrito, or any dish you love? Why are these crystals of imagination so unevenly distributed when the recipes for making them are so broadly available?

The answer, as any home chef who has tried to make a dish from some highfalutin cookbook knows, is simple: you can have the most precise ingredient list and directions and still struggle to make anything approaching what you ate, whether it came from a $400 tasting menu or a mainstream cookbook. Cooking is not nearly as deterministic as the term recipe implies.

Slight variations in environment, weather, ingredients, and cookware can lead to massive differences in the final product. Your oven may say 400 degrees, but the actual temperature inside, at the precise spot where you've placed your baking dish, may be different. That celery you use for your mirepoix today may not be as fresh as the celery you used last week. The air pressure where you're cooking may differ from one day to the next, and the bacteria in the air may also vary. Great chefs appear on Top Chef and flail making dishes they've made hundreds of times in their own restaurant kitchens, because every bit of environmental variation matters.

We may glamorize the image of the genius, heroic chef, working magic to create a delicious and beautifully plated dish that a waiter places before us with a balletic flourish, but the true value creation in a restaurant comes from translating that moment of genius into a rote, repeatable cycle. The popularity of sous vide as a cooking technique, even at high-end restaurants, comes down to its repeatable precision and accuracy. Ask any chef and they'll tell you the value of a line cook who can cook dozens of proteins to the right level of doneness every time, given the high cost of fish and meat.

In addition to all those conditionals, much of cooking skill comes down to learned muscle memory and pattern recognition that can only be encoded in a human being through repeated trial and error. I tried to learn some of my favorites among my mother's and grandmother's dishes by writing down recipes they dictated to me, but much was lost in translation. Like so much maternal magic, it could only be learned, truly, at their side, with an apron on, watching, imitating, botching one dish after another, until some of it seeped into my bones.

In a memorable segment from the documentary Jiro Dreams of Sushi, apprentice Daisuke Nakazawa is assigned the job of making egg sushi, or tamago. He believes it will be simple, but again and again, Jiro rejects his work. Nakazawa ends up making over 200 rejected samples until finally, one day, Jiro approves. Nakazawa cries in relief and joy.

***

Hardware and software are not like cooking. When knowledge and instructions can be encoded in bits, a level of precision is possible that is effectively, for the purposes of this discussion, deterministic. Manufacturing a hundred million iPhones is like food production, but not the type done in high end or home kitchens. Instead, it is more like producing a hundred million Oreos.

There is one country in the world where that many iPhones can be manufactured at a cost that allows Apple to reap its insane profits: China. I can't think of any other country in the world, not India or Mexico or the United States, or all of Europe together, that can make that many iPhones at that price to meet market demand year after year. Some countries have the labor but not the skills, others have the skills but not enough labor, and others just can't do the work as cheaply as China can.

Recall that the potential of an economy can be judged by the complexity of its exports. Based on that, it's difficult to imagine an economy outside of the U.S. with more potential than China. Some of the most complex products in the world, and the iPhone deserves to be on that list, are made in China.

I've backed many a Kickstarter hardware project, and without fail, every one has been made in China, usually Shenzhen. Inevitably, when the products are delayed, the project's creators will send an update with some photos of a few of them in China, at some plant, examining some part that will get the project back on track, or with their arms around a few Chinese plant managers giving a thumbs up sign.

Kickstarter often feels like an industrial design, software design, and marketing layer grafted on top of the manufacturing capabilities of Shenzhen. It is an early warning indicator of China's economic potential, and of the gap that remains in realizing it.

Here is another. Foxconn assembles iPhones for Apple, and for its efforts it makes anywhere from $8 to $30 per iPhone, depending on which article you believe. Whatever the exact figure, we know it sits somewhere in that modest range.

Apple, in contrast, makes hundreds of dollars per iPhone. They earn that premium, many multiples of what Foxconn earns, by virtue of being the ones who designed every aspect of the phone, from the software to the hardware. China can supply labor and even sometimes components, but the crystal of imagination that is the iPhone, perhaps the most valuable such crystal in the history of the world, comes almost entirely from the imagination of employees of Apple. Foxconn is one link in a long supply chain, and that link isn't the one made of gold.

However, even to have the capability of assembling an iPhone for less than the cost of a lunch in San Francisco is a skill, one China has demonstrated again and again. Many other countries wish they had such a demonstrated skill. Were China ever able to gain some of the software and industrial design skills of a company like Apple, it would be even more of an economic powerhouse than it is now.

That's a massive conditional. It's not something that can be learned by mere handwaving or even sheer industriousness. After all, Sony would have returned to its former glory, and Samsung would be even more dominant globally, if software design skills were so easily learned.

Someday someone will write a history of software design and how Apple came to build up that capability more than any other technology company, and I'll be among its most eager readers, because it's an untold story that holds the key to one of the greatest value creation stories in the history of business.

...our world is one in which knowledge and knowhow are “heavier” than the atoms we use to embody their practical uses. Information can be moved around easily in the products that contain it, whether these are objects, books, or webpages, but knowledge and knowhow are trapped in the bodies of people and the networks that these people form. Knowledge and knowhow are so “heavy” that when it comes to a simple product such as a cellphone battery, it is infinitely easier to bring the lithium atoms that lie dormant in the Atacama Desert to Korea than to bring the knowledge of lithium batteries that resides in Korean scientists to the bodies of the miners who populate the Atacaman cities of Antofagasta and Calama. Our world is marked by great international differences in countries’ ability to crystallize imagination. These differences emerge because countries differ in the knowledge and knowhow that are embodied in their populations, and because accumulating knowledge and knowhow in people is difficult. But why is it hard for us to accumulate the knowledge and knowhow we need to transform our dreams into reality?
 

If knowledge were so easy to transfer, I'd be a three-star Michelin Chef because someone gifted me a copy of the Eleven Madison Park cookbook.

Getting knowledge inside a human’s nervous system is not easy because learning is both experiential and social.5 To say that learning is social means that people learn from people: children learn from their parents and employees learn from their coworkers (I hope). The social nature of learning makes the accumulation of knowledge and knowhow geographically biased. People learn from people, and it is easier for people to learn from others who are experienced in the tasks they want to learn than from people with no relevant experience in that task. For instance, it is difficult to become an air traffic controller without learning the trade from other air traffic controllers, just as it is difficult to become a surgeon without having ever been an intern or a resident at a hospital. By the same token, it is hard to accumulate the knowhow needed to manufacture rubber tires or an electric circuit without interacting with people who have made tires or circuits.6 Ultimately, the experiential and social nature of learning not only limits the knowledge and knowhow that individuals can achieve but also biases the accumulation of knowledge and knowhow toward what is already available in the places where these individuals reside. This implies that the accumulation of knowledge and knowhow is geographically biased.
 

What governs the information production capacity of a country? Hidalgo coins two terms to analyze this problem. One is the personbyte.

We can simplify this discussion by defining the maximum amount of knowledge and knowhow that a human nervous system can accumulate as a fundamental unit of measurement. We call this unit a personbyte, and define it as the maximum knowledge and knowhow carrying capacity of a human.
 

The other term is firmbyte.

The limited proliferation of megafactories like the Rouge implies that there must be mechanisms that limit the size of the networks we call firms and make it preferable to disaggregate production into networks of firms. This also suggests the existence of a second quantization limit, which we will call the firmbyte. It is analogous to the personbyte, but instead of requiring the distribution of knowledge and knowhow among people, it requires them to be distributed among a network of firms.
 

Hidalgo then delves a bit into Coase's transaction cost theory of the firm. Traditionally, Coase's theory is used as a way to explain why firms are fundamentally limited in their size, the idea being that at some size, external transactions become cheaper than internal coordination costs and so it's more efficient to just transact externally rather than produce internally.

I'm not interested in examining that topic now. Instead, let's assume that firms all do have some asymptote in size beyond which Coase's anchor becomes too heavy. The interesting implication is that given the existence of a ceiling on the size of the firmbyte, if some chunk of knowledge exceeds that capacity then it can only be carried by a network of firms.

It's long been said that the center of the technology universe shifted from Boston's Route 128 to Silicon Valley because California banned non-competes (here's one study). Hidalgo's theory of the finite knowledge-carrying capacity of a network of humans and firms explains how this works. The free movement of employees in Silicon Valley allows the region's knowledge-carrying capacity to increase at the expense of any single firm's benefit. Per Coase, the cost of moving information in Silicon Valley, as embodied by an employee carrying a personbyte from one firm to the next, is lower than it was in the Route 128 corridor.

Let's telescope back out to the country level. What applies at the regional or industry level holds at the country level. A country's knowledge carrying capacity, and thus its information production power, is influenced in part by the size of networks it can form.

In his 1995 book Trust, he [Francis Fukuyama] argues that the ability of a society to form large networks is largely a reflection of that society’s level of trust. Fukuyama makes a strong distinction between what he calls “familial” societies, like those of southern Europe and Latin America, and “high-trust” societies, like those of Germany, the United States, and Japan.
 
Familial societies are societies where people don’t trust strangers but do trust deeply the individuals in their own families (the Italian Mafia being a cartoon example of a familial society). In familial societies family networks are the dominant form of social organization where economic activity is embedded, and are therefore societies where businesses are more likely to be ventures among relatives. By contrast, in high-trust societies people don’t have a strong preference for trusting their kin and are more likely to develop firms that are professionally run. Familial societies and high-trust societies differ not only in the composition of the networks they form—as in kin and non-kin—but also in the size of the networks they can form. This is because the professionally run businesses that evolve in high-trust societies are more likely to result in networks of all sizes, including large ones. In contrast, familial societies are characterized by a large number of small businesses and a few dominant families controlling a few large conglomerates.
 
Yet, as we have argued before, the size of networks matters, since it helps determine the economic activities that take place in a location. Larger networks are needed to produce products of higher complexity and, in turn, for societies to achieve higher levels of prosperity. So according to Fukuyama, the presence of industries of different sizes indicates the presence of trust. In his own words: “Industrial structure tells an intriguing story about a country’s culture. Societies that have very strong families but relatively weak bonds of trust among people unrelated to one another will tend to be dominated by small, family-owned and managed business. On the other hand, countries that have vigorous private nonprofit organizations like schools, hospitals, churches, and charities, are also likely to develop strong private economic institutions that go beyond the family.”
 

In Tyler Cowen's conversation with economist Luigi Zingales, the latter hints at the limitations of familial economies in humorous fashion:

One friend of mine was saying that the demise of the Italian firm family structure is the demise of the Italian family. In essence, when you used to have seven kids, one out of seven in the family was smart. You could find him. You could transfer the business within the family with a little bit of meritocracy and selection.
 
When you’re down to one or two kids, the chance that one is an idiot is pretty large. The result is that you can’t really transfer the business within the family. The biggest problem of Italy is actually fertility, in my view, because we don’t have enough kids. If you don’t have enough kids, you don’t have enough people to transfer. You don’t have enough young people to be dynamic.
 
The Italian culture has a lot of defects, but the entrepreneurship culture was there, has been there, and it still is there, but we don’t have enough young people.
 

Low fertility's impact on economies is an issue globally, Japan being one example, but low trust outside the family is an even broader constraint on the knowledge-carrying capacity of an economy. If you can't form firms as large as another country's, you can't compete in some businesses, and the information-producing capability of your economy has a lower ceiling.

If you run a company, you're no doubt familiar with the efficiency gains that arise when different employees and departments operate with high trust. Links form easily given an assumption of low risk, and knowledge moves more quickly, fluidly. Networks then facilitate trust in a virtuous cycle, an example being the military as an integrating institution in a multi-ethnic society.

Trust based on family has its own advantages, but for now I'm focused on an economy's ceiling, and networks that throw off the shackles of family-based firms can scale more. China not only has the population to supply a workforce that can assemble 100 million iPhones in a year, it has an economy that has moved beyond any roots in family-based trust.

Hidalgo's theory also explains why we don't see more geographic leakage of industry know-how. Why aren't there Silicon Valleys everywhere?

The personbyte theory can also help us explain why large chunks of knowledge and knowhow are hard to accumulate and transfer, and why knowledge and knowhow are organized in the hierarchical pattern that is expressed in the nestedness of the industry-location data. This is because large chunks of knowledge and knowhow need large networks of people to be embodied in, and transferring or duplicating large networks is not as easy as transferring a small group of people. As a result, industry-location networks are nested, and countries move to products that are close by in the product space.
 

When the knowledge required to create something like an iPhone or a Hollywood film requires the interaction of multiple people, with all their accumulated knowledge, seizing it for yourself isn't as easy as poaching one employee or sprinting off with a burning branch to give fire to mankind like Prometheus. Thus we understand why, besides its weather, LA has such a grip on filmmaking for the global market, why other handset manufacturers can't just reverse engineer an iPhone, and why, despite having hundreds of millions of users of iMessage, Apple isn't a credible threat in social networking.

When I study the Chinese tech market, I see an incredibly high ceiling. In fact, the Chinese consumer market in tech is more dynamic now than its counterpart in Silicon Valley. Once, China was belittled for simply copying U.S. tech companies. It's true that there is a Chinese Bizarro instance of every successful U.S. tech company: a Chinese Google, Facebook, Amazon, Twitter, Instagram, YouTube, and so on.

Thanks to that complex interaction of culture and technology, however, China now creates companies with no real American equivalents, and that extends beyond WeChat. China also has many more dense cities than America, and density creates its own unique consumer technology opportunities. You'll run out of fingers and toes before you get down to a Chinese city as small as New York City, and that matters when so many social products rely on metropolitan density as dry kindling.

The competition between tech companies in the U.S. draws scandalized chatter from the peanut gallery, but the pace at which something like Snapchat Stories was copied in the U.S. would be seen as laughably slow in China. Not only are features of competitors routinely copied within a week or two in China, employees are poached all the time in what is closer to a true approximation of a free labor market than even Silicon Valley. Knowledge moves quickly, freely.

Three things, in my observation, hold Chinese companies back from capturing more share in the international market outside Chinese borders. Two are related: an internationally appealing industrial design aesthetic and an internationally appealing software design aesthetic.

It's true that many people who find Chinese software UIs busy and crowded can't read Chinese and thus don't understand their localized appeal to the Chinese market. As eye-tracking studies have shown (example), Chinese users scan pages differently, and why shouldn't they, considering the fundamental differences between an alphabetic language like English and a logographic one like Chinese?

Still, most of the international market can't read Chinese. In my past work with UI designers in China, I found it took more prompting to arrive at something broadly intuitive for, say, an American market.

The same goes for industrial design, where, akin to the denser informational aesthetic of Chinese software, a somewhat more maximalist impulse takes hold. It's still quite common to walk into an Asian electronics superstore and see display signage that lists dozens of bullet points of features to sell a product. Contrast that with the almost nonexistent signage in an Apple Store, the extreme opposite.

A more tangible example is the user interface of everyone's favorite cooking gadget, the Instant Pot. I received one as a gift last year, and I think it's a real value at $80 or so for the base model. For how harried we all feel, a pressure cooker is a far more useful kitchen gadget than most.

However, this is the instrument panel on the front of the Instant Pot.

In practice, it's even more confusing than it appears at first glance. I won't delve into it here, but with a simple design pass the entire UI could be made much less intimidating and much more intuitive. Given what any pressure cooker actually does, the functionality could be exposed through much simpler instrumentation.

These two skill gaps in software and industrial design allow for a continued Kickstarter arbitrage opportunity: slap more internationally appealing software and industrial design, along with more internationally appealing marketing, on top of Shenzhen's manufacturing capabilities.

The last thing holding back more Chinese startups, in my experience, is a shortage in the professional management class. I know, I know, MBAs get a bad rap in the domestic market, but having engineers as CEOs at so many Chinese tech companies comes with its own drawbacks.

This management gap may be related to a style of org structure and management that others have described to me as less conducive to certain types of innovation, though it's hard for me to assess without having worked inside a Chinese company.

None of this needs to matter since the Chinese market is so massive. Chinese startups can succeed wildly without ever making a peep outside their home territory. Besides, how a design aesthetic and process can seep into a country's soul remains a mystery to me, but my guess is it's about as slow-moving as trying to produce a high quality soup dumpling in a new market.

Still, I love to muse on the potential of China. In fact, there is one Chinese company that best exemplifies the potential of the country's tech market on the international stage. Last summer, a friend of mine who had worked at this company heard I was in the market for a drone and referred me to a friend who was selling an extra, lightly used Mavic Pro which he'd purchased after he thought he'd lost his original.

I don't know the first thing about flying drones, but it took me all of fifteen minutes or so to get the thing up and flying around in the air, capturing 4K video. It is a fantastic feat of engineering, probably still the single drone I'd recommend to anyone looking to get into drone photography (though I recommend getting a bundle with some extra batteries and a carrying case).

DJI had a few advantages in surging to its undisputed leadership position in the global drone market. First, this is a product category where making the product actually work well is a more complex task than in most others. Many drones just don't fly that well. Being an engineering-led company is a strength here, and as long as the industrial design is optimized for flight, it doesn't really matter if your product isn't the sleekest. You won't care what it looks like when it's several hundred feet up in the air.

Second, from a software design perspective, drone UI design can piggyback on flight UI templates that have been worked out over the years. One reason I was up and running so quickly with my Mavic Pro is that the flight sticks imitate video game flight controls. The UI isn't quite as simple as I'd like, but I was fluent much more quickly than the hefty page count of the instruction manual implied.

Estimates of DJI's market share vary, but they are all well north of 50%, and most of its competitors have either left the market entirely or are struggling to stay aloft, so to speak. Here is a vertically integrated Chinese company that most definitely makes more per unit than the cheap-San-Francisco-lunch fee Foxconn earns on each iPhone it assembles.

Now, making drones and building smartphones or writing apps are not the same skills. Drones, as exciting as they are, still aren't the type of thing I'd recommend except to photography enthusiasts. And China is a long way from dominating the consumer electronics market internationally with a massive portfolio of products from domestic, vertically integrated companies.

But the ceiling at least exists; it's not theoretical. That is more than any other country outside the U.S. can say, and how close China comes to that ceiling is one of the questions that will determine the relative economic power of China versus the United States in this century.

***

That China can export drones more easily than it can import, say, the software and industrial design know-how of a company like Apple points, at a higher level, to a fundamental question about how we pass along knowledge of any sort. Why are we not better at transferring know-how to industrial workers who are out of a job? Why hasn't the internet produced a global leveling of industrial know-how at the country level?

Hidalgo notes:

At a finer scale, economies still lack the intimate connection between knowhow and information that is embodied in DNA and which allows biological organisms to pack knowhow so tightly. A book on engineering, art, or music can help develop an engineer, artist, or musician, but it can hardly do so with the elegance, grace, and efficiency with which a giraffe embryo unpacks its DNA to build a giraffe. The ability of economies to pack and unpack knowhow by physically embodying instructions and templates as written information is much more limited than the equivalent ability in biological organisms.
 

We are nowhere near our maximum throughput for passing on our knowledge to our fellow man, let alone across the membrane between companies and economies. In The Matrix, with a few seconds of fluttering eyelids, Keanu Reeves downloads an entire martial art into himself.

Give a man a kung-fu, you make him Neo. Teach a man to kung-fu, you make him John Wick.

That is the dream. Ask any parent in the midst of trying to get a three-year-old to eat their dinner without throwing half of it on the ground and they'll nod in agreement. What is our version of nature's DNA-and-cell school of knowledge compression and decompression?

One reality TV show I wish existed is one in which a variety of masters in their fields compete to take absolute novices from a standing start as far as possible in a finite period of time. Instead of Top Chef, in which the contestants are all successful chefs already, I want three master chefs to each train a handful of complete cooking dunces over several months, with the teacher and winning student sharing the pot.

Each season could feature a different skill. In another season, maybe the world's top three piano teachers have to train people who've never played the piano in their lives to sight-read Chopin. Or Bill Belichick and Nick Saban coach two youth league football teams to see which wins a season-finale scrimmage. I'm sure some of you will write to tell me some version of a show like this already exists, and I've seen some that come close, but almost all of them spend much too little time on the actual instruction methodology and process, and that's where all the mystery and interest lies.

In future posts, I'll delve into some of the limitations I've observed in how we pass information among people, companies, and economies, and from one generation to the next. For now, I recommend picking up Hidalgo's book, and I hope to hear from you about some of the ways you've found to help grow information around you in more efficient ways.

My most popular posts

I recently started collecting email addresses using MailChimp for those readers who want to receive email updates when I post here. Given my relatively low frequency of posts these days, especially compared to my heyday when I posted almost daily, and given the death of RSS, such an email list may have more value than it once did. You can sign up for that list from my About page.

I've yet to send an email to the list successfully, but let's hope this post will be the first to go out that route. Given that this would be the first post to that list, with perhaps some new readers, I thought it would be worth compiling some of my more popular posts in one place.

Determining what those are proved difficult, however. I never checked my analytics before, since this is just a hobby, and I realized when I went to the popular content panel on Squarespace that their data only goes back a month. I also don't have data from the Blogger or Movable Type eras of my blog stashed anywhere, and I never hooked up Google Analytics here.

A month's worth of data was better than nothing, as some of the more popular posts still get a noticeable flow of traffic each month, at least by my modest standards. I also ran a search on Twitter for my URL and used that as a proxy for social media popularity of my posts (and in the process, found some mentions I'd never seen before since they didn't include my Twitter handle; is there a way on Twitter to get a notification every time your domain is referenced?).

In compiling the list, I went back and reread these posts for the first time in ages and added a few thoughts on each.

  • Compress to Impress — my most recent post is the one that probably attracted most of the recent subscribers to my mailing list. I regret not including one of the most famous cinematic examples of rhetorical compression, from The Social Network, when Justin Timberlake's Sean Parker tells Jesse Eisenberg, "Drop the 'The.' Just Facebook. It's cleaner." Like much of the movie, it's probably made up (and also, why wasn't the movie titled just Social Network?), but it's still a good example of how movies almost always compress information into visually compact scenes. The reason people tend to like the book better than the movie adaptation in almost every case is that, like Jeff Bezos and his dislike of PowerPoint, people who see both the original and the compressed information flow feel condescended to and lied to by the latter. On the other hand, I could only make it through one and a half of the Game of Thrones novels, so I much prefer the TV show's compression of that story, even as I watch every episode with superfans who can spend hours explaining what I've missed, so it feels as if I've read the books after all.
  • Amazon, Apple, and the beauty of low margins — one of the great things about Apple is that it attracts many strong, independent critics online (one of my favorites being John Siracusa). The other FAMGA tech giants (Facebook, Amazon, Microsoft, Google) don't seem to have as many dedicated fans/analysts/critics online. Perhaps it was that void that helped this post on Amazon from 2012 to go broad (again, by my modest standards). Being able to operate with low margins is not, in and of itself, enough to be a moat. Anyone can lower their prices, and more generally, any company should be wary of imitating another company's high variance strategy, lest they forget all the others who did and went extinct (i.e., a unicorn is a unicorn because it's a unicorn, right?). Being able to operate with low margins with unparalleled operational efficiency, at massive scale globally, while delivering more SKUs in more shipments with more reliability and greater speed than any other retailer is a competitive moat. Not much has changed, by the way. Apple just entered the home voice-controlled speaker market with its announcement of the HomePod and is coming in from above, as expected, at $349, as the room under Amazon's price umbrella isn't attractive.
  • Amazon and the profitless business model fallacy — the second of my posts on Amazon to get a traffic spike. It's amusing to read some of the user comments on this piece and recall a time when every time I said anything positive about Amazon I'd be inundated with comments from Amazon shorts and haters. Which is the point of the post, that people outside of Amazon really misunderstood the business model. The skeptics have largely quieted down nowadays, and maybe the shorts lost so much money that they finally went in search of weaker prey, but in some ways I don't blame the naysayers. Much of their misreading of Amazon is the result of GAAP rules which really don't reveal enough to discern how much of a company's losses are due to investments in future businesses or just aggressive depreciation of assets. GAAP rules leave a lot of wiggle room to manipulate your numbers to mask underlying profitability, especially when you have a broad portfolio of businesses munged together into single line items on the income statement and balance sheet. This doesn't absolve professional analysts who should know better than to ignore unit economics, however. Deep economic analysis isn't a strength of your typical tech beat reporter, which may explain the rise of tech pundits who can fill that gap. I concluded the post by saying that Amazon's string of quarterly losses at the time should worry its competitors more than it should assure them. That seems to have come to fruition. Amazon went through a long transition period from having a few very large fulfillment centers to having many many more smaller ones distributed more broadly, but generally located near major metropolitan areas, to improve its ability to ship to customers more quickly and cheaply. Now that the shift has been completed for much of the U.S., you're seeing the power of the fully operational Death Star, or many tiny ones, so to speak.
  • Facebook hosting doesn't change things, the world already changed — the title feels clunky, but the analysis still holds up. I got beat up by some journalists over this piece for offering a banal recommendation for their malady (focus on offering differentiated content), but if the problem were so tractable it wouldn't be a problem.
  • The network's the thing — this is from 2015, and two things come to mind since I wrote it.
    • As back then, Instagram has continued to evolve and grow, and Twitter largely has not. Twitter did stop counting user handles against character limits and tried to alter its conversation UI to be more comprehensible, but the UI is still inscrutable to most. The biggest change, to an algorithmic rather than reverse chronological timeline, was an improvement, but of course Instagram had beaten them to that move as well. The broader point is still that the strength of any network lies most in the composition of its membership, and in that, Twitter and other networks that have seen flattening growth, like Snapchat or Pinterest, can take solace. Twitter is the social network for infovores like journalists, technorati, academics, and intellectual introverts, and that's a unique and influential group. Snapchat has great market share among U.S. millennials and teens, Pinterest among women. It may be hard for them to break out of those audiences, but those are wonderfully differentiated audiences, and it's also not easy for a giant like Facebook to cater to particular audiences when its network is so massive. Network scaling requires that a network reduce the surface area of the network exposed to each individual user, using strategies like algorithmic timelines, graph subdivision (e.g., subreddits), and personalization; otherwise networks run into reverse economies of scale in their user experience.
    • The other point that this post recalls is the danger of relying on any feature as a network moat. People give Instagram, Messenger, FB, and WhatsApp grief for copying Stories from Snapchat, but if a social network has to pin its future on any single feature, when features are trivial to replicate in this software age, that company has a dim future. The differentiator for a network is how its network uses a feature to strengthen the bonds of that network, not the feature itself. Be wary of hanging your hat on the overnight success of a feature the same way predators should be wary of mutations that offer temporary advantages over their prey. The Red Queen effect is real and relentless.
  • Tower of Babel — From earlier this year, and written at a time when I was quite depressed about a reversal in the quality of discourse online, and how the promise of connecting everyone via the internet had quickly seemed to lead us all into a local maximum (minimum?) of public interaction. I'm still bullish on the future, but when the utopian dreams of global connection run into the reality of humans' coalitional instincts and the resentment from global inequality, we've seen which is the more immovable object. Perhaps nothing expresses the state of modern discourse like waking up to see so many of my followers posting snarky responses to one of Trump's tweets. Feels good, accomplishes nothing, let's all settle for the catharsis of value signaling. I've been guilty of this, and we can do better.
  • Thermodynamic theory of evolution — actually, this isn't one of my most popular posts, but I'm obsessed with the second law of thermodynamics and exceptions to it in the universe. Modeling the world as information feels like something from the Matrix but it has reinvigorated my interest in the physical universe.
  • Cuisine and empire — on the elevation of food as scarce cultural signal over music. I'll always remember this post because Tyler Cowen linked to it from Marginal Revolution. Signaling theory is perhaps one of the three most influential ideas to have changed my thinking in the past decade. I would not underestimate its explanatory power in the rise of Tesla. Elon Musk and team made the first car that allowed wealthy people to signal their environmental values without having to also send a conflicting signal about their taste in cars. It's one example where actually driving one of the uglier, less expensive EVs probably would send the stronger signal, whereas generally the more expensive and useless a signal, the more effective it is.
  • Your site has a self-describing cadence — I'm fond of this one, though Hunter Walk has done so much more to point to this post than anyone that I feel like I should grant him a perpetual license to call it his own. It still holds true, almost every service and product I use online trains me how often to return. The only unpleasant part of rereading this is realizing how my low posting frequency has likely trained my readers to never visit my blog anymore.
  • Learning curves sloping up and down — probably ranks highly only because I have such a short window of data from Squarespace to examine, but I do think that companies built for the long run have to maintain a sense of the slope of their organization's learning curve at all times, especially in technology, where the pace of evolution and thus the frequency of existential decisions is heightened.
  • The paradox of loss aversion — more tech markets than ever are winner-take-all because the internet is the most powerful and scalable multiplier of network effects in the history of the world. Optimal strategy in winner-take-all contests differs quite a bit from much conventional business strategy, so best recognize when you're playing in one.
  • Federer and the Paradox of Skill — the paradox of skill is a term I first learned from Michael Mauboussin's great book The Success Equation. This post applied it to Roger Federer, and if he seems more at peace recently, now that he's older and more evenly matched in skill to other top players, it may be that he no longer feels subject to the outsized influence of luck as he did when he was a better player. In Silicon Valley, with all its high-achieving, brilliant people, understanding the paradox of skill may be essential to not feeling jealous of every random person around you who fell into a pool of money. The Paradox of Skill is a cousin to the Red Queen effect, which I referenced above and which tech workers of the Bay Area should familiarize themselves with. It explains so much of the tech sector, but also just living in the Bay Area. Every week I get a Curbed newsletter, and it always has a post titled "What $X will get you in San Francisco" with a walkthrough of a recent listing that you could afford on that amount of monthly rent. Over time they've had to elevate the dollar amount just to keep things interesting, or perhaps because what $2900 can rent you in SF was depressing its readers.

Having had this blog going off and on since 2001, I only skimmed through a fraction of the archives, but perhaps at some point I'll cringe and crawl back further to find other pieces that still seem relevant.

Acoustic pest detection

The U.S. Grain Inspection, Packers and Stockyards Administration’s (GIPSA) standard quality assessment method involves sieving and visually inspecting a one kilogram sample: their guidelines “consider grains infested if the representative sample contains two or more live weevils, or one live weevil and one or more other live insects injurious to stored grain, or two or more live insects injurious to stored grain.”
 
However, since the larvae of many stored product pests grow inside grain kernels, where, Fleurat-Lessard notes, their “population density may be ten times more numerous than free-living adults,” a visually inspected “clean” sample may actually be completely infested with rice weevil larvae. To look inside grains, laboratories use X-rays or resonance spectroscopy, but these techniques are too expensive and impractical to deploy in bulk grain lots.
 
But while rice weevil larvae are invisible, they are not inaudible: the “mean sound pressure” of rice weevil larvae feeding inside a wheat kernel is 23 dB, according to the USDA Agricultural Research Service. The idea, then, is that if you could somehow design sensitive-enough acoustic probes, combined with software to match the probes’ input against a database of field recordings, you might be able to monitor insect activity in stored grain automatically and detect infestations at the larval stage.
 

I had no clue such a thing as acoustic pest detection existed. Amazing.

Building a sound library of stored food insects was equally important – the field recordings on that Insect Noise in Stored Foodstuffs CD actually form the core of current acoustic pest detection databases. Years of research have gone into classifying the characteristic sonic signatures of different pest species at different stages in their lifecycles, to the point that a computer can now compare input from a grain silo’s acoustic sensor system against a library of field recordings and tell you whether the rice weevil larvae eating your wheat kernels are sixteen or eighteen days old.
 

The smartphone is a form of human augmentation, the latest version of the “bicycle for the mind” metaphor from Steve Jobs or whoever it originated with. I'm looking forward to more sensory augmentation in compact form factors in my lifetime. The ability to increase the sensitivity of my hearing and have it plug into a database of sounds for enhanced recognition would open up a whole new world. Camping would never be the same again.
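To make that matching step concrete, here's a minimal sketch of how probe input might be compared against a library of reference recordings: compute a coarse spectral fingerprint for each clip, then pick the closest reference by cosine similarity. The file names, labels, and simple nearest-neighbor approach are my own illustrative assumptions, not the actual pipeline behind GIPSA's or any commercial acoustic detection system.

```python
# Illustrative sketch only: fingerprint short audio clips and match a probe
# recording against a small library of labeled reference recordings.
# The file names and labels below are hypothetical.
import numpy as np
from scipy.io import wavfile


def spectral_fingerprint(path, n_bins=128):
    """Average magnitude spectrum of a recording, folded into n_bins and normalized."""
    rate, samples = wavfile.read(path)
    samples = samples.astype(np.float64)
    if samples.ndim > 1:  # mix multi-channel audio down to mono
        samples = samples.mean(axis=1)
    spectrum = np.abs(np.fft.rfft(samples))
    # Fold the spectrum into a fixed number of bins so clips of different
    # lengths produce comparable fingerprints.
    edges = np.linspace(0, len(spectrum), n_bins + 1, dtype=int)
    binned = np.array([spectrum[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])
    return binned / (np.linalg.norm(binned) + 1e-12)


def best_match(probe_path, library):
    """Return the library label closest to the probe by cosine similarity, plus all scores."""
    probe = spectral_fingerprint(probe_path)
    scores = {label: float(np.dot(probe, spectral_fingerprint(path)))
              for label, path in library.items()}
    return max(scores, key=scores.get), scores


# Hypothetical reference library: label -> path to a field recording.
library = {
    "rice_weevil_larva_day16": "ref/weevil_day16.wav",
    "rice_weevil_larva_day18": "ref/weevil_day18.wav",
    "background_silo_noise": "ref/silo_quiet.wav",
}

# label, scores = best_match("probe/silo_sensor_clip.wav", library)
```

A real system would need temporal features, noise rejection, and a far larger labeled library than this, but the shape of the problem is just that: fingerprint, compare, classify.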

How to spot a food critic

From an interview with Ruth Reichl.

There are very few food critics you can’t find a picture of. Go on the Internet and circulate them to your staff. I went out with a big deal critic this week, and I was kind of stunned that he was not recognized in the restaurant we went to. His picture is everywhere.
 
There are a few really obvious things. If you’ve got a four-top and you’ve got four different appetizers, four different main courses, and four different desserts, probably you’ve got a food critic there. Usually two people will want the Steak Diane, or something.
 
If people are passing the plates around, see who they’re going to. Every critic has to taste everything on the table. A sharp-eyed waiter will notice people are actually passing plates around or passing a bread plate with a taste to one person at the table.
 
I never got up and went to the bathroom, because it was another opportunity for someone who is not my waiter to spot me. But most critics go to the bathroom to take notes. If someone is going at the end of every course, it may be a giveaway that it’s a critic.
 
Another sign is if you see someone coming in twice at a time when no one wants to eat. When you’re a critic, if you can go at 5:30 you go at 5:30, and if you can go at 10:30 you go at 10:30. If you see the same person come in twice at 5:30 or 10:30, take a look at them. Why are they agreeing to come at odd times? Especially early reservations are a good giveaway.
 

All of us who aren't restaurant critics can read this as "how to impersonate a food critic."

However, in this day and age of Yelp, the real answer to the question of how to spot a food critic is to open your eyes. Everyone's a critic. It's no longer necessary to pretend to be a food critic to receive attentive service. It can be a bit much, these often entitled trauma narratives disguised as restaurant reviews on Yelp, but it's progress that discriminatory service is less of an issue.

This, from Reichl, seems like just sensible advice in general for dealing with criticism in this age since you can take slings and arrows from any direction if you so much as set up a presence on the internet.

What would you say to a restaurateur who’s thinking about responding to a bad review? 
 
I would follow the Danny Meyer model. Good review or bad, I never wrote a review that Danny Meyer didn’t send me a personal note, always thoughtful and not defensive. If it was a bad review, it was thank you for pointing this out, and I’ll deal with it. If it was a good review, he always added something to it. I think you have nothing to lose by writing a thoughtful note or response to a critic. You have everything to lose by writing a nasty note. The nastier the note you write, the more it will be picked up by social media and the worse it will be for you. No matter how angry you get about it, imagine a million people reading your response. Whining gets you nowhere.

Information tech and variety

Abstract:      
Using the food truck industry as the setting, we provide direct evidence for how information technology can complement consumption variety in cities by reducing spatial information frictions associated with locally produced goods. We document the following facts: 1) food trucks use technology to overcome a spatial information friction; 2) proliferation of technology is related to growth in food trucks; 3) food trucks use their mobility to respond to consumer taste-for-variety; and 4) growth in food trucks is positively correlated with growth in food expenditures away from home. Taken together, our results illustrate how information technology can provide a meaningful increase in variety for urban consumers.
 

Research paper titled Information Technology and Product Variety in the City: The Case of Food Trucks.

It's not just food variety that's increased thanks to information technology, though food trucks are one of the more peculiar instances. I lived in LA from 2006 to 2011, and that city's lower, flatter, more dispersed distribution of retail and people might have made it an optimal ground zero for the food truck boom.

Amazon has increased our retail variety expectations. The internet and the web have increased the variety of information we expect to find with a query typed into a search engine. Information technology and urban density form an intertwined network that overcomes much of the spatial friction of the past, which is why it's so odd to me that it's still so hard to find good versions of so many types of ethnic food in San Francisco.

The lady had dropped her napkin

The lady had dropped her napkin.
 
More accurately, she had hurled it to the floor in a fit of disillusionment, her small protest against the slow creep of mediocrity and missed cues during a four-hour dinner at Per Se that would cost the four of us close to $3,000. Some time later, a passing server picked up the napkin without pausing to see whose lap it was missing from, neatly embodying the oblivious sleepwalking that had pushed my guest to this point.
 
Such is Per Se’s mystique that I briefly wondered if the failure to bring her a new napkin could have been intentional. The restaurant’s identity, to the extent that it has one distinct from that of its owner and chef, Thomas Keller, is based on fastidiously minding the tiniest details. This is the place, after all, that brought in a ballet dancer to help servers slip around the tables with poise. So I had to consider the chance that the server was just making a thoughtful accommodation to a diner with a napkin allergy.
 
But in three meals this fall and winter, enough other things have gone awry in the kitchen and dining room to make that theory seem unlikely. Enough, also, to make the perception of Per Se as one of the country’s great restaurants, which I shared after visits in the past, appear out of date. Enough to suggest that the four-star rating it received from Sam Sifton in 2011, its most recent review in The New York Times, needs a hard look.
 

Pete Wells of the NYTimes drops Per Se from 4 stars to 2.

I have no idea if Wells is right or not, but I can't think of too many other food writers who can make a restaurant review as pleasurable to read. Writing about food is like writing about music; language can feel like an inadequate medium for describing something which we experience through our senses, bypassing the symbolic representations of words. Wells avoids those traps by, in large part, not trying to describe tastes.

The Hottest Restaurant of 2081

Matt Buchanan conjures an interview with New York's hottest chef...of the year 2081.

On a warm, very yellow November morning, I met the chef Paul Nova in front of his new restaurant, Farm & Table, which is finally set to open next week after two years of intensely secret research and development. 2081’s most anticipated new opening occupies the first flood-safe floor of a six-story trapezoid of condos, but it's a remarkable contrast to the checkerboard of glass and steel that wraps around the top five stories—bright, heavy wood doors open into a room of fifty seats that's lit by scavenged orange incandescent bulbs, littered with the occasional hunk of heirloom cast-iron industrial equipment. Otherwise, the space is a collection of all-wood everything, from wall to wall to wall—festooned with the occasional animal trophy, half of the species extinct—that looks and feels sturdy and knotted, not like re-composited bamboo or synthetics, but old, lived-in wood from trees that once grew tall and strong.
 
Nova’s new project is both of a piece and pointedly different from his first megahit restaurant. Toro! Toro! Toro! was a revival of the clubby, twentieth-century fin de siècle sushi restaurants where Nova’s exquisitely perfect reproductions of extinct fish—in terms of fidelity of texture and clarity of flavor, years beyond practically any other plant-based replication of seafood in the last decade—revealed him as a trailblazer in the medium of engineered protein. Predictably, it spawned wave after wave of imitators, and while no one has come close to his craftsmanship or success, the rumors are that with his second revivalist restaurant, Nova is pushing beyond optimized protein to a new horizon, one that has been uncharted for years: real meat.
 

I often ask people what common practice of today will be regarded by subsequent generations as horrific, because it's inevitable, isn't it? As much as I'd like to say retweeting praise, I'm more confident that we'll look back on our raising of animals in horrific conditions for our consumption as abhorrent.

That's not the only thing Buchanan imagines will be an opportunity for nostalgia in 2081.

The biggest aspect of it, besides the real food, will be real service. We're going a step further than Toro! Toro! Toro!, and you won't even interact with any software when you come in: We're going to have human hosts in these wonderful knit hats and chambray shirts and classic selvedge jeans who take you to your seat, another human who takes your order, and another who brings the food to you, and yet another who clears the table. I don't know any other restaurant that will have as many bodies as ours will, certainly not as carefully adorned in period dress.
 
You'll even get the bill written down on paper—we found a lot of these GREAT vintage Moleskine pads, very period—and you'll pay a separate small fee, like twenty percent, to the servers if they do a good job. (It sounds weird, but people used to do this routinely! We're including a keepsake booklet for every guest that explains how to figure out the amount.) We're even chucking dynamic pricing for this restaurant. The only things that'll be different than how it used to be back then is that you can't pay with paper like people used to, because of the blockchain, though if we could figure out a way to make that work, we totally would.

Cuisine and Empire

Great episode of the podcast EconTalk featuring guest Rachel Laudan, author of Cuisine and Empire, an instant purchase for me. Host Russ Roberts has a fascinating diversity of guests on his show, but always takes an interesting angle into the conversation, one driven largely, though not entirely, by economics.

Some of my favorite moments from this episode. First, on the history of the potato; think about how many ideas are packed into this short exchange.

Russ: And, just to stick with basics for a minute: At one point, quite surprising to me, quite late in the book, you mention the potato. I think of the potato as a very basic foodstuff. But you point out that the potato is a relatively late invention. Talk about its cultural significance and a little bit about its history. 
 
Guest: Well, the potato is one of a series of roots--roots in a culinary sense, that is, underground bits of plants that can be cooked into edible foods. They have--the roots have always been of less interest to civilized societies because they are so wet and heavy you cannot provision them [?] fit to use with roots. Now, the one exception or partial exception to this is the high Andes mountains where they did grow potatoes and use them from early on. But they developed an incredibly elaborate way of freeze drying them to make them light enough and storable enough to go into cities as well as combining them with maize, which by then was down there. So when the potato comes into Europe, it's an enormous cultural effort to integrate the potato into the European food system, because for anyone who lives in a settled society with cities, root-eating is a sign of basically being more like animals. Roots were animal food in Europe. And so basically the poor of Europe had to be bludgeoned into adopting the potato in the 17th and 18th century. 
 
Russ: It's a little hard to understand because I really love French fries, and it's hard to imagine how someone could resist this. But they didn't have French fries. Talk about what they had. 

Guest: Well, basically, fat is very expensive for most people. So French fries, until the 1960s, 1970s, well they weren't invented until the middle of the 19th century, late 19th century. But until the invention of frozen French fries in the 1960s and 1970s, French fries were for the elite. Only the richest people could afford the potatoes that were cooked in that much fat. And double-cooked in that fat--which is what you have to do for French fries. What you find in the 19th century, as fats become more available for a large bulk of the population is that potatoes become more acceptable. Because you can put butter on your boiled potatoes; you can layer potatoes with milk and cheese and make a gratin; you can bake them and add butter. And that fat makes them much, much more palatable. 

Russ: But the point you make in the book is that the potato that was first introduced--I think in the early 18th century-- 

Guest: Right. 

Russ: was bitter, and nothing like the Idaho baked potato that we might envision at a potato bar. 

Guest: No. I've been concentrating in talking to you on the cooking and processing side, but there was also this agricultural trick they had to pull off to turn a plant that lived 8,000, 10,000 feet in the Andes, where seasons are reversed from Northern Europe, into a plant that would grow successfully and be palatable in Europe and the United States. And that took 100 plus years. 

Russ: And that's true of a lot of the things that we eat, I assume. I assume that if we went back to the 15, 16, 1700s and looked at what they called a 'blank'--whatever blank is, we would find it almost unrecognizable and very unattractive. Is that fair? Or am I being too harsh? 

Guest: Yes. Very few fruits--there are a few: dates, grapes--are palatable [?] without breeding. But most fruits have been systematically bred over the centuries. Animals have been bred. Probably the only things that we regularly eat but taste as they would have done hundreds of years ago are fish of various kinds. But everything else is the result of human breeding. 

Russ: Yeah, the goal of fruit has been to make fruit more like an M&M, and it's working evidently. 

Guest: Exactly.
 

Many parallels to the invention and then diffusion of technology. First, it's available to just the aristocrats, wealthy, and/or elite. The cost of production comes down with scale, and then it's brought to the masses. Finally, the wealthy go in search of some other way to signal their status, to differentiate. It's one reason behind the rise of extravagant $250 prix fixe menus in which guests photograph each dish as if it were their newborn child.

Food has replaced music at the heart of the cultural conversation for so many, and I wonder if it's because food and dining still offer true scarcity whereas music is so freely available everywhere that it's become a poor signaling mechanism for status and taste. If you've eaten at Noma, you've had an experience a very tiny fraction of the world will be lucky enough to experience, whereas if you name any musical artist, I can likely find their music and be listening to it within a few mouse clicks. Legally, too, which removes even more of the cachet that came with illicit downloading, the thrill of being a digital bootlegger.

Once, it felt like watching music videos on MTV was a form of rebellion in plain sight. Nowadays, the channel doesn't play any music videos. Instead, we have dozens of food and cooking shows, even entire channels like The Food Network dedicated to the topic. Chefs have been elevated to the status of master craftsmen, with names that have risen above the status of their restaurants, and diners revere someone like Jiro of Jiro Dreams of Sushi fame the way a previous generation worshipped the guitar sound of a rock god like Jimi Hendrix.

The food scene today offers a seemingly never-ending supply of scarce experiences, ingredients, and dishes. Cronuts you have to wait in line for a few hours to get your hands on. Pop-up restaurants that serve only a few nights a week for a few weeks, then disappear forever. Restaurants that you have to sacrifice a goat to just to get a reservation, and then they'll actually take that goat you killed and prepare your entire dinner from it, nose to tail. A white truffle add-on that tacks $80 onto a single piece of cured hamachi, and oh, the truffle is only available for four weeks a year and came over on a gondola from Alba, Italy, and the hamachi is one of the last three members of its species so you know, you should probably try it before...oops, sorry, the chef says someone just ordered the last of it. Yep, it's that couple at the corner table, and that's the last plate that she's Instagramming right now.

It's not just the scarcity of the actual food that offers such signaling opportunities. You can generate your own scarcity just by having a broad palate. When it comes to dining, many people still have narrow bands of taste, so if you're from the Jonathan Gold school of adventurous dining, you can easily set yourself apart by ingesting something exotic, like tripe stew, or some part of an animal that most people didn't even know was edible and certainly wouldn't dream of consuming.

In more recent history, the tech world has spawned yet another branch of food religion with the invention of Soylent, the polar opposite of the foodie religion and its reverence for organic ingredients, elaborate preparation, and theatrical plating. Soylent is the food for people who find cooking and eating to be a waste of time, a complex job in need of simplification. It is dining function over form, with Soylent promising to deliver an exact and efficient dose of the nutrition we need as humans. I love food and dining with others too much to ever be an acolyte of this school, but that it exists is proof enough of how broad and diverse the world of food has become.

In contrast, punk rock and other formerly edgy genres of music have been assimilated by the mainstream to such an extent that flying your freak flag through your musical tastes is harder and harder. Ironically loving Taylor Swift or pop music has somehow become more iconoclastic than listening to some indie band. That Apple has so dominated the act of music consumption (through ubiquitous iPhones and iPods and white earbuds, TV commercials for said devices and the music services accessible through them, and the massively popular bands that play at their keynotes) has mainstreamed the very idea of listening to music.

Back to the Laudan episode of EconTalk. Here she is on how French high cuisine came to be the world's standard of fine dining.

Russ: You are what you eat, I guess, is an appealing idea, and to some extent true. But maybe not to the extent they used to believe. So, we have the British having a big influence on world cuisine at the end of the 19th century. Somehow, French cuisine becomes the standard of sophistication and high dining. How did that happen? And it still persists, to some extent. It's lost some of its caché, I'd say in the last 50 years. But it still remains a standard of high dining. How did that come about and why was it important? 
 
Guest: I think it's first important to say it's French high cuisine, because the high cuisine of France that became the international standard was something that most French people had never seen and never ate. It did not come, swell up from the peasantry. There's a slightly complicated story about what happened around 1650 when you get a rapid political change and the establishment of, after the Peace of Westphalia, a series of nations in Europe, on supposedly equal terms, combined with a shift of the scientific revolution and the Protestant revolution. And in complicated ways these would act together to produce a new cuisine that the world had never seen before. It's a really striking example of radical and rapid culinary change. The old cuisines of spiced food that--ultimately stemming from Persia but that had really influenced China, dominated in the high cuisine of India, right across to Southern Europe, were displaced by this new Northern European cuisine. And the people who developed it in its most elaborate form, because they had the greatest resources--the richest courts--were the French. And they developed it really terribly rapidly between 1650 and 1700. And that's the point where diplomacy is become important because of this national state system. And the national state system needs something to use for diplomatic dinners, to demonstrate modernity, Europeanness against the Persian-type cuisines that existed before. And so French high cuisine becomes the cuisine of European diplomacy in the 18th century, and then of international diplomacy and the international elite in the 19th century. So that by 1880 you could go to Tokyo, you could go to Santiago de Chile, you could go to Sydney, you could go to San Francisco and the thing to be eating was, if you were really rich or you were really high in politics was high French cuisine.
 

I think French high cuisine is on the decline from its perch atop the world's dining hierarchy, at least among the most passionate U.S.-based food lovers. Japanese cuisine is a strong competitor, and the overfishing of the world's oceans has added an element of barbarism, but also scarcity, to eating sushi that gives it an extra thrill. That it is perceived as healthier than French high cuisine, with its butter-based sauces and rich, fatty cuts of meat, makes Japanese cuisine better suited to carry the banner forward in this decade, in which a healthy diet has become a high-status problem. For a variety of reasons, at its high end sushi can also justify the nosebleed prices that people expect from high-status symbols and pastimes.

In the first world, access to enough food to survive is no longer an issue, and so cooking has ascended into the realm of art. Some meals I've had in the past few years are as much performance and theater as they are a way of refueling. With our insatiable desire for narrative, we've enlisted a meal out as a story we first consume and then tell about ourselves.

Given America's relative youth as a nation, our national dining habits have always been a cultural battleground. The U.S. came about long after the age when food was a scarce commodity, so most of our wrestling with its meaning has been first around its symbolic value, and more recently about how to optimize our relative consumption of different types of food like carbohydrates, fat, and protein.

If any food symbolizes American dining today, it's the hamburger. Laudan can take such a humble food item and connect it forward and back through our nation's culinary history.

Guest: Well, if I may I'd like to back up a tiny bit about presidents serving French dinners, because the American presidency has had a terrible time deciding what to do at diplomatic dinners from the get-go. There were those, like Jefferson, who said we've got to be part of international culture as well as the economy, and we should go with high French cuisine. But there is also this extraordinarily strong republican--with a small 'r'--tradition in America that's part of what the Revolution is about. And the republican strain in American thought said very emphatically that, 'No, we do not want high French cuisine. We do not want aristocratic dining. That is not appropriate. And they looked back to the Roman republic and to the Dutch republic and to other republican movements in Europe and said, 'What we need is a decent cuisine for all citizens.' And that is very much the origin of Thanksgiving, which is not a fancy French dinner for diplomats but a dinner that essentially all Americans can afford and can cook, of American ingredients. It's a kind of striking symbol of the republican tradition exemplified in an American custom, and was deliberately designed to be so. But what happened--I mean the hamburger is just sort of amazing. People say, 'Well, the British had fish and chips.' Well, fish and chips don't cut it, because fish and chips are not this beef, bread, French fry phenomenon. And what Americans managed to do beginning with White Tower but pulled off triumphantly by McDonald's is to make the food of aspiration worldwide something that in America everybody can afford, and in much of the rest of the world the middle class can afford, namely a kind of ersatz piece of roast beef or steak that is a beef hamburger on a piece of white bread with a bit of fresh vegetable out of season, even in the winter, with a sauce which is part of high cuisine, with French fries, which, you know, are popular--which become really widespread with McDonald's and the frozen French fry, which Simplot perfects--until then the French had said it was the apex of French civilized food--and washed down either with a sparkling cold drink or with a milkshake, sweet and rich and cold and foamy. That is just--it makes the food of aspiration accessible to all, and you have it in this brightly lit dining room that is clean, that you have access to. I think only if we understand how McDonald's taps into all these competing traditions that go back so deep in our culture can we understand why it became such a kind of fire point for and against modern American food.
 

If McDonald's has been the 600 lb. gorilla of American dining in the past several decades, perhaps Chipotle is a more suitable totem of this current age of our culinary anxieties and obsessions, if one even exists anymore given our increasingly diverse dining habits. Chipotle serves an ethnic food, derived from one of our largest immigrant populations, but transformed into something palatable for the masses, claiming to be sourced with only GMO-free, organic ingredients, served in franchises that are clean if somewhat generic, available in the places we inhabit, from cities to suburbs to highway stops. It's not food prepared by ourselves, and we eat our burrito bowls at our desks, alone, a fork in one hand, our smartphones in the other, scrolling one of our many feeds while we feed our stomachs. Chipotle represents something America does better than any country, this assimilation of the world's people and ideas and then a subsequent radiation of that back out to the world in a form more agreeable to the masses. Harvey Weinstein used to do the same to niche independent films.

Laudan's most controversial opinion, though one that is likely quite widespread among economists, is that our reverence for natural food and distrust of industrialized, processed food is the reverse of what it should be. Her piece In Praise of Fast Food was published years ago but is as relevant as ever given current attitudes.

As a historian I cannot accept the account of the past implied by this movement: the sunny, rural days of yore contrasted with the gray industrial present. It gains credence not from scholarship but from evocative dichotomies: fresh and natural versus processed and preserved; local versus global; slow versus fast; artisanal and traditional versus urban and industrial; healthful versus contaminated. History shows, I believe, that the Luddites have things back to front.
 
That food should be fresh and natural has become an article of faith. It comes as something of a shock to realize that this is a latter-day creed.

...
 

Eating fresh, natural food was regarded with suspicion verging on horror; only the uncivilized, the poor, and the starving resorted to it. When the ancient Greeks took it as a sign of bad times if people were driven to eat greens and root vegetables, they were rehearsing common wisdom. Happiness was not a verdant Garden of Eden abounding in fresh fruits, but a securely locked storehouse jammed with preserved, processed foods.
 

As for slow food, it is easy to wax nostalgic about a time when families and friends met to relax over delicious food, and to forget that, far from being an invention of the late 20th century, fast food has been a mainstay of every society. Hunters tracking their prey, shepherds tending their flocks, soldiers on campaign, and farmers rushing to get in the harvest all needed food that could be eaten quickly and away from home. The Greeks roasted barley and ground it into a meal to eat straight or mixed with water, milk, or butter (as Tibetans still do), while the Aztecs ground roasted maize and mixed it with water (as Mexicans still do).
 

What about the idea that the best food was country food, handmade by artisans? That food came from the country goes without saying. The presumed corollary—that country people ate better than city dwellers—does not. Few who worked the land were independent peasants baking their own bread and salting down their own pig. Most were burdened with heavy taxes and rents paid in kind (that is, food); or worse, they were indentured, serfs, or slaves. They subsisted on what was left over, getting by on thin gruels and gritty flatbreads.
 

The dishes we call ethnic and assume to be of peasant origin were invented for the urban, or at least urbane, aristocrats who collected the surplus. This is as true of the lasagna of northern Italy as it is of the chicken korma of Mughal Delhi, the moo shu pork of imperial China, and the pilafs, stuffed vegetables, and baklava of the great Ottoman palace in Istanbul. Cities have always enjoyed the best food and have invariably been the focal points of culinary innovation.

I've excerpted Laudan heavily here, but it's only a fraction of her output, not just on the podcast episode but in writing. All of what I've read thus far is fascinating.

This tendency to romanticize the past, to imagine it as a pastoral paradise of harmony between people and nature and each other, is an odd human trait. Dissatisfied with the present, we look to the past for an answer, as far back as our caveman days when it comes to things like the paleo diet, even if we hardly realize just how much harder and more treacherous and brutal life was back then. Your pastoral fantasy? Here's how it ends, with you stepping in cow shit, contracting cholera, and dying after several feverish nights in an unheated bedroom, at the age of 20.

As for the future? Well, most of our most popular visions of the distant future are dystopic, either tales of stragglers trying to survive in a post-apocalyptic wasteland, or prophecies of human enslavement by AI run amok. When our society does survive into the next century in these stories, it has often morphed into a nightmarish surveillance state, where all human diversity in thought and being has been stamped out. You are both funny and headstrong? You are...divergent. Still one of the silliest ideas for a book and movie in recent memory.

All this despite a steady rise in the quality of life throughout human history, with increasing tolerance and leisure and life expectancy. All evidence is that the arc of the moral universe is long, but it bends towards justice. From scarcity to abundance. And taller, healthier people. I myself am very happy I wasn't born in an age when I'd have to be a farmer just to feed myself. 

Watch enough movies, though, and you'd think that the arc of human life bends into a black hole, an apocalypse, from which we have to start over again, with the seeds of the rebirth of civilization being resown by a lone hero, most often a white male, and the beautiful woman who grew to love him sometime during the second of the three acts of the script.

Beware your nostalgia for an age you never lived in. It was probably worse then than it is now. Given our increasing resolution of detail in recorded history, I wonder if future generations will be more immune to nostalgia. I'm somewhat hopeful. I told my nephew recently that when I was his age, I had no iPhone or iPad. I hadn't even seen a desktop computer yet, let alone the web.

“Whaaaaaaaat?” he said. “That sounds terrible.” Then he went back to playing a game on his mom's iPhone.