Catch up

It has been some time since I posted here. Outside of lots of meetings around the country and some trips with family and friends, a few creative projects have stolen the lion's share of my free time.

While I won't publish some Medium screed on how spending less time on social media transformed my life, it is an unavoidable truth that one's free time is a zero sum game. For infovores, Twitter is a bit like heroin, and for all the other gaps in one's time, other social media apps are like some Cerebro-like viscous membrane that gives off a mild contact high from the vibrations of ambient social intimacy.

As presently constructed, though, all these apps are well past the point of diminishing returns for me, and so less time spent there, redirected offline, has been good for my general productivity and well-being. I'm not certain, but it seems it's not so much a question of mix as of finding the optimal frequency for the various activities in my life. To take one example, I almost certainly see huge returns from shifting conversations with folks on Twitter offline.

Some of that time has been spent continuing to wend my way through Emily Wilson's brilliant new translation of The Odyssey. What's fascinating is how it remains resonant with modern times, speaking to its universality. Ironically, what it reminded me of, perhaps because the topic was still top of mind, was social media.

Take the famous episode in which Odysseus and his men sail past the Sirens and then between Scylla and Charybdis. What surprised me was how short the entire episode is, only occupying a few pages in Book 12, titled "Difficult Choices."

The goddess Circe gives Odysseus a preview of what he and his men are about to encounter.

First you will reach the Sirens, who bewitch
all passersby. If anyone goes near them
in ignorance, and listens to their voices,
that man will never travel to his home,
and never make his wife and children happy
to have him back with them again.

"If anyone goes near them in ignorance, and listens to their voices..." But this is what happens on social media all the time! Never have we dilettantes in just about every subject had such a forum to lord our "expertise" over others. Circe warned us long ago what would happen, how insufferable we'd all be to our loved ones.

The song of the Sirens is irresistible, and Circe knows it, so she advises Odysseus thus:

...Around about them lie
great heaps of men, flesh rotting from their bones,
their skin all shriveled up. Use wax to plug
your sailors’ ears as you row past, so they
are deaf to them. But if you wish to hear them,
your men must fasten you to your ship’s mast
by hand and foot, straight upright, with tight ropes.
So bound, you can enjoy the Sirens’ song.

It's as if Circe is speaking to my irresistible urge to open and read Twitter at the slightest hint of boredom, warning me of the great heaps of men, flesh rotting from their bones, who'd done so before me. As for her firm guidance that Odysseus be bound to a mast? That's just the antecedent to today's "Never tweet."

Thus, in my moments of weakness, I open Twitter but bind myself to a metaphoric ship's mast so I cannot reply to the trolls, as tempting as it is to join the chorus of people letting their outrage loose. Some days it feels to me that half my timeline is just people posting witty and savage rejoinders to Tomi Lahren or Trump or Dana Loesch and so on. Twitter should just move all of that to a separate tab; it has become a sort of performance art.

Alexis Madrigal wrote about how turning off retweets in his Twitter timeline improved it for him.

Retweets make up more than a quarter of all tweets. When they disappeared, my feed had less punch-the-button outrage. Fewer mean screenshots of somebody saying precisely the wrong thing. Less repetition of big, big news. Fewer memes I’d already seen a hundred times. Less breathlessness. And more of what the people I follow were actually thinking about, reading, and doing. It’s still not perfect, but it’s much better.

Farhad Manjoo wrote that for two months he got his news only from print.

It has been life changing. Turning off the buzzing breaking-news machine I carry in my pocket was like unshackling myself from a monster who had me on speed dial, always ready to break into my day with half-baked bulletins.
Now I am not just less anxious and less addicted to the news, I am more widely informed (though there are some blind spots). And I’m embarrassed about how much free time I have — in two months, I managed to read half a dozen books, took up pottery and (I think) became a more attentive husband and father.

Is this much different than Circe urging Odysseus to plug his men's ears with wax? Homer got there first. I am weak, so I have not gone full cold turkey on social media. Instead, I am still occasionally there, tied to the mast, flailing against self-administered bonds, listening to the Siren song. May the gods help me.

[Wilson herself recently posted a series of tweets observing something else intriguing about the Sirens: the popular notion that they were sexy seductresses has no basis in the text. Reading Wilson's translation, you realize there is no mention of the Sirens' appearance. The seduction is all in their song, and that makes them an even more appropriate metaphor for social media.]

After the Sirens, Odysseus and his men meet even more formidable adversaries. Circe foretells of an inescapable passage between Scylla and Charybdis, the original rock and a hard place. There, she says, it's best to pick the lesser of two evils and to sail closer to Scylla, a twelve-legged six-headed monster who will eat six of his men. It sounds terrible, but the alternative is allowing Charybdis to swallow his entire ship. For my money, it's the most famous leadership parable about minimizing one's losses.

Odysseus, upon hearing this, pleads to no avail.

I answered, ‘Goddess, please,
tell me the truth: is there no other way?
Or can I somehow circumvent Charybdis
and stop that Scylla when she tries to kill
my men?’
The goddess answered, ‘No, you fool!
Your mind is still obsessed with deeds of war.
But now you must surrender to the gods.
She is not mortal. She is deathless evil,
terrible, wild and cruel. You cannot fight her.
The best solution and the only way
is flight.

Is Circe the best life coach, or the best life coach? She's the original Tony Robbins.

Can you read social media and emerge with your senses and emotional well-being intact? "No, you fool!" We may not be able to avoid it, but at least we can heed Circe's words. "The best solution and the only way is flight."

Odysseus and his men proceed as Circe warns, and, tied to the mast, our titular hero hears the song of the Sirens.

‘Odysseus! Come here! You are well-known
from many stories! Glory of the Greeks!
Now stop your ship and listen to our voices.
All those who pass this way hear honeyed song,
poured from our mouths. The music brings them joy,
and they go on their way with greater knowledge,
since we know everything the Greeks and Trojans
suffered in Troy, by gods’ will; and we know
whatever happens anywhere on earth.’

Their song was so melodious, I longed
to listen more. I told my men to free me.
I scowled at them, but they kept rowing on.

What is this but the siren song of Twitter and Facebook and Instagram and all the other addictive apps on our phones, luring us with the comforting and self-affirming dopamine hits of likes and followers and readers? "...they go on their way with greater knowledge since we know everything...and we know whatever happens anywhere on earth" is nothing if not the tagline for Twitter written in another age (copyright Homer).

"Their song was so melodious, I longed to listen more." My Siren is my iPhone, always within arm's reach, always with the promise of "greater knowledge." Have I been disciplined and avoided its call? Not always. And like Odysseus, who does end up losing six men to Scylla, I've lost a few chunks of flesh along the way.

I do have a few long posts incubating, however, which I hope to finish soon. In the meantime, a bit of catch up.


I was lucky enough to be invited onto two podcasts, both of which were recorded in person during my recent trip to New York City for meetings and to visit family. The first was Khe Hy's Rad Awakenings podcast. The second was the Internet History Podcast hosted by Brian McCullough. I didn't have a book or anything to promote, so they're both a bit free-ranging, as I am here. Check them out if you're interested and let me know what you think.

It's fascinating to watch the explosion in podcasts, and the reason becomes somewhat apparent when you see how easy it is to record one with just a computer and two small microphones. Given how lousy the economics of text are, and given how challenging it is to produce compelling video, the most lucrative vector for media companies is not a pivot to video but a pivot to podcasting. Every day, it seems, a media company releases a new daily news recap podcast.

In time, the marginal return will decline, but perhaps not before we see a second wave of growth in podcasting's total addressable market (TAM) from improved discovery (the first explosion in podcasting TAM was, of course, the rise of the smartphone, which opened up a ton of podcast surface area in one's daily schedule, most notably in commutes).


I kid not, one of the most fascinating videos I've watched since I last posted here was this episode of Trashcast discussing Logan Paul. For some reason the original version of this video was pulled by YouTube so as of right now, this newly uploaded version has all of...63 views. It taught me more about the Logan Paul phenomenon than anything else I've read or watched, and its presentation is of a style that is extremely meta, like a young person's Vox explainer.

The temptation, when something like the Logan Paul scandal drops, is to post "Who the f*** is [Logan Paul]?" on Twitter or Facebook. I saw probably a dozen or more such posts, and while I resisted the urge, I myself had no idea who Logan Paul was until he was the latest person to take his turn in the public pillory.

I'm less interested in Logan Paul than I am in all the superstar vloggers who can turn out audiences of tens of thousands of young kids everywhere they go. Their particular pull on children of that age, the visual grammar of their content, the syntax of their speech, their distribution frequency, it's all quite instructive.

One can read near-future sci-fi, or one can just spend some time with some of today's youth, who already live in the near-future. The latter is much more vivid. I spent several hours watching my nephews play Fortnite and message on Snapchat and surf on Instagram while in NYC recently, and it was as if I'd crossed over through some alien border into a cultural Shimmer. As with Natalie Portman, every one of my visits there leaves me altered in some inexorable ways.


One of my recent (okay, not so recent) posts was on the shift in entertainment that has come with the move to infinite content supply. I opened with a brief discussion of Will Smith.

A few readers sent me a link to this excerpt from Ben Fritz's new book The Big Picture: The Fight For the Future of Movies. The excerpt is about the rise and fall of the A-List movie stars Will Smith and Adam Sandler during Sony's motion picture heyday in the 2000s.

Of Sony's top 50 movies from 2000 to 2016, more than two-thirds were "star vehicles," in which the talent involved was as big as or bigger than the movie title or the franchise. More than one-third came from just two people: Will Smith and Adam Sandler. Movies they starred in or produced grossed $3.7 billion from 2000 to 2015, generating 20 percent of Sony Pictures' domestic gross and 23 percent of its profits. No other studio was as reliant on just two actors. Their rise and fall illustrate what has happened to movie stars in Hollywood.
Sony paid both stars handsomely for their consistent success: $20 million against 20 percent of the gross receipts, whichever was higher, was their standard. They also received as much as $5 million against 5 percent for their production companies, where they employed family and friends. Sony also provided Overbrook and Sandler's Happy Madison with a generous overhead to cover expenses — worth about $4 million per year. To top it off, Sandler and Smith enjoyed the perks of the luxe studio life. Flights on a corporate jet were common. On occasion, Smith's entourage necessitated the use of two jets for travel to premieres. Knowing that Sandler was a huge sports fan, Sony regularly sent him and his pals to the Super Bowl to do publicity. Back at the Sony lot, the basketball court was renamed Happy Madison Square Garden in the star's honor. When anybody questioned the endless indulgence given to Sandler and Smith, Sony executives had a standard answer: "Will and Adam bought our houses."

I wrote:

I'm wary of all conclusions drawn about media in the scarcity age, including the idea that people went to see movies because of movie stars. It's not that Will Smith isn't charismatic. He is. But I suspect Will Smith was in a lot of hits in the age of scarcity in large part because there weren't a lot of other entertainment options vying for people's attention when Independence Day or something of its ilk came out, like clockwork, to launch the summer blockbuster season.
The same goes for the general idea that any one star was ever the chief engine for a film's box office. If the idea that people go see a movie just to see any one star was never actually true, we can stop holding the modern generation of movie stars to an impossible standard.

Of course, this is a counterfactual, so hard to establish conclusively. Perhaps, in the age of scarcity, A-List stars really did exist. Regardless, that age has passed, and banking on its continued viability is a shaky proposition at best.

A further thought, which I first made in a presentation at a Greylock Product Summit a few years back, is that the rising supply of content means that exceeding the noise floor favors a different type of film or television property. In the heyday of the three and eventually four major networks, the golden age of broadcast television, the dream show was one with broad appeal. The economics of television were heavily dependent on advertising revenue, and the larger the audience, the larger the revenue. A show like The Cosby Show or The Beverly Hillbillies, which attracted a broad audience through a sort of non-offensive if somewhat bland sensibility, was the dream.

Again, though, it's important to recall how scarce entertainment options were in that age relative to today's cornucopia. It isn't just the economics of carriage fees and pay TV that helped drive the rise of much more distinctive and niche appeal shows like Mad Men; it's what you'd expect when the overall information noise floor rises. The risk of trying to make a broad appeal show is that it is mildly appealing to many people but not strongly appealing to any audience segment, and that is a losing strategy if the noise floor is so high that only high appeal shows can poke their head above it.

Is it any surprise that two of the most successful showrunners in recent history are Shonda Rhimes and Ryan Murphy? Watch any of their programs and, whether you like them or not, you won't fault them for pulling their punches. Scandal, How to Get Away With Murder, American Horror Story, Nip/Tuck, Glee, The People Vs. O.J. Simpson, these are programs that are engineered to mash people's buttons.

Two of the bigger hits of recent memory that aren't from either of those two showrunners are Empire and This is Us. The former was, like many of Rhimes and Murphy's shows, crazy. Double crosses, murders, affairs, all of it. Cray cray. As for This is Us, I watched two episodes with my sister-in-law while in NYC, and while it might seem to fit the template of a more classic, broad-appeal broadcast network show, it is bonkers in its own way. Its genre is melodrama, and every episode is designed as a tear-jerker. Every one. No exceptions. If you're a writer on that show and your episode doesn't make the audience cry, they fire you, and then everyone has a good cry over it.

In a world of infinite content, the ideal bundle, then, isn't a basket of broadly appealing programs, something that may be impossible to engineer anymore. Instead, it's a bundle of shows with very strong niche appeal to particular but different audience segments. This, as many of you will note, is not some new concept. The conditions have just made it a more critical one.

In the Hollywood Reporter, Marc Bernardin observes the success of films like Wonder Woman, Get Out, Black Panther, and Coco, and notes:

No, the reason we're in the midst of a halcyon age of representational storytelling that's resonating on a historic scale is that a far more diverse pool of storytellers — black filmmakers, female filmmakers, Asian filmmakers — are getting empowered to tell their stories their way with all the resources usually reserved for white, male creatives. Black Panther isn't just the story of a handsome prince taking the throne of a fictional, advanced African nation, it's also the story of a filmmaker reckoning with the disconnect that lives in the hyphen between "African" and "American." It's about a man who grew up around women of strength and grace and power who didn't think twice about populating both his art and his set with those same kinds of women. It's about a kid from Oakland dreaming dreams that the world told him he couldn't.
Similarly, Thor: Ragnarok would never have been both a balls-out buddy comedy with a perfectly timed anus joke and a trenchant examination of the paved-over sins of colonial expansion without the half-Maori New Zealander Taika Waititi at the helm. And we have proof positive of how Jenkins' centering of Diana in Wonder Woman is different from Zack Snyder's treatment of the same character in Justice League: More openness, innocence and resolve … fewer gratuitous shots of Gal Gadot's ass.
And there's no one who could've conceived of Get Out but Peele, who spent years exploring the ways race and genre collide on TV's Key & Peele, is a student of horror and has definitely found himself navigating the frothy waters of meeting a white girlfriend's parents for the first time.
The way forward isn't simply to decide to greenlight stories about diverse people. It's to cultivate a generation of writers, directors and producers who see the world through their own unique lens and then bring that perspective to bear. If Marvel didn't have someone like Nate Moore in its producer ranks, someone who knew who T'Challa was and what he could mean, you'd never get a Black Panther. If Pixar didn't elevate story artist Adrian Molina to co-director and co-writer, Coco might've seemed more like a Day of the Dead theme park ride than a haunting, heartbreaking exaltation of Dia de los Muertos.
What audiences are responding to, in every movie that's popped in the past year, is a sense of truth. Just as we can tell, somehow, when CG is spackled on a little too heavily, we can sense when something feels inauthentic. We can tell the difference between 12 Years a Slave and Amistad, between The Joy Luck Club and The Last Samurai, between Selma and Mississippi Burning. One of them feels true — and truth, ultimately, is what makes something universal.

I believe in the power of film as a medium, and so it's no surprise that I believe in the underrated power of representation. It's not underrated by those of us who've never seen ourselves on screen, but I recall talking to some white men about Wonder Woman, and they remarked how they didn't see what the fuss was about. I couldn't help but think of the group of women I saw Wonder Woman with; half of them left the theater in tears, the experience of watching a woman on screen was so viscerally moving. I think of the Mexican family seated next to me at a screening of Coco, who spent half the film sobbing audibly.

The only Asian men, let alone Chinese men, I saw on screen growing up were Mickey Rooney's bucktoothed caricature of a Japanese man in Breakfast at Tiffany's and Long Duck Dong in Sixteen Candles. If you've ever wondered why Bruce Lee is a near deity to Chinese men, it's simply that he was the only powerful representation of themselves they ever saw in American entertainment.

The archetype of almost every hero and leader I saw growing up was a white man, and it continues today, where the leadership team of almost every company in Silicon Valley is dominated by white men. Someone asked me once whether I could name a single Chinese CEO of a tech company who had been promoted into the role, rather than having founded the company. I couldn't think of one.

It's a blessing to me, then, that the age of infinite content has made culturally specific and truthful representation good business practice for Hollywood. I'd prefer we had arrived by some more progressive route, but, as Russian writer Viktor Pelevin has noted, the chief protagonist of pop culture today is a briefcase of money. We've seen many a film with a whitewashed cast bomb recently, and it doesn't strike me as a coincidence. When we have a near-infinite supply of content at our disposal, no one needs to settle for the bland, the milquetoast, the emotionally false.


In that same post about the shifting dynamics of entertainment in the age of abundance, I wrote about the Instagram account House of Highlights. Fast Company cited my post in a recent article about the account.

This past week, I've been watching carefully to see which outlet picks up March Madness buzzer-beaters the quickest, and, more often than not, House of Highlights is where I see the first video replay.

Social networks go through several phases of evolution on their path to maturity. First, they need to get people to use the service even when the graph is sparse. This is the single-player value problem. If they solve that, the next efficient evolution is some sort of feed, usually populated with all the content from people you follow. It's the easiest way to increase the surface area for each user, and it's the easiest way to amplify your service's network effects. The only way to increase a user's frequency of usage is to increase the volume of content served to them, and aggregating content from all the people you follow is a simple way to personalize the feed, to create value for the lurkers who want to watch but not post, and to send addictive feedback signals to the creators of that content. It's the tried and true social network positive feedback loop.

Then, at some point, if the network is successful enough, the problem becomes one of too much content. This is typically when networks move from a chronological, exhaustive feed to an algorithmic feed on some relevance dimension. It's typically when some segment of early adopters complains about the loss of said chronological feed.
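
The shift is easy to see in miniature. Here's a toy sketch of the two feed types; the `Post` fields and the decay formula are my own invention for illustration, not any network's actual ranking signals:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative post record; fields are hypothetical.
@dataclass
class Post:
    author: str
    posted_at: datetime
    likes: int
    follows_author: bool  # does this viewer follow the author?

def chronological_feed(posts):
    """The exhaustive feed: everything from people you follow, newest first."""
    return sorted(
        (p for p in posts if p.follows_author),
        key=lambda p: p.posted_at,
        reverse=True,
    )

def algorithmic_feed(posts, now):
    """Rank on a relevance dimension: here, engagement decayed by age."""
    def score(p):
        age_hours = (now - p.posted_at).total_seconds() / 3600
        return p.likes / (1 + age_hours)  # recency-decayed engagement
    return sorted(posts, key=score, reverse=True)
```

The real systems weigh hundreds of signals, but the structural change is just this: the sort key moves from a timestamp to a score, and the follow graph stops being a hard filter.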

The algorithmic feed is social networks' counterpart to Inbox Zero. Social networks realized that an "inbox zero" solution to social network overload would never work; too few people would do the necessary work. Arguably, Inbox Zero has about the same adoption issue with regards to email.

GMail has a version of the algorithmic email inbox in its Important mail filter, and various other programs have tried to filter out unimportant email from the inbox using a variety of strategies, but I'd be interested to see software go even a step further and prescribe more drastic measures for solving the signal-to-noise problem of that medium. If you're rich and powerful, that solution is a stern administrative assistant, but we've yet to scale that with AI. The closest I've come is my GMail spam filter. I went in there recently and found a bunch of email I had actually subscribed to, but while the false positives were mildly annoying, I couldn't argue my life was harmed in any meaningful way. If you're waiting to hear from me, you're probably in my GMail spam folder; for some reason it's become increasingly aggressive.

Content services tend to try their own filtering solutions, tailored to their medium. Video streaming services use some mix of personalized and generic categorical recommendations to populate their interfaces, while news sites lean towards some matrix of chronology and importance overlaid with light categorization. Common to all of these is an acknowledgment that users don't tend to browse sideways through interfaces when exploring through the limited screen real estate of the smartphone screen, so maximizing relevance on a single infinitely scrolling interface window is the most profitable vector. Is it any surprise every video service seems to have autoplay turned on by default now?

This is all a roundabout way to say that House of Highlights will someday soon bump against the limitations of the single news feed, despite all of that interface's advantages in aggregating eyeballs for content consumption and advertising on a smartphone screen. Like all providers, House of Highlights depends on the algorithm to push its content to people at the right time, and on those users to pull the content. I suspect the next frontier for all these large and mature social networks is additional levels of in-feed structure.

We've already seen glimpses. The idea of stories, which made their first appearance in Snapchat, solves the supply-side problem of social media. That is, in an exhaustive chronological feed, many users are shy about flooding the feed. This caps content supply.

Stories, by putting the onus on the viewer to pull the story, unlock a flood of content. Post frequently, guilt-free! I'd guess that demand for that content is limited, but paired with the regular algorithmic or chronological feed, you essentially create two marketplaces of content in one interface.

Instagram now allows multiple photos per post, another example of added structure. But for now, the algorithms largely restrict themselves to either choosing to display a piece of content or not. It's all candidate selection. 
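
To make "candidate selection" concrete: today's algorithms are essentially a yes/no gate per item, and the largely unexplored second stage is choosing how to present each item. A toy sketch, with wholly hypothetical fields and template names:

```python
from dataclasses import dataclass

# Illustrative post record; fields are hypothetical.
@dataclass
class Post:
    author: str
    likes: int
    photo_count: int = 1
    is_story: bool = False

def candidate_selection(posts, following):
    """Stage one, where feeds mostly stop today: show the item or don't."""
    return [p for p in posts if p.likes > 0 or p.author in following]

def choose_template(post):
    """Stage two, mostly unexplored: pick a presentation format per item."""
    if post.is_story:
        return "story_tray"
    if post.photo_count > 1:
        return "carousel"
    return "single_card"
```

The interesting evolution is in the second function: once the algorithm can pick among presentation formats rather than just filtering, the feed stops being a single undifferentiated column.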

I suspect the next breakthrough for all our most used mobile apps, all of which have achieved massive scale, from Facebook to Instagram to Twitter to YouTube to Snapchat and so on, will be an evolution of the algorithm beyond pure content selection, and an evolution of the presentation of that content into a broader array of templates.

It's a topic for another post.


Justin Fox of Bloomberg posted a piece related to my post and its discussion of brittle narratives. He notes that some folks have tried to address the problem of brittle narratives when it comes to sports. As an example, he links a video from Ben Falk's Cleaning the Glass, a popular new subscription service for basketball junkies from a former NBA front office staffer.

Writes Fox:

As with my experience in reading about and then watching UVA's Pack Line, it is also a reminder that there are narratives to sports events that go deeper than what can be plausibly condensed into standard highlight reels, and that casual viewers can be taught to appreciate them. I really am not much of a basketball fan, but Falk's explainer makes me want to observe James in action over extended periods to see if I can detect other such episodes of quiet brilliance. I probably won't; I've got way too many other things going on to add regular watching of the Cleveland Cavaliers to my schedule. But I am at least thinking about it.
In soccer, the sport I watch most on TV except in years when the Oakland A's are good, the highlight moments are so rare that you really can't appreciate the games unless you have some understanding (mine is admittedly pretty rudimentary and inarticulate) of the dramas playing out on the field between the scores and near-misses. In other sports, there have always been a few announcers who capably weave these background narratives into their work. I know Tim McCarver was driving most viewers crazy by the time he retired from calling baseball games in 2013, but I can remember him adding layer after layer to the game-watching experience in earlier years. From what I hear (I really don't watch much football), former Cowboys quarterback Tony Romo did that in his first go-round as an NFL analyst for CBS last season.
Right now, basketball seems to be generating the most such explanation, though. Maybe that's just because it's basketball season! But I also think there's a happy convergence of the sport's usually-in-motion nature; the emergence of a group of expert, articulate superfans that probably began with the rise of Bill Simmons; the NBA's willingness to accommodate superfans who know how to splice video; and the presence of stars who are not only very smart about the game (I imagine most basketball stars have always been that) but also willing and able to explain how it's played with startling clarity (a friend pointed me to Simmons's series of interviews with the Warriors' Kevin Durant, and what I've heard so far is pretty amazing). If sports are in fact in a battle with narrative brittleness, this is how you fight it.

He hits on something important. All the sports leagues have to deal with an onboarding problem with their televised content, and that is the learning curve of appreciation. If you haven't grown up watching and/or playing a sport, it's difficult to appreciate a lot of the moment to moment skill on display in any sporting event.

I did not grow up playing soccer, so I find so much of it boring to watch outside of the occasional spectacular goal. The ability of a team to keep possession, the skill of a single player like Messi to evade a gauntlet of defenders, so much of that skill is lost on me. The same goes for hockey, or cricket, or so many sports I didn't grow up with.

On the other hand, while many find baseball unbelievably boring, I played growing up, and so even a pitch that isn't swung is seen, by me, as one in a fascinating game theory exchange between pitcher and batter. One of the most exciting plays of the 2016 World Series to me was when Kyle Schwarber laid off a tantalizing slider from Andrew Miller because I knew what a great pitch it was and how much skill it took to not offer at it. For most viewers, it was just another ball, another twenty seconds of inconsequential activity.

The Olympics face this problem in spades because they include so many niche sports, but luckily for them, many of the events are short in duration, and the nature of the contest is easily explained. When it isn't, the networks lean heavily on personal narrative, something almost all viewers understand. We can debate until eternity whether Alina Zagitova or Evgenia Medvedeva deserved the gold medal in the women's figure skating final, but it didn't take an expert on figure skating to feel the tension backstage as each skater tried to get inside the other's head.

More forward-thinking sports leagues should consider, in the future, making it easier for analysts of all sorts to provide alternative commentary for their broadcasts. I'd be shocked if it didn't happen in my lifetime. Viewing your sport as a broadcast platform, with APIs allowing for such diversity of integrated analysis, would broaden the appeal to different audiences. As it is, some audiences cobble together such alternate peanut-gallery chatter from Twitter, Periscope, Facebook, and other social media. I predict leagues will start integrating this content; it makes much more sense than Twitter licensing those video rights to try to facilitate such water coolers. The water cooler is heavy, it's plugged into the wall, and it's expensive; easier to walk over there to chat than to try to carry the water cooler over to the discussion.

Clearing this learning curve of appreciation isn't sufficient, however. Beyond that, there still exists the problem of rendering your content more culturally relevant, at this moment, than anything else on a person's phone. Anyone who's sat across from someone, only to see their companion turn their attention to a smartphone, understands this modern conundrum.

This isn't just a problem for sports. In an age where Netflix is producing some 700 original series next year, not to mention all the ones from HBO and Amazon and Hulu and FX and on and on, every content provider has to become more thoughtful and creative about how to manufacture desire on the part of the viewer. The temptation, in tech, is to use some recommendations and machine learning to pick content to present to any one viewer, but that is going to be wholly insufficient.

When all you have is a hammer, everything looks like a nail, they say. When what you possess is lots of software engineers gifted at crunching large data sets, everything can look like an ML problem. That leaves huge swaths of human psychology on the table. There are still so many opportunities for so many services to render their content more relevant to a larger audience, a scary proposition to those who already find so many of their apps addictive.

Again, different categories of content tend to resort to the same narrow band of strategies as their competitors, but we live in an age where almost all content across all mediums acts as a substitute good for the rest, so companies and creatives should be widening their nets to learn from outside their category. The competition won't wrestle on your terms; the battle is asymmetric.

A full list of such strategies is a topic for another day, but I'd argue every company should be looking at everything from House of Highlights to infomercials to Buzzfeed to Disneyland theme parks to high fashion to Costco to Beyonce and Rihanna to the fine art world to YouTube vloggers like Logan Paul to the design of Fortnite to just about everything about Las Vegas to pop-up restaurants to limited edition sneaker drops to folks like Tyler Cowen and Ben Thompson.

If we, as consumers, are fighting to resist the Siren song, then on the flip side is a pitched battle to spin the Siren song that will rise above the din.

Now stop your ship and listen to our voices.
All those who pass this way hear honeyed song,
poured from our mouths.

Revisionist commentary

I don't know that I'm aware of enough entries in this category to even consider it one, but I'm a sucker for the union of political and film satire as embodied in alternate film commentaries.

I was reminded of it when seeing The People's History of Tatooine, which began as one of those spontaneous, emergent forms of Twitter humor that always brighten that otherwise dystopic landscape.

What if Mos Eisley wasn’t really that wretched and it was just Obi Wan being racist again?
What do you mean, “these blaster marks are too precise to be made by Sand People?” Who talks like that?
also Sand People is not the preferred nomenclature.
They have a rich cultural history that’s led them to survive and thrive under spectacularly awful conditions.
Mos Eisley may not look like much but it’s a bedroom community with decent schools and affordable housing.
You can just imagine Obi-Wan after years of being a Jedi on Coruscant being stuck in this place and just getting madder and madder.
yeah nobody cares that the blue milk is so much more artisanal on Coruscant
Obi-Wan only goes to Mos Eisley once every three months to get drunk and he basically becomes like Byron.

Years ago, I laughed at UNUSED AUDIO COMMENTARY BY HOWARD ZINN AND NOAM CHOMSKY, RECORDED SUMMER 2002 FOR THE FELLOWSHIP OF THE RING (PLATINUM SERIES EXTENDED EDITION) DVD, PART ONE (here is part 2, and here are all four of the parts of their commentary for Return of the King).

CHOMSKY: And here comes Bilbo Baggins. Now, this is, to my mind, where the story begins to reveal its deeper truths. In the books we learn that Saruman was spying on Gandalf for years. And he wondered why Gandalf was traveling so incessantly to the Shire. As Tolkien later establishes, the Shire’s surfeit of pipe-weed is one of the major reasons for Gandalf’s continued visits.
ZINN: You view the conflict as being primarily about pipe-weed, do you not?
CHOMSKY: Well, what we see here, in Hobbiton, farmers tilling crops. The thing to remember is that the crop they are tilling is, in fact, pipe-weed, an addictive drug transported and sold throughout Middle Earth for great profit.
ZINN: This is absolutely established in the books. Pipe-weed is something all the Hobbits abuse. Gandalf is smoking it constantly. You are correct when you point out that Middle Earth depends on pipe-weed in some crucial sense, but I think you may be overstating its importance. Clearly the war is not based only on the Shire’s pipe-weed. Rohan and Gondor’s unceasing hunger for war is a larger culprit, I would say.
CHOMSKY: But without the pipe-weed, Middle Earth would fall apart. Saruman is trying to break up Gandalf’s pipe-weed ring. He’s trying to divert it.
ZINN: Well, you know, it would be manifestly difficult to believe in magic rings unless everyone was high on pipe-weed. So it is in Gandalf’s interest to keep Middle Earth hooked.
CHOMSKY: How do you think these wizards build gigantic towers and mighty fortresses? Where do they get the money? Keep in mind that I do not especially regard anyone, Saruman included, as an agent for progressivism. But obviously the pipe-weed operation that exists is the dominant influence in Middle Earth. It’s not some ludicrous magical ring.

A bit more, because I can't help myself:

ZINN: Right. And here we receive our first glimpse of the supposedly dreadful Mordor, which actually looks like a fairly functioning place.
CHOMSKY: This type of city is most likely the best the Orcs can do if all they have are cliffs to grow on. It’s very impressive, in that sense.
ZINN: Especially considering the economic sanctions no doubt faced by Mordor. They must be dreadful. We see now that the Black Riders have been released, and they’re going after Frodo. The Black Riders. Of course they’re black. Everything evil is always black. And later Gandalf the Grey becomes Gandalf the White. Have you noticed that?
CHOMSKY: The most simplistic color symbolism.
ZINN: And the writing on the ring, we learn here, is Orcish — the so-called “black speech.” Orcish is evidently some spoliation of the language spoken in Rohan. This is what Tolkien says.

Somewhat related is this, The Passion of the Christ: Blooper Reel.

Christ, shackled to a stone, is being scourged by Roman soldiers. Blood runs down his gory back. His pain is palpable.
Jesus: [writhes in pain, hands shaking]
[Cell phone rings.]
Jesus: [hands shake furiously]
[Cell phone rings. Caviezel looks up, sheepish.]
Roman soldier: Jim? That you?
Jesus: Yeah.
[Cell phone rings.]
Soldier: Want me to get it?
Jesus: Yeah.
[Roman soldier gingerly reaches into Caviezel’s blood-soaked loincloth, pulls out phone and opens it, then holds the phone to Caviezel’s ear.]
Off Camera: [laughter]
Jesus: Hey, Mom.

Are there more in this genre? If so, please share!

Why Information Grows

It is hard for us humans to separate information from meaning because we cannot help interpreting messages. We infuse messages with meaning automatically, fooling ourselves to believe that the meaning of a message is carried in the message. But it is not. This is only an illusion. Meaning is derived from context and prior knowledge. Meaning is the interpretation that a knowledge agent, such as a human, gives to a message, but it is different from the physical order that carries the message, and different from the message itself. Meaning emerges when a message reaches a life-form or a machine with the ability to process information; it is not carried in the blots of ink, sound waves, beams of light, or electric pulses that transmit information.

From the book Why Information Grows by Cesar Hidalgo. I read this book back in 2017, but it's of no less interest now.

And it is the arrow of complexity—the growth of information—that marks the history of our universe and species. Billions of years ago, soon after the Big Bang, our universe did not have the capacity to generate the order that made Boltzmann marvel and which we all take for granted. Since then, our universe has been marching toward disorder, as Boltzmann predicted, but it has also been busy producing pockets that concentrate enormous quantities of physical order, or information. Our planet is a chief example of such a pocket.

When one first encounters the second law of thermodynamics, it's easy to tumble into despair at the pointlessness of everything. With the universe fated to dissipate into heat death eventually, what is the point of it all?

In this existential void, the presence of pockets of information and order can feel like symbols of rebellion, a raised fist spray painted on a fragment of wall that remains from a bombed-out building. In manifestations of order we see intent, in intent we interpret meaning, and in meaning we find comfort.

Information, when understood in its broad meaning as physical order, is what our economy produces. It is the only thing we produce, whether we are biological cells or manufacturing plants. This is because information is not restricted to messages. It is inherent in all the physical objects we produce: bicycles, buildings, streetlamps, blenders, hair dryers, shoes, chandeliers, harvesting machines, and underwear are all made of information. This is not because they are made of ideas but because they embody physical order. Our world is pregnant with information. It is not an amorphous soup of atoms, but a neatly organized collection of structures, shapes, colors, and correlations. Such ordered structures are the manifestations of information, even when these chunks of physical order lack any meaning.

There are plenty of books on information theory, and viewing the universe through the lens of information and computation is increasingly popular, but Hidalgo's book is more readable than most.

To battle disorder and allow information to grow, our universe has a few tricks up its sleeve. These tricks involve out-of-equilibrium systems, the accumulation of information in solids, and the ability of matter to compute.
It is the growth of information that unifies the emergence of life with the growth of economies, and the emergence of complexity with the origins of wealth.
In twenty-six minutes Iris traveled from the ancientness of her mother’s womb to the modernity of twenty-first-century society. Birth is, in essence, time travel.

Birth as time travel is one of those metaphors that, once heard, lodges in your mind like something you always knew. When Arnold Schwarzenegger time travels back from the future to the modern day in The Terminator, he arrives naked, like a newborn.

[It is unclear why a cyborg from the future speaks with a thick Austrian accent, one of the only mysteries I have always hoped would be explained in some throwaway expository joke. My guess is that the voice was a marketing Easter Egg, like celebrity voices in Waze, and someone forgot to flip the Terminator back to its factory default voice before sending it back in time.]

Humans are special animals when it comes to information, because unlike other species, we have developed an enormous ability to encode large volumes of information outside our bodies. Naively, we can think of this information as the information we encode in books, sheet music, audio recordings, and video. Yet for longer than we have been able to write we have been embodying information in artifacts or objects, from arrows to microwave ovens, from stone axes to the physical Internet. So our ability to produce chairs, computers, tablecloths, and wineglasses is a simple answer to the eternal question: what is the difference between us, humans, and all other species? The answer is that we are able to create physical instantiations of the objects we imagine, while other species are stuck with nature’s inventory.

Another reason humans wouldn't evolve on a gaseous planet like Jupiter, besides the fact that we'd just burn up, is that without any solids we'd have no way of encoding information to pass on to future generations. Therefore, any advanced civilization in the universe would, it would seem, live in physical conditions that allow for the formation of solids, but not solids that are too rigid.

The temperature band matters. We need solids that are malleable to encode richer sets of information. Add to that the ability to compute, which we see at all scales in our world, down to the cellular level, and suddenly you have life. There is logic to why we look for specific conditions in the universe as precursors for life, and the search can be defined more broadly than just looking for water, which is a downstream condition. Further upstream, we just want a planet with solids, in a particular band of temperatures.

Such conditions allow living creatures to record and pass along information to the next generation. When humans finally were able to do so, they in effect conquered time. No longer did the knowledge of one generation evaporate into the sinkhole of mortality.

The car’s dollar value evaporated in the crash not because the crash destroyed the atoms that made up the Bugatti but because the crash changed the way in which these were arranged. As the parts that made the Bugatti were pulled apart and twisted, the information that was embodied in the Bugatti was largely destroyed. This is another way of saying that the $2.5 million worth of value was stored not in the car’s atoms but in the way those atoms were arranged. That arrangement is information.
So the value of the Bugatti is connected to physical order, which is information, even though people still debate what information is. According to Claude Shannon, the father of information theory, information is a measure of the minimum volume of communication required to uniquely specify a message. That is, it’s the number of bits we need to communicate an arrangement, like the arrangement of atoms that made the Bugatti.
The group of Bugattis in perfect shape, however, is relatively small, meaning that in the set of all possible rearrangement of atoms—like people moving in a stadium—very few of these involve a Bugatti in perfect condition. The group of Bugatti wrecks, on the other hand, is a configuration with a higher multiplicity of states (higher entropy), and hence a configuration that embodies less information (even though each of these states requires more bits to be communicated). Yet the largest group of all, the one that is equivalent to people sitting randomly in the stadium, is the one describing Bugattis in their “natural” state. This is the state where iron is a mineral ore and aluminum is embedded in bauxite. The destruction of the Bugatti, therefore, is the destruction of information. The creation of the Bugatti, on the other hand, is the embodiment of information.
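Hidalgo's multiplicity argument is Boltzmann's entropy formula, S = k log W, in disguise: the fewer arrangements of atoms that qualify as a macrostate, the lower its entropy and the more physical order it embodies. A toy sketch, using made-up multiplicities rather than any figures from the book, makes the ordering concrete:

```python
import math

# Hypothetical multiplicities: how many atomic arrangements count as
# each macrostate. The numbers are illustrative, not from the book.
arrangements = {
    "pristine Bugatti": 1e3,          # very few arrangements qualify
    "wrecked Bugatti": 1e12,          # many ways to be a wreck
    "raw ore (natural state)": 1e30,  # vastly more disordered states
}

for state, W in arrangements.items():
    entropy_bits = math.log2(W)  # Boltzmann-style entropy, in bits
    print(f"{state}: ~{entropy_bits:.0f} bits of entropy")
```

The ordering, not the absolute numbers, is the point: the rarer the configuration, the more order, and therefore information, it embodies.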

One can separate out the intrinsic value of an item, defined above as the rarity of the state of the configuration of that item, from the external value of an item, as defined by qualities such as symbolic or emotional ones, like nostalgia.

In Pulp Fiction, Bruce Willis risks life and limb to recover a watch given to him by his father. There's no evidence it's a particularly rare watch; he could likely buy another just like it. But its symbolic value to him is extrinsic to the item yet tethered to it the way a genie is trapped in a magic lantern (and that special meaning is conveyed in the now immortal speech by Christopher Walken).

Even the most rational people I know own something that's not physically rare but emotionally rich, a talisman or totem that they use to summon whatever power it holds, whether it be nostalgia or regret or some other enchantment known only to themselves.

What Shannon teaches us is that the amount of information that is embodied in a tweet is equal to the minimum number of yes-or-no questions that Brian needs to ask to guess Abby’s tweet with 100 percent accuracy. But how many questions is that?
Shannon’s theory tells us that we need 700 bits, or yes-or-no questions, to communicate a tweet written using a thirty-two-character alphabet. Shannon’s theory is also the basis of modern communication systems.

One mathematical reason for the rising usage of emoji on Twitter and other forms of online communication may be that it increases the amount of information that can be encoded in 140 (and now 280) characters.
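Shannon's count is easy to reproduce: with a uniform alphabet, each symbol carries log2(alphabet size) bits, so a 140-character tweet over a 32-symbol alphabet takes 140 × 5 = 700 bits. A minimal sketch (the 1,024-symbol emoji-augmented vocabulary is my own hypothetical, not Shannon's or Hidalgo's):

```python
import math

def message_bits(alphabet_size: int, length: int) -> float:
    """Minimum number of yes-or-no questions (bits) needed to pin down
    one message of `length` symbols from a uniform alphabet."""
    return length * math.log2(alphabet_size)

# Hidalgo's example: a 140-character tweet over a 32-symbol alphabet
print(message_bits(32, 140))    # 700.0

# A hypothetical 1,024-symbol vocabulary (letters plus emoji) packs
# twice as many bits into the same 140 characters
print(message_bits(1024, 140))  # 1400.0
```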

You'll recall from earlier that the third of the three conditions that allow information to grow is the ability of matter to compute.

To illustrate the prebiotic nature of the ability of matter to process information, we need to consider a more fundamental system. Here is where the chemical systems that fascinated Prigogine come in handy. Consider a set of chemical reactions that takes a set of compounds {I} and transforms them into a set of outputs {O} via a set of intermediate compounds {M}. Now consider feeding this system with a steady flow of {I}. If the flow of {I} is small, then the system will settle into a steady state where the intermediate inputs {M} will be produced and consumed in such a way that their numbers do not fluctuate much. The system will reach a state of equilibrium. In most chemical systems, however, once we crank up the flow of {I} this equilibrium will become unstable, meaning that the steady state of the system will be replaced by two or more stable steady states that are different from the original state of equilibrium. When these new steady states emerge, the system will need to “choose” among them, meaning that it will have to move to one or the other, breaking the symmetry of the system and developing a history that is marked by those choices. If we crank up the inflow of the input compounds {I} even further, these new steady states will become unstable and additional new steady states will emerge. This multiplication of steady states can lead these chemical reactions to highly organized states, such as those exhibited by molecular clocks, which are chemical oscillators, compounds that change periodically from one type to another. But does such a simple chemical system have the ability to process information? Now consider that we can push the system to one of these steady states by changing the concentration of inputs {I}. Such a system will be “computing,” since it will be generating outputs that are conditional on the inputs it is ingesting. It would be a chemical transistor. In an awfully crude way this chemical system models a primitive metabolism.
In an even cruder way, it is a model of a cell differentiating from one cell type to another—the cell types can be viewed abstractly as the dynamic steady states of these systems, as the complex systems biologist Stuart Kauffman suggested decades ago. Highly interacting out-of-equilibrium systems, whether they are trees reacting to the change of seasons or chemical systems processing information about the inputs they receive, teach us that matter can compute. These systems tell us that computation precedes the origins of life just as much as information does. The chemical changes encoded by these systems are modifying the information encoded in these chemical compounds, and therefore they represent a fundamental form of computation. Life is a consequence of the ability of matter to compute.

What's lovely about all of these conditions that allow information to grow is their seeming relevance to individuals and groups of individuals, like corporations or societies or markets.

Humans are concentrated bundles of information with compute power, and when we push ourselves out of equilibrium, we accumulate information. When we crank up our inputs and force ourselves out of our own equilibrium, as we do when we become students, we grow as we restore ourselves to steady state. Whenever anyone complains that they're in a rut, I always counsel them to force themselves out of equilibrium.


That covers much of the first half of the book, all fascinating. However, the part of the book that's of broader interest to a business audience is Hidalgo's discussion of the economy as a creator of information.

It's easiest to understand the information creation capacity of an economy by examining its outputs, and the simplest outputs to understand are physical products.

Thinking about products as crystals of imagination tells us that products do not just embody information but also imagination. This is information that we have generated through mental computations and then disembodied by creating an object that mimics the one we had in our head. Edible apples existed before we had a name for them, a price for them, or a market for them. They were present in the world. As a concept, apples were simply imported into our minds. On the other hand, iPhones and iPads are mental exports rather than imports, since they are products that were begotten in our minds before they became part of our world. So the main difference between apples and Apples resides in the source of their physical order rather than in their embodiment of physical order. Both products are packets of information, but only one of them is a crystal of imagination.

Like many navel gazers in the tech industry, I'm guilty of stereotyping companies. Apple's strength is integrated hardware and software, Google is the king of machine learning and crunching large data sets, Facebook is the social network to end all social networks, and Amazon is the everything platform.

However, if you haven't worked at or been inside any of those companies, it's fairest to judge them as black boxes: inputs disappear into them and emerge as various outputs, usually products and services like gadgets or websites or applications. Everything else is a mild form of fan fiction.

By analyzing a company's outputs, one can deduce a great deal about its capabilities. Hidalgo does the same but at the country level.

The idea of crystallized imagination tells us that a country’s export structure carries information about more than just its abundance of capital and labor. A country’s export structure is a fingerprint that tells us about the ability of people in that country to create tangible instantiations of imaginary objects, such as automobiles, espresso machines, subway cars, and motorcycles, and of course about the myriad of specific factors that are needed to create these sophisticated products. In fact, the composition of a country’s exports informs us about the knowledge and knowhow that are embodied in that country’s population.

A country that can export a product like an iPhone generally has greater generative power than one that can only export raw materials like bananas. The telltale clues to the economic potential of a country lie not in its imports but its exports.

So what has any of this to do with Chile? The only connection between Chile and the history of electricity comes from the fact that the Atacama Desert is full of copper atoms, which, just like most Chileans, were utterly unaware of the electric dreams that powered the passion of Faraday and Tesla. As the inventions that made these atoms valuable were created, Chile retained the right to hold many of these atoms hostage. Now Chile can make a living out of them. This brings us back to the narrative of exploitation we described earlier. The idea of crystallized imagination should make it clear that Chile is the one exploiting the imagination of Faraday, Tesla, and others, since it was the inventors’ imagination that endowed copper atoms with economic value. But Chile is not the only country that exploits foreign creativity this way. Oil producers like Venezuela and Russia exploit the imagination of Henry Ford, Rudolf Diesel, Gottlieb Daimler, Nicolas Carnot, James Watt, and James Joule by being involved in the commerce of a dark gelatinous goo that was virtually useless until combustion engines were invented. Making a strong distinction between the generation of value and the appropriation of monetary compensation helps us understand the difference between wealth and economic development. In fact, the world has many countries that are rich but still have underdeveloped economies. This is a distinction that we will explore in detail in Part IV. But making this distinction, which comes directly from the idea of crystallized imagination, helps us see that economic development is based not on the ability of a pocket of the economy to consume but on the ability of people to turn their dreams into reality. Economic development is not the ability to buy but the ability to make.

At a corporate level, I can recall an age when Sony was the king of consumer electronics the world over. I first coveted a Walkman, then later a Discman. Our family spent its formative years huddled around a giant (at the time) Sony Trinitron TV, and we were the envy of all my friends for owning one. I looked forward to any trip to Japan for a chance to walk the electronics districts to purchase the coolest gadgets on the planet, and for years I owned a Minidisc player model that you couldn't find in the U.S.

And then the world shifted, and the gadget which subsumed all other gadgets was the computer, and as it shrank in size while growing in computational power, the way we interacted with such devices increasingly became software-based. In that competition, the vector which mattered more than anything became software design, a skill Sony had not mastered.

The company that understood both software and hardware design better than any company in the world happened to be located in Silicon Valley, not Japan, and, after a long Wintel interregnum, caused by a number of business factors covered comprehensively elsewhere, Apple's unique skills found themselves in a universe they could really dent. And dent they did.

Thanks especially to the market opportunity created by the smartphone, which it seized with the iPhone, Apple not only surpassed Sony and moved the balance of power in consumer technology across the Pacific Ocean to American shores but became the most valuable company in the entire world.


Not all information is easily embodied. For example, for a while I puzzled over what I'll call the Din Tai Fung Paradox.

Din Tai Fung is a restaurant chain, and I visited the original outlet in Taipei decades ago with my mother. They're known for their Shanghainese soup dumplings, made with a very delicate wrap that somehow never breaks and dumps its precious cargo of pork broth until the moment at which you prod it with your chopsticks just so. Some will argue whether Din Tai Fung is all that and a bucket of chicken, but at a minimum I find the menu to be satisfying comfort food done consistently, in a setting that is usually cleaner and better kept than your average chain restaurant outlet. You'll find superior deals from a street vendor and more elaborate preparation at a higher-end restaurant, but Din Tai Fung industrializes and scales a Chinese staple. We don't pay enough attention to scale.

The mystery is why Din Tai Fung has opened so few outlets; they've only dropped locations in a handful of cities in about ten countries in the world, and every Din Tai Fung is packed solid from open to close, with the type of ever-present line of humans snaking outside the front door that you so rarely see at any restaurant, let alone a chain.

For a few months, a new outlet was rumored to be opening in San Francisco soon, and among my friends it was as momentous a rumor as if a new Star Wars teaser trailer had dropped. Ultimately, one opened in the Bay Area, but in Santa Clara instead of San Francisco. 

Which leads to a further mystery: why haven't any competing chains opened up to make the same items to fill the market void? I would never open a restaurant, but my family knows I'd make an exception if I were granted the opportunity to open a branch of Din Tai Fung anywhere. I bring it up every family gathering, when there's a lull in the conversation. Forget cryptocurrency, I want to mint me some Din Tai Fung coin.

At every Din Tai Fung I've been to, they have a glass window so you can look into the kitchen to see the soup dumplings being wrapped, always by kitchen staff wearing white uniforms, almost like lab assistants, an impression magnified by the branches that require face masks. It's rumored that the branches in Asia try to hire the tallest, most attractive men to man the soup dumpling assembly line, but it sounds about as true as a lot of things my aunts and uncles tell me, which is to say it's more credible than I'd care to admit.

The hermetic vibe behind the glass is as far from the vendor selling goods from a street cart as possible; some find street food charming, but if you're taking this food to a global audience it needs to be sanitized or sterilized, the same way movies for the Chinese market strip out any storylines that might offend. It's not just the front of the house that's immaculate; the show behind the glass says they have nothing to hide. It's the equivalent of the blackjack dealer at a casino clapping and turning their hands one way and the other before moving to the next table.

More interesting to me is that Din Tai Fung doesn't even bother to hide the process behind its staple dish; the evidence is on thousands of smartphones by this point, as everyone seems to stop to take a photo or video of the assembly line while waiting for their table.

And yet any Chinese food fan knows it's notoriously hard to find a good soup dumpling. In this age where recipes for almost anything are available online for free, why can't you find a good soup dumpling in most major cities in the world? Or, for that matter, a good burrito, or any dish you love? Why are these crystals of imagination so unevenly distributed when the recipes for making them are so broadly available?

The answer, as any home chef who has tried to make a dish from some highfalutin cookbook knows, is simple: you can have the most precise ingredient list and directions and still struggle to make anything approaching what you ate, whether it came from a $400 tasting menu or a mainstream cookbook. Cooking is not nearly as deterministic as the term recipe implies.

Slight variations in environment, weather, ingredients, and cookware can lead to massive differences in the final product. Your oven may say 400 degrees, but the actual temperature inside, at the precise spot where you've placed your baking dish, may be different. That celery you use for your mirepoix today may not be as fresh as the celery you used last week. The air pressure where you're cooking may differ from one day to the next, and the bacteria in the air may vary too. Great chefs appear on Top Chef and flail making dishes they've made hundreds of times in their own restaurant kitchens because every bit of environmental variation matters.

We may glamorize the image of the genius, heroic chef, working magic to create a delicious and beautifully plated dish that a waiter places before us with a balletic flourish, but the true value creation in a restaurant comes from translating that moment of genius into a rote, repeatable cycle. The popularity of sous vide as a cooking technique, even at high-end restaurants, comes down to its repeatable precision and accuracy. Ask any chef and they'll tell you the value of a line cook who can cook dozens of proteins to the right level of doneness every time, given the high cost of fish and meat.

In addition to all those conditionals, much of cooking skill comes down to learned muscle memory and pattern recognition that can only be encoded in a human being through repeated trial and error. I tried to learn some of my favorites among my mother's and grandmother's dishes by writing down recipes they dictated to me, but much was lost in translation. Like so much maternal magic, it could only be learned, truly, at their side, with an apron on, watching, imitating, botching one dish after another, until some of it seeped into my bones.

In a memorable segment from the documentary Jiro Dreams of Sushi, apprentice Daisuke Nakazawa is assigned the job of making egg sushi, or tamago. He believes it will be simple, but again and again, Jiro rejects his work. Nakazawa ends up making over 200 rejected samples until finally, one day, Jiro approves. Nakazawa cries in relief and joy.


Hardware and software are not like cooking. When knowledge and instructions can be encoded in bits, a level of precision is possible that is effectively, for the purposes of this discussion, deterministic. Manufacturing a hundred million iPhones is like food production, but not the type done in high end or home kitchens. Instead, it is more like producing a hundred million Oreos.

There is one country in the world where that many iPhones can be manufactured at a cost that allows Apple to reap its insane profits: China. I can't think of any other country in the world, not India or Mexico or the United States, or all of Europe together, that can make that many iPhones for that price to meet the market demand year after year. Some countries have the labor but not the skills, others have the skills but not enough labor, and others just can't do the work as cheaply as China can.

Recall that the potential of an economy can be judged by the complexity of its exports. Based on that, it's difficult to imagine an economy outside of the U.S. with more potential than China. Some of the most complex products in the world, and the iPhone deserves to be on that list, are made in China.

I've backed many a Kickstarter hardware project, and without fail, every one has been made in China, usually Shenzhen. Inevitably, when the products are delayed, the project's creators will send an update with some photos of a few of them in China, at some plant, examining some part that will get the project back on track, or with their arms around a few Chinese plant managers giving a thumbs up sign.

Kickstarter often feels like an industrial and software design and marketing layer grafted on top of the manufacturing capabilities of Shenzhen. It is an early warning indicator of China's economic potential, and of the gap that remains in realizing it.

Here is another. Foxconn assembles iPhones for Apple, and for their efforts they make anywhere from $8 to $30 per iPhone, depending on which article you believe. Whatever the precise figure, we know it is in that neighborhood.

Apple, in contrast, makes hundreds of dollars per iPhone. They earn that premium, many multiples what Foxconn earns, by virtue of being the ones who designed every aspect of the phone, from the software to the hardware. China can supply labor and even sometimes components, but the crystal of imagination that is the iPhone, perhaps the most valuable such crystal in the history of the world, comes almost entirely from the imagination of employees of Apple. Foxconn is one cog in a long supply chain, and that link isn't the one made of gold.

However, to even have the capability of making an iPhone for less than the cost of a lunch in San Francisco is a skill, one China has shown again and again. Many another country wishes it had such a demonstrated skill. Were China ever able to gain some of the software and industrial design skills of a company like Apple, they would be even more of an economic powerhouse than they are now.

That's a massive conditional. It's not something that can be learned by mere handwaving or even sheer industriousness. After all, Sony could return to its former glory, or Samsung be even more dominant globally, if software design skills were so easily learned.

Someone someday will write a book of the history of software design and how it came to be that Apple built up that capability more than any other technology company, and I'll be among its most eager readers because it's an untold story that holds the key to one of the greatest value creation stories in the history of business.

...our world is one in which knowledge and knowhow are “heavier” than the atoms we use to embody their practical uses. Information can be moved around easily in the products that contain it, whether these are objects, books, or webpages, but knowledge and knowhow are trapped in the bodies of people and the networks that these people form. Knowledge and knowhow are so “heavy” that when it comes to a simple product such as a cellphone battery, it is infinitely easier to bring the lithium atoms that lie dormant in the Atacama Desert to Korea than to bring the knowledge of lithium batteries that resides in Korean scientists to the bodies of the miners who populate the Atacaman cities of Antofagasta and Calama. Our world is marked by great international differences in countries’ ability to crystallize imagination. These differences emerge because countries differ in the knowledge and knowhow that are embodied in their populations, and because accumulating knowledge and knowhow in people is difficult. But why is it hard for us to accumulate the knowledge and knowhow we need to transform our dreams into reality?

If knowledge were so easy to transfer, I'd be a three-star Michelin Chef because someone gifted me a copy of the Eleven Madison Park cookbook.

Getting knowledge inside a human’s nervous system is not easy because learning is both experiential and social. To say that learning is social means that people learn from people: children learn from their parents and employees learn from their coworkers (I hope). The social nature of learning makes the accumulation of knowledge and knowhow geographically biased. People learn from people, and it is easier for people to learn from others who are experienced in the tasks they want to learn than from people with no relevant experience in that task. For instance, it is difficult to become an air traffic controller without learning the trade from other air traffic controllers, just as it is difficult to become a surgeon without having ever been an intern or a resident at a hospital. By the same token, it is hard to accumulate the knowhow needed to manufacture rubber tires or an electric circuit without interacting with people who have made tires or circuits. Ultimately, the experiential and social nature of learning not only limits the knowledge and knowhow that individuals can achieve but also biases the accumulation of knowledge and knowhow toward what is already available in the places where these individuals reside. This implies that the accumulation of knowledge and knowhow is geographically biased.

What governs the information production capacity of a country? Hidalgo coins two terms to analyze this problem. One is the personbyte.

We can simplify this discussion by defining the maximum amount of knowledge and knowhow that a human nervous system can accumulate as a fundamental unit of measurement. We call this unit a personbyte, and define it as the maximum knowledge and knowhow carrying capacity of a human.

The other term is firmbyte.

The limited proliferation of megafactories like the Rouge implies that there must be mechanisms that limit the size of the networks we call firms and make it preferable to disaggregate production into networks of firms. This also suggests the existence of a second quantization limit, which we will call the firmbyte. It is analogous to the personbyte, but instead of requiring the distribution of knowledge and knowhow among people, it requires them to be distributed among a network of firms.

Hidalgo then delves a bit into Coase's transaction cost theory of the firm. Traditionally, Coase's theory is used as a way to explain why firms are fundamentally limited in their size, the idea being that at some size, external transactions become cheaper than internal coordination costs and so it's more efficient to just transact externally rather than produce internally.

I'm not interested in examining that topic now. Instead, let's assume that all firms have some asymptote in size beyond which Coase's anchor becomes too heavy. The interesting implication is that, given the firmbyte ceiling, if some chunk of knowledge and knowhow exceeds that capacity, it can only be carried by a network of firms.

It's long been said that the center of the technology universe shifted from Boston's Route 128 to Silicon Valley because California banned non-competes (here's one study). Hidalgo's theory of the finite knowledge-carrying capacity of a network of humans and firms explains how this works. The free movement of employees in Silicon Valley allows the region's knowledge-carrying capacity to increase at the expense of any single firm's benefit. Per Coase, the cost of information movement in Silicon Valley, as embodied by an employee carrying a personbyte from one firm to the next, is lower than it was in the Route 128 corridor.

Let's telescope back out to the country level. What applies at the regional or industry level holds at the country level. A country's knowledge carrying capacity, and thus its information production power, is influenced in part by the size of networks it can form.

In his 1995 book Trust, he [Francis Fukuyama] argues that the ability of a society to form large networks is largely a reflection of that society’s level of trust. Fukuyama makes a strong distinction between what he calls “familial” societies, like those of southern Europe and Latin America, and “high-trust” societies, like those of Germany, the United States, and Japan.
Familial societies are societies where people don’t trust strangers but do trust deeply the individuals in their own families (the Italian Mafia being a cartoon example of a familial society). In familial societies family networks are the dominant form of social organization where economic activity is embedded, and are therefore societies where businesses are more likely to be ventures among relatives. By contrast, in high-trust societies people don’t have a strong preference for trusting their kin and are more likely to develop firms that are professionally run. Familial societies and high-trust societies differ not only in the composition of the networks they form—as in kin and non-kin—but also in the size of the networks they can form. This is because the professionally run businesses that evolve in high-trust societies are more likely to result in networks of all sizes, including large ones. In contrast, familial societies are characterized by a large number of small businesses and a few dominant families controlling a few large conglomerates.
Yet, as we have argued before, the size of networks matters, since it helps determine the economic activities that take place in a location. Larger networks are needed to produce products of higher complexity and, in turn, for societies to achieve higher levels of prosperity. So according to Fukuyama, the presence of industries of different sizes indicates the presence of trust. In his own words: “Industrial structure tells an intriguing story about a country’s culture. Societies that have very strong families but relatively weak bonds of trust among people unrelated to one another will tend to be dominated by small, family-owned and managed business. On the other hand, countries that have vigorous private nonprofit organizations like schools, hospitals, churches, and charities, are also likely to develop strong private economic institutions that go beyond the family.”

In Tyler Cowen's conversation with economist Luigi Zingales, the latter hints at the limitations of familial economies in humorous fashion:

One friend of mine was saying that the demise of the Italian firm family structure is the demise of the Italian family. In essence, when you used to have seven kids, one out of seven in the family was smart. You could find him. You could transfer the business within the family with a little bit of meritocracy and selection.
When you’re down to one or two kids, the chance that one is an idiot is pretty large. The result is that you can’t really transfer the business within the family. The biggest problem of Italy is actually fertility, in my view, because we don’t have enough kids. If you don’t have enough kids, you don’t have enough people to transfer. You don’t have enough young people to be dynamic.
The Italian culture has a lot of defects, but the entrepreneurship culture was there, has been there, and it still is there, but we don’t have enough young people.

Low fertility's impact on economies is an issue globally, for example in Japan, but low trust outside of family is an even broader constraint on the knowledge carrying capacity of an economy. If you can't form as large a firm as another country, you can't compete in some businesses and the information producing capability of your economy has a lower ceiling.

If you run a company, you're no doubt familiar with the efficiency gains that arise when different employees and departments operate with high trust. Links form easily given an assumption of low risk, and knowledge moves more quickly, fluidly. Networks then facilitate trust in a virtuous cycle, an example being the military as an integrating institution in a multi-ethnic society.

Trust based on family has its own advantages, but for now I'm focused on an economy's ceiling, and networks that throw off the shackles of family-based firms can scale more. China not only has the population to supply a workforce that can assemble 100 million iPhones in a year, it has an economy that has moved beyond any roots in family-based trust.

Hidalgo's theory also explains why we don't see geographic leakage in industry know-how. Why aren't there Silicon Valleys everywhere?

The personbyte theory can also help us explain why large chunks of knowledge and knowhow are hard to accumulate and transfer, and why knowledge and knowhow are organized in the hierarchical pattern that is expressed in the nestedness of the industry-location data. This is because large chunks of knowledge and knowhow need large networks of people to be embodied in, and transferring or duplicating large networks is not as easy as transferring a small group of people. As a result, industry-location networks are nested, and countries move to products that are close by in the product space.

When the knowledge required to create something like an iPhone or a Hollywood film requires the interaction of multiple people, with all their accumulated knowledge, seizing it for yourself isn't as easy as poaching one employee or sprinting off with a burning branch to give fire to mankind like Prometheus. Thus we understand why, besides its weather, LA has such a grip on filmmaking for the global market, why other handset manufacturers can't just reverse-engineer an iPhone, and why, despite having hundreds of millions of users on iMessage, Apple isn't a credible threat in social networking.

When I study the Chinese tech market, I see an incredibly high ceiling. In fact, the Chinese consumer tech market is more dynamic now than its counterpart in Silicon Valley. Once, China was belittled for simply copying all the U.S. tech companies. It's true, there is a Chinese Bizarro instance of every successful U.S. tech company: a Chinese Google, Facebook, Amazon, Twitter, Instagram, YouTube, and so on.

Thanks to that complex interaction of culture and technology, however, China now creates companies with no real American equivalents, and that extends beyond WeChat. China also has more dense cities than America does, and density creates its own unique consumer technology opportunities. You'll run out of fingers and toes before you count down to a Chinese city as small as New York City, and that matters when so many social products rely on metropolitan density as dry kindling.

The competition between tech companies in the U.S. draws scandalized chatter from the peanut gallery, but the pace at which something like Snapchat Stories was copied in the U.S. would be seen as laughably slow in China. Not only are features of competitors routinely copied within a week or two in China, employees are poached all the time in what is closer to a true approximation of a free labor market than even Silicon Valley. Knowledge moves quickly, freely.

Three things, in my observation, hold Chinese companies back from capturing more share in the international market, outside Chinese borders. Two are related: the lack of an internationally appealing industrial design aesthetic, and the same gap in software design.

It's true, many of the people who find Chinese software UIs busy and crowded can't read Chinese and thus don't appreciate their localized appeal to the Chinese market. As eye-tracking studies have shown (example), Chinese users scan pages differently, and why shouldn't they, considering the fundamental differences between an alphabetic language like English and a logographic one like Chinese?

Still, most of the international market can't read Chinese. In my past work with UI designers in China, I found it took more prompting to arrive at something broadly intuitive for, say, an American market.

The same goes for industrial design, where, akin to the denser informational aesthetic of Chinese software, a somewhat more maximalist impulse takes hold. It's still quite common to walk into an Asian electronics superstore and see display signage listing dozens of bullet points of features to sell a product. Contrast that with the almost non-existent signage in an Apple Store, the extreme opposite.

A more tangible example is the user interface of everyone's favorite cooking gadget, the Instant Pot. I received one as a gift last year, and no doubt, I think it's a real value at $80 or so for the base model. Given how harried we all feel, a pressure cooker is a more useful kitchen gadget than most.

However, this is the instrument panel on the front of the Instant Pot.

In practice, it's even more confusing than it appears at first glance. I won't delve into it here, but with a simple design pass, the entire UI could be made much less intimidating, much more intuitive. Given what any pressure cooker actually does, its controls could be reduced to much simpler instrumentation.

These two skill gaps in software and industrial design leave open a continued Kickstarter arbitrage opportunity: slap more internationally appealing software, industrial design, and marketing on top of Shenzhen's manufacturing capabilities.

The last thing holding back more Chinese startups, in my experience, is a shortage of professional management talent. I know, I know, MBAs get a bad rap in the domestic market, but having CEOs with engineering backgrounds at so many Chinese tech companies comes with its own drawbacks.

This management gap may be related to the style of org structure and management which others have mentioned to me as less conducive to certain types of innovation, though it's harder for me to assess without having worked inside a Chinese company.

None of this needs to matter since the Chinese market is so massive. Chinese startups can succeed wildly without ever making a peep outside their home territory. Besides, how a design aesthetic and process can seep into a country's soul remains a mystery to me, but my guess is it's about as slow-moving as trying to produce a high quality soup dumpling in a new market.

Still, I love to muse on the potential of China. In fact, there is one Chinese company that best exemplifies the potential of the country's tech market on the international stage. Last summer, a friend of mine who had worked at this company heard I was in the market for a drone and referred me to a friend who was selling an extra, lightly used Mavic Pro which he'd purchased after he thought he'd lost his original.

I don't know the first thing about flying drones, but it took me all of fifteen minutes or so to get the thing up and flying around in the air, capturing 4K video. It is a fantastic feat of engineering, probably still the single drone I'd recommend to anyone looking to get into drone photography (though I recommend getting a bundle with some extra batteries and a carrying case).

DJI had a few advantages in surging to its undisputed leadership position in the global drone market. First, this is a product category in which making the product actually work well is a more complex task than in most others. Many drones just don't fly that well. Being an engineering-led company is a strength here, and as long as the industrial design is optimized for flight, it doesn't really matter if your product isn't the sleekest. You won't care what it looks like when it's several hundred feet up in the air.

Second, from a software design perspective, drone UI design can piggyback on flight UI templates that have been worked out over the years. One reason I was up and running so quickly with my Mavic Pro is that the flight sticks imitate video game flight controls. The UI isn't quite as simple as I'd like, but I was fluent much more quickly than the hefty page count of the instruction manual implied.

Estimates of DJI's market share vary, but they are all well north of 50%, and most of its competitors have either left the market entirely or are struggling to stay aloft, so to speak. Here is a vertically integrated Chinese company that most definitely makes more per unit than the cheap-SF-lunch margin Foxconn earns on each iPhone it assembles.

Now, making drones and building smartphones or writing apps are not the same skills. Drones, as exciting as they are, still aren't the type of thing I'd recommend except to photography enthusiasts. And China is a long way from dominating the consumer electronics market internationally with a massive portfolio of products from domestic, vertically integrated companies.

But the ceiling at least exists; it's not theoretical. That's more than any other country outside the U.S. can say, and how close China comes to that ceiling is one of the questions that will determine the relative economic power of China versus the United States in this century.


That China can export drones more easily than it can import, say, the software and industrial design know-how of a company like Apple is, at a higher level, a fundamental question of how we pass along knowledge of any sort. Why are we not better at transferring know-how to industrial workers who are out of a job? Why hasn't the internet produced a global leveling of industrial know-how at the country level?

Hidalgo notes:

At a finer scale, economies still lack the intimate connection between knowhow and information that is embodied in DNA and which allows biological organisms to pack knowhow so tightly. A book on engineering, art, or music can help develop an engineer, artist, or musician, but it can hardly do so with the elegance, grace, and efficiency with which a giraffe embryo unpacks its DNA to build a giraffe. The ability of economies to pack and unpack knowhow by physically embodying instructions and templates as written information is much more limited than the equivalent ability in biological organisms.

We are nowhere near our maximum throughput for passing on our knowledge to our fellow man, let alone across the membranes between companies and economies. In The Matrix, with a few seconds of fluttering eyelids, Keanu Reeves downloads an entire martial art into his brain.

Give a man a kung-fu, you make him Neo. Teach a man to kung-fu, you make him John Wick.

That is the dream. Ask any parent in the midst of trying to get a three-year-old to eat dinner without throwing half of it on the ground and they'll nod in agreement. What is our version of nature's DNA-and-cell school of knowledge compression and decompression?

One of the reality TV shows which I wish existed would be one in which a variety of masters in their field compete to take absolute novices from a standing start as far as possible in a finite period of time. Instead of Top Chef, in which contestants are all successful chefs already, I want three master chefs to have to each train a handful of complete cooking dunces over a several month process, and the teacher and winning student share the pot.

Each season could feature a different skill. In another season, maybe the world's top three piano teachers have to train people who've never played the piano in their lives to sight-read Chopin. Bill Belichick and Nick Saban coach two youth league football teams to see which wins a season-finale scrimmage. I'm sure some of you will write to tell me that some version of this show already exists, and I've seen some that come close, but almost all of them spend much too little time on the actual instruction methodology and process, and that's where all the mystery and interest lies.

In future posts, I'll delve into some of the limitations I've observed in how we pass information among people, companies, and economies, and from one generation to the next. For now, I recommend picking up Hidalgo's book, and I hope to hear from you about some of the ways you've found to help grow information around you in more efficient ways.

My first podcast appearance

A few months ago David Perell emailed and asked if I'd like to be on his podcast, The North Star. He mentioned some of the other people he'd had on, so many of whom I admire, and I thought he had emailed the wrong person. But no, he had done his research and knew a lot about my preoccupations and obsessions.

David passed through San Francisco before this past holiday break on his way to Australia, and we chatted at my apartment for a night, much of which is captured in this podcast. I've had a small handful of invitations to be on podcasts, but it was always trickier when I was working given the varied concerns of corporate PR, and I always wanted to do my first podcast in person rather than remotely.

If you enjoy my work here at my site, perhaps you'd be game to sample me in another medium? We cover some of the topics I've covered here before, but we spend a bit more time on my personal history, anchoring those topics along my professional timeline. I had a lot of fun and hope to perhaps drop in on another podcast or two in 2018.

Beware the lessons of growing up Galapagos

In All the old rules about movie stardom are broken, part of Slate's 2017 Movie Club year end review, Amy Nicholson writes:

Lugging my $10 masterpiece back to the hotel, I thought about how most of the famous faces who represent the movies have been dead for 50 years. Marilyn’s smile sells shot glasses, clocks, calendars, posters, and shirts in stores from Sunset Boulevard to Buenos Aires, Tijuana to Taiwan. What modern actor could earn a seat at her table? The biggest stars of my lifetime—Julia Roberts, Brad Pitt, Nicolas Cage, Sandra Bullock—never graduated past magazine covers to souvenir magnets.

If Hollywood played by its old rules, I, Tonya’s Margot Robbie and Call Me by Your Name’s Armie Hammer should be huge stars. They’re funny, smart, self-aware, charismatic, and freakishly attractive. Yet, they feel like underdogs, and I’m trying to figure out why. Robbie has made intelligent choices. Her scene-stealing introduction as Leonardo DiCaprio’s trophy wife in Wolf of Wall Street. Her classic romantic caper with Will Smith in the underseen trifle, Focus. She even survived Suicide Squad with her dignity intact. In I, Tonya, she can’t outskate being miscast as Tonya Harding, but bless her heart for trying. As for Hammer, Kameron, your review of Call Me by Your Name called him, “royally handsome,” which seems right. He’s as ridiculously perfect as a cartoon prince, and I loved how Luca Guadagnino made a joke of how outlandish the 6-foot-5 blond looks in the Italian countryside. Whether he’s unfurling himself from a tiny Fiat or stopping conversation with his gangly dance moves, he can’t blend in—and good on him and Guadagnino for embracing it.

But even if Robbie and Hammer each claim an Oscar nomination this year, I suspect they’ll stay stalled out in this strange time when great actors are simply supporting players in a superhero franchise. I’m fascinated by Robbie and Hammer because they’re like fossils of some alpha carnivore that should have thrived. Does anyone else feel like the tectonic plates under Hollywood have shifted and we’re now staring at the evidence that everything we know is extinct? It’s not just that the old rules have changed—no new rules have replaced them. No one seems to know what works.

Nicholson goes on to cite Will Smith, who once had huge hits seemingly with every movie he made and who is now on a long cold streak.

I'm wary of all conclusions drawn about media in the scarcity age, including the idea that people went to see movies because of movie stars. It's not that Will Smith isn't charismatic. He is. But I suspect Will Smith was in a lot of hits in the age of scarcity in large part because there weren't a lot of other entertainment options vying for people's attention when Independence Day or something of its ilk came out, like clockwork, to launch the summer blockbuster season.

The same goes for the general idea that any one star was ever the chief engine for a film's box office. If the idea that people go see a movie just to see any one star was never actually true, we can stop holding the modern generation of movie stars to an impossible standard.

The same mistake, I think, is being made about declining NFL ratings. Owners blame players kneeling for the national anthem, but here's my theory: in an age of infinite content, NFL games measure up poorly as entertainment, especially for a generation that grew up with smartphones and no cable TV and thus little exposure to American football. If I weren't in two fantasy football leagues with friends and coworkers, I would not have watched a single game this season, and that's a Leftovers-scale flash-forward twist for a kid who once recorded the Super Bowl Shuffle to cassette tape off a local radio broadcast just to practice the lyrics.

If you disregard any historical romantic notions and examine the typical NFL football game, it is mostly dead time (if you watch a cut-down version of a game using Sunday Ticket, only about 30 minutes of a 3 to 3.5 hr game involves actual game action), with the majority of plays involving action of only incremental consequence, whose skill and strategy on display are opaque to most viewers and which are explained poorly by a bunch of middle-aged white men who know little about how to sell the romance of the game to a football neophyte. Several times each week, you might see a player hit so hard that they lie on the ground motionless, or with their hands quivering, foreshadowing a lifetime of pain, memory loss, and depression brought on by irreversible brain damage. If you tried to pitch that show concept just on its structural merits you'd be laughed out of the room in Hollywood.

Cultural products must regenerate themselves for each successive age and generation or risk becoming what opera and the symphony are today. I had season tickets to the LA Phil when I lived in Los Angeles, and I brought a friend to the season opener one year. A reporter actually stopped us as we walked out to interview us about why we were there, so mysterious was it to see two attendees who weren't old enough to have been contemporaries of the composer of that night's music (Mahler).

Yes, football has been around for decades, but most of those were in an age of entertainment scarcity. During that time the NFL capitalized on being the only game in town on Sundays, capturing an audience that passed the game and its liturgies on to their children. Football resembles a religion or any other cultural social network; humans being tribal creatures, we find products that satisfy that need, and what are professional sports leagues but alliances of clans banding together for the network effects of ritual tribal warfare?

Because of its long incubation in an era of low entertainment competition, the NFL built up massive distribution power and enormous financial coffers. That it is a cultural product transmitted by one generation to the next through multiple channels means it's not entirely fair to analyze it independent of its history; cultural products have some path dependence.

Nevertheless, even if you grant it all its tailwinds, I don't trust a bunch of rich old white male owners who grew up in such favorable monopolistic conditions to both understand and adapt in time to rescue the NFL from continued decline in cultural relevance. They are like tortoises who grew up in the Galapagos Islands, shielded on all sides from predators by the ocean, who one day see the moat dry up, connecting them all of a sudden to other continents where an infinite variety of fast-moving predators dwell. I'm not sure the average NFL owner could unlock an iPhone X, let alone understand the way its product moves through modern cultural highways.

Other major sports leagues are in the same boat, though most aren't as oblivious as the NFL. The NBA has an open-minded commissioner in Adam Silver and some younger owners who made their money in technology and at least have one foot in modernity. As a sport, the NBA has some structural advantages over other sports (for example, the league has fewer players, their faces are visible during play, and many are active on social media in an authentic way), but the league also helps itself by allowing highlights of games to be clipped and shared on social media and by encouraging its players to cultivate public personas that act as additional narrative fodder for audiences.

I remember sitting in a meeting with some NFL representatives as they outlined a long list of their restrictions for how their televised games could be remixed and shared by fans on social media. Basically, they wanted almost none of it and would pursue take-downs through all the major social media companies.

Make no mistake, one possible successful strategy in this age of abundant media is to double down on scarcity. It's often the optimal strategy for extracting maximum revenue from a motivated customer segment. Taylor Swift and other such unicorns can release their albums only on CD for a window, maximizing financial return from their superfans before releasing the album on streaming services, straight from the old media windowing playbook.

However, you'd better be damn sure your product is unique and compelling to dial up that tactic because the far greater risk in the age of abundance is that you put up walls around your content and set up a bouncer at the door and no one shows up because there are dozens of free clubs all over town with no cover charge.

Sports have long had one massive advantage in production costs over scripted entertainment like TV and movies, and that is that their narrative engine is a random number generator (RNG). If you want to produce the next hot streaming series, you have to pay millions of dollars to showrunners and writers to generate some narrative.

In sports, the narrative is embedded in the rules of the game. Some players will compete, and someone will win. It's the same script replayed every night, but the RNG produces infinite variations that then spin off infinite variations of the same narratives for why a game turned out one way or the other, just as someone has to make up a story every day to explain why the stock market went up or down. At last check, RNG hadn't found representation with CAA or WME or UTA and thus its services remain free.

Unfortunately for major sports, this advantage is now a weakness as sports narrative is much more brittle than its entertainment counterparts. Narrative is a hedge against disaggregation and unbundling, and that is a critical moat in this age of social media and the internet.

One way to measure entertainment value on this dimension is to ask whether you can read a summary of a narrative and enjoy it almost as much as consuming the original narrative in its native medium. My classic test of this is for movies and TV shows. If you can enjoy a movie just as much by reading the Wikipedia plot summary as by watching it, or if you can enjoy a TV show almost as much by reading a recap as by bingeing it on your sofa, then it wasn't really that great a movie or TV show to begin with.

Instead of watching the entire last season of Game of Thrones when it returns in 2019, I offer you the alternative of just reading textual recaps to your heart's content online. Is that as enticing an alternative as actually watching all six or seven episodes? You'll ingest all the plot details either way, but for the vast majority of fans this would be a gut-wrenching downgrade.

My other test of narrative value is a variant of the previous compression test. Can you enjoy something just as much by watching only a tiny fraction of the best moments? If so, the narrative is brittle. If you can watch just the last scene of a movie and get most or all the pleasure of watching the whole thing, the narrative didn't earn your company for the journey.

Much more of sports fails this second test than many sports fans realize. I can watch highlights of most games on ESPN or HouseofHighlights on Instagram and extract most of the entertainment marrow and cultural capital of knowing what happened without having to sit through three hours of mostly commercials and dead time. That a game can be unbundled so easily into individual plays and retain most of its value to me might be seen as a good thing in the age of social media, but it's not ideal for the sports leagues if those excerpts are mostly viewed outside paywalls.

This is the bind for major sports leagues. On the one hand, you can try to keep all your content inside the paywall. On the other hand, doing so probably means you continue hemorrhaging cultural share. This is the eternal dilemma for all media companies in the age of infinite content.

Two nights ago, I watched a clip of multiple angles of Tua Tagovailoa ripping a laser beam of a pass to win the National Championship for Alabama. I didn't watch it live, or on ESPN. I watched it on HouseofHighlights on Instagram, where, instead of some anchor on Sportscenter telling me what I could see with my own eyes, the video spins around after a moment to reveal the stunned face of a fan who witnessed the pass live. Reaction videos are a new sort of genre in which a person in the video acts as the emoji reaction caption from within the video itself, speaking a visual language most young people of the YouTube/Snapchat generation are already fluent in but which traditional media doesn't notice, let alone grok.

This disaggregation problem extends to ESPN, currently still the 400-pound gorilla in the sports media jungle (reminder: there are no 800-pound gorillas). The network suspended Jemele Hill for tweeting something negative about Trump, using the same playbook as the NFL, which threatened players with suspension for kneeling for the national anthem. Both believed these actions on the part of their talent were harming the value of their product.

The irony is that if both ESPN and the NFL had let these things play out naturally, I suspect at worst it would have been neutral, and at best it might have increased their ratings. For the NFL, the ties to modern movements for social justice might have kept the league and its games in the national conversation and made it tangentially relevant to the next generation. The most culturally relevant bit of Sportscenter today may just be the Sportscenter Top 10, as athletes who make a stunning play routinely tell reporters they are excited to see if they'll be featured on that evening's roundup of the top 10 plays.

Unfortunately, many athletes already see an appearance on HouseofHighlights as the social media alternative to appearing in the Sportscenter Top 10. If you follow top athletes on Instagram, you can see which of them favorite posts on HouseofHighlights. LeBron James routinely favorites posts, as do many other stars. Since many of those athletes follow each other on Instagram, that feature of Instagram produces common knowledge. It's not just that Donovan Mitchell knows that LeBron James favorited a HouseofHighlights clip of him dunking, it's that Mitchell knows that James knows that Mitchell knows, and so on.

For ESPN, hewing to the idea that dispassionately presented highlights and respectfully broadcast games are the key to its value is risky. Not that the network hasn't generated a ton of wealth from doing so, and not that TV broadcast rights to major sports aren't still extremely valuable, but those are much more fixed commodities, available to the highest bidder, and their value is close to its peak, if not past it. This can't be a complete surprise within the four walls of their corporate offices, given how much salary and air time they devote to blowhards like Stephen A. Smith and Skip Bayless, but their hesitance to lean into cultivating more original voices will haunt them in the long run. The average caption on an Instagram clip of a major sports league highlight is about twice as likely to strike a young person as fresh and contextually humorous as any amount of generic sportscaster hooey spouted on ESPN.

This vulnerability extends to their online presences. I still visit ESPN on the web and on my mobile devices to get my sports news roundup each day, but sometime in the past few years, the designs of all these presences shifted dramatically. Gone is the hierarchical layout with different-sized headlines and groupings of stories. In its place is a long center gutter of updates from a variety of sports leagues, in modern news feed style.

One can see why they went this way: it made ESPN more current, allowing them to push the latest stories to the top of the page to compete with the more current updates people get from Twitter and other social media sites. On a smartphone in particular, with its limited screen space, it's not easy to block content into multiple sections on one page.

However, the moment you copy someone else's design, you've shifted the terms of the debate in their favor. In a previous era, ESPN's visually distinct information hierarchy set it up as the authority on which stories mattered. In the new design, what matters skews toward whatever happened last. It's all flow.

To some extent, in our hyper-personalized world, the era of any media entity deciding which stories matter more than others was always going to decline from what might be seen now as a temporary heyday. I care more about Chicago sports teams and Stanford given my background, so having those elements given more prominence was a notable improvement in the site's newly personalized design. Still, what is lost is that sense of authority, that ESPN sets the terms of the debate. Humans remain social animals, and we take cues about what matters from each other, including from our media entities. ESPN has ceded more and more of the work of determining our sports Schelling points to other entities.

While this may sound grim, the major sports, their respective leagues, and ESPN all have a fairly solid near-term window. For one thing, sports is still the highest-volume, most popular real-time entertainment. As such, it remains a linchpin of many entertainment packages, including cable bundles, and so we'll see various media companies pouring money into it until it can't hold things together anymore. We may even see prices bid higher for some time, as often happens with assets being milked in their last, fleeting window of cultural scarcity.

A second and less discussed factor is that most young tech CEOs don't know the first thing about sports. They, like a sizable part of Silicon Valley (the group that tweets #sportsball whenever Twitter is inundated with reactions to some notable sports event), grew up with other interests. Without that intuitive sense of sports' place in culture, they aren't as attuned to the opportunities in the category.

This provides the leagues opportunities to swindle the tech companies for a while longer, an example being the rights to stream Thursday Night Football, which a series of tech companies from Yahoo to Twitter to Amazon have (probably) overpaid for the last few seasons. As Patrick Stewart said in L.A. Story, "You think with a statement like this you can have the duck?!" The chef says, "He can have the chicken!" Thursday Night Football is zee chicken of the NFL broadcast portfolio, but the restaurant is still called L'Idiot.

This happened to tech companies when they tried to add film and television to their portfolios, too. They routinely paid fortunes for the rights to back seasons of shows that are no longer relevant. When I was at Hulu, I could only shake my head when I heard the asking price for all the back seasons of Seinfeld. Years later, long after I'd left, Hulu paid multiples of that. The cultural decay curve for content in this age of abundance is accelerating by the day, and there is no equivalent of Botox to ward it off.

Given market feedback, however, such temporary arbitrage never lasts long. The days of the NFL strong-arming its partners to overpay for the most meager of rights are coming to an end. The thing about setting up a moat around your content is that the moment your cultural value crosses its peak, the moat becomes a set of prison bars. The flywheel loop can turn just as furiously counter-clockwise as clockwise.

And one of these days, a tech company will look at ESPN's homepage and notice how much it looks like its own. If it just put a bit more structure around it, could it satisfy that sports itch for a captive audience that already checks in multiple times a day?

It seems implausible today, but look at what happened in film and television. For the longest time, so many tech companies were guilty of exactly what Hollywood accused them of: not understanding how film and television are made and marketed, how that industry creates demand for its product. Like all engineering-led cultures, Silicon Valley suspected Hollywood of not being data-driven enough, and many suspected that upstream process failures were responsible for failed releases. Half a film's budget is spent on prints and marketing? What a waste! (Engineers despise marketing.)

Forget that most of these people in tech had never been on a film set, or sat inside a writers' room, or seen the volumes of market research done before any film's release. It's all just content; let's just crowdsource some alternatives. Or, if we produce some premium content, what's needed is earlier crowd-sourced feedback. Hundreds of millions of dollars were wasted before Silicon Valley realized it didn't know what it was doing.

Fortunately, all it cost them was some money and some time, something most of the incumbents have a surplus of. Now they write checks to creatives in Hollywood and leave them alone to do what they do very well already. Machine learning improves with data even when the algorithms are off, and so do most tech companies.

I am a lifelong lover of media in all its forms, and sports in particular was central to how I assimilated into America. It has long served as cultural connective tissue between me and friends, family, and strangers. But if I had an easy way to short all the major sports leagues over the next decade, I would. Nostalgia serves many purposes, but its most dangerous one is wrapping us in a memory of a time when we were still relevant.

Drawing invisible boundaries in conversational interfaces

One of the things anyone who has worked on textual conversation interfaces, like chatbots, will tell you is that the challenge is dealing with the long tail of crazy things people will type. People love to abuse chatbots. Something about text-based conversation UIs invites Turing tests. Every game player remembers the moment they first abandoned their assigned mission in Grand Theft Auto to start driving around the city, crashing into cars and running over pedestrians, just to exercise their freedom and explore what happens when they escape the plot mechanic tree.

However, this type of user roaming or trolling happens much less with voice interfaces. Sure, the first time a user tries Siri or Alexa or whatever Google's voice assistant is called (it really needs a name, IMO, to avoid inheriting everything the word "Google" stands for), they may ask something ridiculous or snarky. However, that type of rogue input tends to trail off quickly, whereas it doesn't in textual conversation UIs.

I suspect some form of the uncanny valley is at work, and I blame the affordances of text interfaces. Most text conversation UIs are visually indistinguishable from a messaging UI used to communicate primarily with other human beings, and thus they invite the user to probe their intelligence boundaries. Unfortunately, the seamless polish of the UI isn't matched by the capabilities of today's chatbots, most of which are just dumb trees.

On the other hand, none of the voice assistants to date comes close to replicating the natural way a human speaks. These voice assistants may have more human timbre, but the stiff elocution, the mispronunciations, and the frequent mistakes in comprehension all quickly inform the user that what they are dealing with is something of quite limited intelligence. The affordances draw palpable, if invisible, boundaries in the user's mind, and they quickly realize the low ROI on trying anything other than what is likely to be in the hard-coded response tree. In fact, I'd argue that the small jokes these UIs insert, like canned answers to random questions such as "what is the meaning of life?", may actually set these assistants up to disappoint people even more by encouraging more such questions the assistant isn't ready to answer (I found it amusing when Alexa answered my question, "Is Jon Snow dead?" two seasons ago, but was then disappointed when it still gave the same stale answer a season later, after the show had already resolved the question months before).

The same invisible boundaries assert themselves the moment you speak to one of those automated voice customer service menus. You immediately know to address them as if you're speaking to an idiot who is also hard of hearing, and the goal is to complete the interaction as quickly as possible, or to divert to a human customer service rep at the earliest possible moment.

[I read on Twitter that one shortcut to get to a human when speaking to an automated voice response system is to curse, that the use of profanity is often a built-in trigger to turn you over to an operator. This is both an amusing and clever design but also feels like some odd admission of guilt on the part of the system designer.]

Given how closely textual UIs resemble human messaging, it is not easy to lower the user's expectations. However, given where the technology is for now, it may be necessary to erect such guardrails. Perhaps the assistant's replies should be set in some fixed-width typeface, to distinguish it from a human. Maybe some mechanical sound effects could convey the robotic nature of the machine writing the words, and perhaps the syntax should be less human in some ways, to lower expectations.

One of the huge problems with voice assistants, after all, is that their failures, when they occur, feel catastrophic from the user's perspective. I may try a search on Google that doesn't return the results I want, but at least something comes back, and I'm usually sympathetic to the idea that what I want may not exist in an easily queryable form on the internet. Voice assistant errors occur much less frequently than before, but when they do, it feels as if you're speaking to a careless design, and I mean careless in every sense of the word, from poorly crafted (why didn't the developer account for this obvious query?) to uncaring (as in emotionally cold).

Couples go to counseling over feeling as if they aren't being heard by each other. Some technologies can get away with promising more than they deliver, but when it comes to tech built around conversation, with all the expectations that very human mode of communication has accrued over the years, it's a dangerous game. In a map of the human brain, the neighborhoods of "you don't understand" and "you don't care" share a few exit ramps.

10 more browser tabs

Still trying to clear out browser tabs, though it's going about as well as my brief flirtation with inbox zero. At some point, I just decided inbox zero was a waste of time, solving a problem that didn't exist, but browser tab proliferation is a problem I'm much more complicit in.

1. Why the coming-of-age narrative is a conformist lie

From a more sociological perspective, the American self-creation myth is, inherently, a capitalist one. The French philosopher Michel Foucault theorised that meditating and journalling could help to bring a person inside herself by allowing her, at least temporarily, to escape the world and her relationship to it. But the sociologist Paul du Gay, writing on this subject in 1996, argued that few people treat the self as Foucault proposed. Most people, he said, craft outward-looking ‘enterprising selves’ by which they set out to acquire cultural capital in order to move upwards in the world, gain access to certain social circles, certain jobs, and so on. We decorate ourselves and cultivate interests that reflect our social aspirations. In this way, the self becomes the ultimate capitalist machine, a Pierre Bourdieu-esque nightmare that willingly exploits itself.
‘Growing up’ as it is defined today – that is, as entering society, once and for all – might work against what is morally justifiable. If you are a part of a flawed, immoral and unjust society (as one could argue we all are) then to truly mature is to see this as a problem and to act on it – not to reaffirm it by becoming a part of it. Classically, most coming-of-age tales follow white, male protagonists because their integration into society is expected and largely unproblematic. Social integration for racial, sexual and gender minorities is a more difficult process, not least because minorities define themselves against the norm: they don’t ‘find themselves’ and integrate into the social context in which they live. A traditional coming-of-age story featuring a queer, black girl will fail on its own terms; for how would her discovering her identity allow her to enter a society that insists on marginalising identities like hers? This might seem obvious, but it very starkly underscores the folly of insisting on seeing social integration as the young person’s top priority. Life is a wave of events. As such, you don’t come of age; you just age. Adulthood, if one must define it, is only a function of time, in which case, to come of age is merely to live long enough to do so.

I've written about this before, but almost always, the worst type of film festival movie is about a young white male protagonist coming of age. Often he's quiet, introverted, but he has a sensitive soul. As my first year film school professor said, these protagonists are inert, but they just "feel things." Think Wes Bentley in American Beauty filming a plastic bag dancing in the wind for fifteen minutes with a camcorder, then showing it to a girl as if it's Citizen Kane.

If they have any scars or wounds, they are compensated for with extreme gifts. Think Ansel Elgort in Baby Driver; cursed with tinnitus since childhood, he listens to music on a retro iPod (let's squeeze some nostalgic product placement in here, what the hell, we're also going to give him a deaf black foster father to stack the moral cards in his favor, might as well go all the way) and is, that's right, the best getaway driver in the business.

Despite having about as much personality as a damp elephant turd, these protagonists have beautiful souls that are both recognized and extracted by a trope this genre of film invented just for the purpose: the manic pixie dream girl.

[Nathan Rabin, who coined the term manic pixie dream girl, has since disavowed it as sometimes misogynist, and it can be applied too broadly, like a hammer seeking nails, but that doesn't undo the reality that largely white male writing blocs, from guilds to writers' rooms, aren't great at writing women or people of color with deep inner lives.]

This is tangential to the broader point, that the coming-of-age story as a genre is, in and of itself, a lie. It reminds me of the distinction between Finite and Infinite Games, the classic book from James Carse. The Hollywood film has always promised a finite game, and thus it's a story that must have an ending. Coming-of-age is an infinite game, or at least until death, and so we should all be skeptical of its close-ended narrative.

(h/t Michael Dempsey)

2. Finite and Infinite Games and The Confederate

This isn't a browser tab, really, but while I'm on the topic of Carse's Finite and Infinite Games, a book which provides a framework with which so much of the world can be bifurcated, and while I'm thinking about the white male dominated Hollywood profession, I can't help but think of the TV project The Confederate, by the showrunners of Game of Thrones.

"White people” is seen by many whites as a pejorative because it lowers them to a racial class whereas before they were simply the default. They are not accustomed to having spent their entire lives being named in almost every piece of culture as a race, the way women, people of color, and the union of the two are, every single day, by society and culture.

The All Lives Matter retort to Black Lives Matter pretends that we're all playing the same finite game, when almost everyone losing that game knows it is not true. Blacks do not feel like they "won" the Civil War; every day they live with the consequences and the shadow of America's founding racism, every day they continue to play a game that is rigged against them. That is why Ta-Nehisi Coates writes that the premise of The Confederate is a lie, and that only the victors of this finite game of America would want to relitigate the Civil War in some alt-history television show for HBO. It's as if a New England Patriots fan asked an Atlanta Falcons fan to watch last year's Super Bowl again, with Armie Hammer playing Tom Brady.

“Give us your poor, your huddled” is a promise that the United States is an infinite game, an experiment that struggles constantly towards bettering itself, evening the playing field, such that even someone starting poor and huddled might one day make a better life and escape their beginning state. That is why Stephen Miller and other white nationalists spitting on that inscription on the Statue of Liberty is so offensive, so dangerous.

On society, Carse writes:

The prizes won by its citizens can be protected only if the society as a whole remains powerful in relation to other societies. Those who desire the permanence of their prizes will work to sustain the permanence of the whole. Patriotism in one or several of its many forms (chauvinism, racism, sexism, nationalism, regionalism) is an ingredient in all societal play. 
Because power is inherently patriotic, it is characteristic of finite players to seek a growth of power in a society as a way of increasing the power of a society.

Colin Kaepernick refusing to stand for the National Anthem is seen as unpatriotic by many in America, including the wealthy white owners of such teams, which is not surprising, as racism is a form of patriotism, per Carse, and part and parcel of American society when defined as a finite game.

Donald Trump and his large adult sons are proof of just how powerful the inheritance of title and money are in America, and the irony that they are elected by those who feel that successive rounds of finite games have started to be rigged against them is not lost on anyone, not even, I suspect, them. One could argue they need to take a lesson from those oppressed for far longer as to how a turn to nihilism works out in such situations.

Those attacking Affirmative Action want to close off the American experiment and turn it into a series of supposedly level finite games because they have accumulated a healthy lead in this game and wish to preserve it in every form.

White nationalists like Trump all treat America as not just a finite game, but a zero sum finite game. The idea of immigrants being additive to America, to its potential, its output, is to treat America as an infinite game, open-ended. The truth lies, as usual, between the poles, but closer to the latter.

Beware the prophet who comes with stories of zero-sum games, or as Jim Collins once wrote, beware the "tyranny of the or." One of my definitions of leadership is the ability to turn zero-sum games into positive-sum ones.

3. Curb Your Enthusiasm is Running Out of People to Offend

Speaking of fatigue with white male protagonists:

But if Larry David’s casual cruelty mirrors the times more than ever, the show might still fit awkwardly in the current moment. Watching the première of Season 9 on Sunday night, I kept thinking of a popular line from George Costanza, David’s avatar on “Seinfeld”: “You know, we’re living in a society!” Larry, in this first episode of the season, seems to have abandoned society altogether. In the opening shot, the camera sails over a tony swath of L.A., with no people and only a few cars visible amid the manicured lawns and terra-cotta roofs. It descends on Larry’s palatial, ivy-walled house, where he showers alone, singing Mary Poppins’s “A Spoonful of Sugar” and bludgeoning a bottle of soap. (Its dispenser pump is broken—grounds for execution under the David regime.) He’s the master of his domain, yes, but only by default: no one else is around.
“Curb” has always felt insulated, and a lot of its best jokes are borne of the fact that Larry’s immense wealth has warped his world view over the years. (On the most recent season he had no compunction about spending a princely sum on Girl Scout Cookies, only to rescind the order out of spite.) But the beginning of Season 9 offers new degrees of isolation. Like a tech bro ensconced in a hoodie and headphones, Larry seems to have removed himself almost entirely from public life. Both “Curb” and “Seinfeld” like to press the limits of etiquette and social mores, but the latter often tested these on subway cars and buses, in parks or on the street. Much of “Curb,” by contrast, unfolds in a faceless Los Angeles of air-conditioned mansions, organic restaurants, and schmoozy fund-raisers, a long chain of private spaces. The only time Larry encounters a true stranger, it’s in the liminal zone between his car and the lobby of Jeff’s office. She’s a barber on her way to see Jeff at work—even haircuts happen behind closed doors now.

Groundhog Day, one of the great movies, perhaps my favorite Christmas movie of all time, has long been regarded as a great Buddhist parable:

Groundhog Day is a movie about a bad-enough man—selfish, vain, and insecure—who becomes wise and good through timeless recurrence.

If that is so, then Curb Your Enthusiasm is its dark doppelganger, a parable about the dark secret at the heart of American society: that no person, no matter how selfish, vain, and petty, can suffer the downfall necessary to achieve enlightenment, if he is white and a man.

In this case, he is a successful white man in Hollywood, Larry David, and each episode of Curb Your Enthusiasm is his own personal Groundhog Day. Whereas Bill Murray wakes up each morning to Sonny and Cher, trapped in Punxsutawney, Pennsylvania, around small-town people he dislikes, in a job he feels superior to, Larry David wakes up each morning in his Los Angeles mansion, with rewards seemingly proportionate to the depths of his pettiness and ill humor. Every episode, he treats the friends and family around him with thinly disguised disdain, and yet the next episode, he wakes up in the mansion again.

Whereas Bill Murray eventually realizes the way to break out of his loop is to use it for self-improvement, Larry David seems to be striving to fall from grace by acting increasingly terrible and yet finds himself back in the gentle embrace of his high thread count sheets every morning.

Curb Your Enthusiasm has its moments of brilliance in its minute dissection of the sometimes illogical and perhaps fragile bonds of societal goodwill, and its episode structure is often exceedingly clever, but I can't help watching it now as nothing more than an acerbic piece of performance art, with all the self-absorption that implies.

Larry David recently complained about the concept of first-world problems, which is humorous, as it's difficult to think of any single person who has done as precise a job educating the world on what they are.

[What about Harvey Weinstein and Louis C.K., you might ask? Aren't they Hollywood royalty toppled from lofty, seemingly untouchable perches? The story of how those happened will be the subject of another post, because the mechanics are so illuminating.]

4. Nathan for You

I am through season 2 of Nathan for You, a Comedy Central show that just wrapped its fourth and final season. We have devalued the term LOL with overuse, but no show has made me literally laugh out loud, by myself on the sofa, as much as this one has, though I've grinned in pleasure at certain precise bits of stylistic parody in American Vandal.

Nathan Fielder plays a comedic version of himself. In the opening credits, he proclaims:

My name is Nathan Fielder, and I graduated from one of Canada's top business schools with really good grades [NOTE: as he says this, we see a pan over his transcript, showing largely B's and C's]. Now I'm using my knowledge to help struggling small business owners make it in this competitive world.

If you cringed while watching a show like Borat or Ali G, if you winced a bit when one of the correspondents on The Daily Show went to interview some stooge, you might believe Nathan for You isn't, well, for you. However, the show continues to surprise me.

For one thing, it's a deeply useful reminder of how difficult it is for physical retailers, especially mom-and-pop entrepreneurs, to generate foot traffic. That they go along with Fielder's schemes is almost tragic, but all the more instructive.

For another, while almost every entrepreneur is the straight person to Fielder's clown, I find myself heartened by how rarely one of them just turns him away outright. You can see the struggle on each of their faces, as he presents his idea and then stares at them for an uncomfortably long silence, waiting for them to respond. He never breaks character. Should they just laugh at him, or throw him out in disgust? It almost never happens, though one private investigator does chastise Fielder for being a complete loser.

On Curb Your Enthusiasm, Larry David's friends openly call him out for his misanthropy, yet they never abandon him. On Nathan for You, small business owners almost never adopt Fielder's ideas at the end of the trial. However, they almost never call him out as ridiculous, either. Instead, they try the idea with a healthy dose of good nature at least once, or at least long enough to capture an episode's worth of material.

In this age of people screaming at each other over social media, I found this reminder of the inherent decency of people in face-to-face situations comforting, almost reassuring. Sure, some people are unpleasant both online and in person, and some people are pleasant in person and white supremacists in private.

But some people try to see the best in each other, give others the benefit of the doubt, and on such bonds a civil society is maintained. That this piece of high-concept art could not fence in the humanity and real emotion of all the people participating, not even that of Fielder, is a bit of pleasure in this age of eye-rolling cynicism.

[Of course, these small business owners are aware a camera is on them, so the Heisenberg principle of reality television applies. That a show like this, which depends on its subjects not knowing about it, lasted four full seasons is a good reminder of how little-watched most cultural products are in this age of infinite content.]

BONUS CONTENT NO ONE ASKED FOR: Here is my Nathan for You idea: you know how headline stand-up comedians don't come on stage to perform until several lesser-known and usually much lousier comics are trotted out to warm up the crowd? How, if you attend the live studio taping of a late-night talk show like The Daily Show or The Tonight Show, some cheesy comic comes out beforehand to get your laugh muscles loose, your vocal cords primed? And when the headliner finally arrives, it comes as sweet relief?

What if there were an online dating service that provided such a warm-up buffoon for you? That is, before you meet your actual date, the service sends in a stand-in who is dull, awkward, a turn-off in every possible way. Then, a few minutes into what seems to be a disastrous date, you suddenly show up and rescue the proceedings?

It sounds ridiculous, but this is just the sort of idea that Nathan for You would seem to go for. I haven't watched seasons 3 and 4 yet, so if he does end up trying this idea in one of those later episodes, please don't spoil it for me. I won't even be mad that my idea was not an original one, I'll be so happy to see actual footage of it in the field.

5. The 2.00:1 aspect ratio is everywhere

I first read the case for 2.00:1 as an aspect ratio when legendary cinematographer Vittorio Storaro advocated for it several years ago. He anticipated a world in which most movies would live longer on screens at home than in movie theaters, and 2.00:1, or Univisium, sits roughly halfway between the typical 16:9 HDTV aspect ratio (about 1.78:1) and Panavision's 2.35:1.

So many movies and shows use 2.00:1 now, and I really prefer it to 16:9 for most work.
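As a quick sanity check on that "halfway" claim, the arithmetic is simple (this is just my own back-of-the-envelope check, not Storaro's actual reasoning):

```python
# Aspect ratios expressed as width / height
hdtv = 16 / 9        # the standard HDTV ratio, about 1.78:1
panavision = 2.35    # anamorphic widescreen, 2.35:1

# The arithmetic midpoint lands close to Univisium's 2.00:1
midpoint = (hdtv + panavision) / 2
print(f"Midpoint: {midpoint:.2f}:1")  # Midpoint: 2.06:1
```

So 2.00:1 is actually a hair closer to 16:9 than the true midpoint, a slightly conservative compromise in favor of the home screen.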

6. Tuning AIs through captchas

Most everyone has probably encountered the newly popular captcha that displays a grid of photos and asks you to identify which ones contain a storefront. I just encountered it recently signing up for HQ Trivia. This breed of captcha succeeds the wave that showed photos of short strings of text or numbers and asked you to type what you saw, helping to train AIs learning to read them. There are variants of the storefront captcha: some ask you to identify vehicles, others street signs, and the speculation is that Google uses these to train the "vision" of its self-driving cars.

AI feels like magic when it works, but underrated is the slow slog of taking an AI from stupid to competent. It's no different than training a human. In the meantime, I'm looking forward to being presented with the captcha that shows two photos, one of a really obese man, the other of five school children, with this question above them: "If you had to run over and kill the people in one of these photos, which would you choose?"

7. It's Mikaela Shiffrin profile season, with this one in Outside and this in the New Yorker

I read Elizabeth Weil's profile of Shiffrin in Outside first:

But the naps: Mikaela not only loves them, she’s fiercely committed to them. Recovery is the most important part of training! And sleep is the most important part of recovery! And to be a champion, you need a steadfast loyalty to even the tiniest and most mundane points. Mikaela will nap on the side of the hill. She will nap at the start of the race. She will wake up in the morning, she tells me after the gym, at her house, while eating some pre-nap pasta, “and the first thought I’ll have is: I cannot wait for my nap today. I don’t care what else happens. I can’t wait to get back in bed.”
Mikaela also will not stay up late, and sometimes she won’t do things in the after­noon, and occasionally this leads to more people flipping out. Most of the time, she trains apart from the rest of the U.S. Ski Team and lives at home with her parents in Vail (during the nine weeks a year she’s not traveling). In the summers, she spends a few weeks in Park City, Utah, training with her teammates at the U.S. Ski and Snowboard Center of Excellence. The dynamic there is, uh, complicated. “Some sports,” Mikaela says, “you see some athletes just walking around the gym, not really doing anything, eating food. They’re first to the lunchroom, never lifting weights.”

By chance, I happened to be reading The Little Book of Talent: 52 Tips for Improving Your Skills by Daniel Coyle, and had just read tips that sounded very similar to what was mentioned here.

More echoes of Coyle's book in The New Yorker profile:

My presumption was that her excellence was innate. One sometimes thinks of prodigies as embodiments of peculiar genius, uncorrupted by convention, impossible to replicate or reëngineer. But this is not the case with Shiffrin. She’s as stark an example of nurture over nature, of work over talent, as anyone in the world of sports. Her parents committed early on to an incremental process, and clung stubbornly to it. And so Shiffrin became something besides a World Cup hot shot and a quadrennial idol. She became a case study. Most parents, unwittingly or not, present their way of raising kids as the best way, even when the results are mixed, as such results usually are. The Shiffrins are not shy about projecting their example onto the world, but it’s hard to argue with their findings. “The kids with raw athletic talent rarely make it,” Jeff Shiffrin, Mikaela’s father, told me. “What was it Churchill said? Kites fly higher against a headwind.”

So it wasn't a real surprise to finally read this:

The Shiffrins were disciples of the ten-thousand-hours concept; the 2009 Daniel Coyle book “The Talent Code” was scripture. They studied the training methods of the Austrians, Alpine skiing’s priesthood. The Shiffrins wanted to wring as much training as possible out of every minute of the day and every vertical foot of the course. They favored deliberate practice over competition. They considered race days an onerous waste: all the travel, the waiting around, and the emotional stress for two quick runs. They insisted that Shiffrin practice honing her turns even when just skiing from the bottom of the racecourse to the chairlift. Most racers bomb straight down, their nonchalance a badge of honor.

Coyle's book, which I love for its succinct style (each tip averages a paragraph or two; it could almost be a tweetstorm if Twitter had slightly longer character limits), is the book I recommend to all parents who want their kids to be really great at something, and not just sports.

Much of the book is about the importance of practice, and what types of practice are particularly efficient and effective.

Jeff Shiffrin said, “One of the things I learned from the Austrians is: every turn you make, do it right. Don’t get lazy, don’t goof off. Don’t waste any time. If you do, you’ll be retired from racing by the time you get to ten thousand hours.”
“Here’s the thing,” Mikaela told me one day. “You can’t get ten thousand hours of skiing. You spend so much time on the chairlift. My coach did a calculation of how many hours I’ve been on snow. We’d been overestimating. I think we came up with something like eleven total hours of skiing on snow a year. It’s like seven minutes a day. Still, at the age of twenty-two, I’ve probably had more time on snow than most. I always practice, even on the cat tracks or in those interstitial periods. My dad says, ‘Even when you’re just stopping, be sure to do it right, maintaining a good position, with counter-rotational force.’ These are the kinds of things my dad says, and I’m, like, ‘Shut up.’ But if you say it’s seven minutes a day, then consider that thirty seconds that all the others spend just straight-lining from the bottom of the racecourse to the bottom of the lift: I use that part to work on my turns. I’m getting extra minutes. If I don’t, my mom or my coaches will stop me and say something.”

Bill Simmons recently hosted Steve Kerr for a mailbag podcast, and in part one it's fun to hear Kerr tell stories about Michael Jordan. Like so many greats, Jordan understood that the contest is won in the sweat leading up to the contest, and his legendary competitiveness elevated every practice and scrimmage into gladiatorial combat. As Kerr noted, Jordan was single-handedly a cure for complacency for the Bulls.

He famously broke down some teammates with such intensity in practice that they were driven from the league entirely (remember Rodney McCray?). Everyone knows he once punched Steve Kerr and left him with a shiner during a heated practice. The Dream Team scrimmage during the lead-up to the 1992 Olympics, in which the coaches made Michael Jordan one captain and Magic Johnson the other, is perhaps the single sporting event I most wish had taken place in the age of smartphones and social media.

What struck me about the Shiffrin profiles, something Coyle notes about the greats, is how unusually solitary the lives of the great ones are, spent in deliberate practice on their own, apart from teammates. It's obviously amplified in individual sports like tennis and skiing and golf, but even in team sports, the great ones have their own routines. Not only is it lonely at the top, it's often lonely on the way there.

8. The secret tricks hidden inside restaurant menus

Perhaps because I live in the Bay Area, it feels as if the current obsession is with the dark design patterns and effects of social apps. But in the scheme of things, many other fields whose work we interact with daily have many more years of experience designing for human nature. In many ways, the people designing social media have a naive and incomplete view of human nature, but the distribution power of ubiquitous smartphones and network effects has elevated them to the forefront of the conversation.

Take a place like Las Vegas. Its entire existence is testament to the fact that the house always wins, yet it could not exist if it could not convince the next sucker to sit down at the table and see the next hand. The decades of research into how best to part a sucker from his wallet make the volume of research among social media companies look like a joke, even if the latter isn't trivial.

I have a sense that social media companies today are playing a game similar to the one restaurants play with menu design. Every time I sit down at a new restaurant, I love examining the menu and puzzling over all the choices with fellow diners, as if having to sit with me over a meal weren't punishment enough. When the waiter comes and I ask for an overview of the menu, and for recommendations, I'm wondering which dishes the entire experience is meant to nudge me to order.

I'm awaiting the advent of digital and eventually holographic or AR menus to see what experiments follow. When will we have menus that are personalized? Based on what you've enjoyed here and at other restaurants, we think you'll love this dish. When will we see menus that use algorithmic sorting: these are the most-ordered dishes of all time, this week, today? People who ordered this also ordered that? When will we see editorial endorsements? "Pete Wells said of this dish in his NYTimes review..."

Not all movies are worth deep study because not all movies are directed with intent. The same applies to menus, but today, enough menus are put through a deliberate design process that it's usually a worthwhile exercise to put them under the magnifying glass. I would love to read some blog that just analyzes various restaurant menus, so if someone starts one, please let me know.

9. Threat of bots and cheating looms as HQ Trivia reaches new popularity heights

When I first checked out HQ Trivia, an iOS live video streaming trivia competition for cash prizes, the number of concurrent viewers playing, displayed on the upper left of the screen, numbered in the hundreds. Now the most popular of games, which occur twice a day, attract over 250K players. In this age where we've seen empires built on exploiting the efficiencies to be gained from shifting so much of social intimacy to asynchronous channels, it's fun to be reminded of the unique fun of synchronous entertainment.

What intrigues me is not how HQ Trivia will make money. The free-to-play game industry is one of the most savvy when it comes to extracting revenue, and even something like podcasts points the way to monetizing popular media with sponsorships, product placement, etc.

What's far more interesting is where the shoulder of the S-curve lies. Trivia is a game of skill, and with that come two longstanding issues. I've answered, at most, 9 questions in a row, and it takes 12 consecutive right answers to win a share of the cash pot. Like most people, I'll probably never win any cash.

This is an issue Daily Fantasy Sports has faced, and there the word "fantasy" is the operative one. Very soon after they became popular, DFS sites were overrun by sharks submitting hundreds or thousands of lineups with the aid of computer programs, and some of those sharks worked for the companies themselves. The "fantasy" being sold is that the average person has a chance of winning.

As noted above in my comment about Las Vegas, it's not impossible to sell people on that dream. The most beautiful of cons is one the mark willingly participates in. People participate in negative expected value activities all the time, like the lottery and carnival games, and often they're aware they'll lose. Some people just participate for the fun of it, and a free-to-play trivia game costs a player nothing other than some time, even if the expected value is close to zero.

A few people have asked me whether that live player count is real, and I'm actually more intrigued by the idea that it isn't. "Fake it till you make it" is one of the most popular refrains not just of Silicon Valley but of entrepreneurs everywhere. What if HQ Trivia just posted a phony live player count of 1 million tomorrow? Would their growth accelerate even more than it has recently? What about 10 million? When does the marginal return of every additional player in that count go negative, because people feel there is so much competition it's not worth playing? Or is the promise of possibly winning money beside the point? What if the pot scaled commensurately with the number of players; would it become like the lottery? Massive pots but long odds?
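To make the expected-value point concrete, here is a toy model of a 12-question game. Every number in it is my own illustrative assumption, not a real HQ Trivia figure; the point is just how the math behaves.

```python
# Toy expected-value model for a split-the-pot trivia game.
# All parameters are illustrative guesses, not actual HQ Trivia data.
p_correct = 0.85      # assumed chance of getting any one question right
questions = 12        # consecutive correct answers needed to win
players = 250_000     # concurrent players
pot = 2_500.0         # assumed cash pot in dollars

p_win = p_correct ** questions          # chance one player survives all 12
expected_winners = players * p_win      # players expected to split the pot
payout = pot / max(expected_winners, 1.0)
ev = p_win * payout                     # expected winnings per game played

print(f"P(win) = {p_win:.3f}, about {expected_winners:.0f} winners, "
      f"${payout:.2f} each, EV = ${ev:.2f} per game")
```

Notice the punch line: whenever the expected number of winners exceeds one, the p_win terms cancel and the expected value collapses to pot / players, a penny per game with these numbers, no matter how skilled the average player is. The pot, not the odds, sets the ceiling.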

The other problem, linked to the element of skill, is cheating. As noted in the article linked above, and in this piece about the spike in Google searches for answers during each of the twice-a-day games, cheating is always a concern in games, especially as the monetary rewards increase. I played the first game when HQ Trivia had a $7,500 cash pot, and the winners each pocketed something like $575 and change. Not a bad payout for something like 10 minutes of fun.

Online poker, daily fantasy sports, all are in constant battle with bots and computer-generated entries. Even sports books at casinos have to wage battle with sharks who try to get around betting caps by sending in all sorts of confederates to put down wagers on their behalf.

I suspect both of these issues will be dampeners on the game's prospects, but more so the issue of skill. I already find myself passing on games when I'm not with others who also play or who I can rope into playing with me. That may be the game's real value, inspiring communal bonding twice a day among people in the same room.

People like to quip that pornography is the tip of the spear when it comes to driving adoption of new technologies, but I'm partial to trivia. It is so elemental and pure a game, with such comically self-explanatory rules, that it stands as one of the primal forms or genres of gaming, just as HQ Trivia host Scott Rogowsky is some paragon of a game-show host, mixing just the right balance of cheesiness and snarkiness and effusiveness needed to convince all the players that any additional irony would be unseemly.

10. Raising a teenage daughter

Speaking of Elizabeth Weil, who wrote the Shiffrin profile for Outside, here's another of her pieces, a profile of her daughter Hannah. The twist is that the piece includes annotations by Hannah after the fact.

It is a delight. The form is perfect for revealing the dimensions of their relationship, and that of mothers and teenage daughters everywhere. In the interplay of their words, we sense truer contours of their love, shaped, as they are, by two sets of hands.

[Note: Esquire has long published annotated profiles, which you can Google for, but they are now all locked behind a paywall.]

This format makes me question how many more profiles would benefit from allowing the subject of a piece to annotate after the fact. It reveals so much about the limitations of understanding between two people, the unwitting and witting lies at the heart of journalism, and what Janet Malcolm meant, when she wrote, in the classic opening paragraph of her book The Journalist and the Murderer, "Every journalist who is not too stupid or too full of himself to notice what is going on knows that what he does is morally indefensible."

Helpful tip for data series labels in Excel

I've never received as many emails from readers as I did for my most recent post on line graphs, analytics, Amazon, Excel, and Tufte, among other things. It turns out that countless consultants, bankers, and analysts still wake up in a cold sweat at the recollection of spending hours formatting graphs in Excel. I opened every last Excel spreadsheet you sent me; apparently we all have one or another lying around as a souvenir of our shared trauma.

[It's fun to hear directly from my readers; I encourage more of it. When so much of online discourse is random strangers performing drive-by violence, or chucking Molotov cocktails at you, it's somewhat old-fashioned and comforting to receive a friendly email. It reminds me of the early days of the Internet, a more idyllic time, when Utopian dreams of the power of this technology hadn't yet been crushed by the darkness in the souls of mankind.]

Many readers just shared their stories of Excel frustration, but a few offered helpful tips. One class of these was just a recommendation to try alternative programs for charting, like Tableau, ggplot, or D3. I have not had time to play with any of these, though I remember working with Tableau briefly at a company that had purchased a license. I'm not familiar enough with any of these to offer any meaningful comparison to Excel. 

I still hope that someone who works on Excel will just upgrade their default charting options because of the sheer number of Microsoft Office users. Most companies don't license Tableau, and many may not have the inclination or time to learn to use ggplot and D3, though the latter, in particular, seems capable of generating some beautiful visualizations. In contrast, I can't recall any company I worked at that didn't offer me a Microsoft Office license if I wanted one.

A few readers mentioned they wrote macros to automate some or most of the formatting tricks I set out in my post. Maybe at some point this will become an open source tool, which I'd love. If I spot one I'll share it here. Excel users are a tight knit community.

A couple readers offered a similar solution to my problem with generating dynamic data series labels, but my favorite, the most comprehensive option, came from a reader named Jeffrey, a finance associate.

It's a clever little hack. Here's how it works:

  1. Fill in your data table in Excel as normal, with data series in rows, time periods (or whatever the x-axis will be) in columns.
  2. Add one duplicate row for each data series below those data series in your table, and name each of those the same as the data series above, in order. For example, if your data series are U.S., Japan, and Australia, you now add three more rows, named U.S., Japan, and Australia.
  3. For these dummy data series, leave all the data cells blank except the last one, which in my example was the year 2014. For those three cells, just use a formula so that the cell equals the last data point of the corresponding actual data series above. So, for the dummy Australia series, my formula for 2014 just points to the value in the 2014 cell of the actual Australia data series.
  4. Now create the chart for your data table, including both the actual and dummy data series. In the resulting line graph, you now have six data series instead of three, but the three dummy data series just have a single data point, the last one, which overlaps the corresponding last point in the actual data series.
  5. Do all the visual hacks I mentioned in my previous post. When it comes to your three dummy data series, instead of displaying the actual data label, select "Series Name" in the Format Data Labels menu.

That's it! Now you have data labels that will move with any change in the last data point of your three data series.
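If you'd rather not build the dummy rows by hand, the data-shaping half of the trick is easy to script. Here's a minimal Python sketch of that step (my own illustration, not something Jeffrey sent and not an actual Excel macro): given your real series, it produces dummy series that are blank everywhere except the final point.

```python
def make_label_series(table):
    """Given {series_name: [values, ...]}, return dummy series that are
    blank (None) at every point except the last, which mirrors the real
    series. Charted with "Series Name" data labels, these pin a label
    to each line's endpoint."""
    dummies = {}
    for name, values in table.items():
        dummy = [None] * len(values)
        dummy[-1] = values[-1]  # mirror only the final data point
        dummies[name] = dummy
    return dummies

data = {
    "U.S.":      [3.1, 3.4, 3.9, 4.2],
    "Japan":     [2.0, 1.9, 2.1, 2.2],
    "Australia": [1.5, 1.8, 1.7, 2.0],
}
print(make_label_series(data)["Australia"])  # [None, None, None, 2.0]
```

You would still paste the resulting rows back into the sheet (blank cells where the values are None), but the mirroring step, the part that's easy to get wrong when a series is added or extended, is handled for you.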

It's a hack, so it's not perfect. You still have to update your table each time you add to it, moving the dummy pointers one cell to the right. And selecting the data series when it exactly overlaps the actual data series can be tricky. But the thing is, all my old ways of setting up the charts were hacks, too, and this one saves the work of moving data series labels whenever the scale of the graph is changed.

It would be easier if Excel just offered a way to display both the data labels and the series name for every data series. Maybe someday? The communication arc of the universe is long, and I like to think it bends toward greater efficiency.

Thanks Jeffrey! Also, thanks to my old coworker Dave who, more than any of my other readers, sends me precise copy-edits to my posts. I wish I were a better editor of my own work, but familiarity breeds myopia.