How to Blow Up a Timeline

NOTE: I’d been working on this piece on and off for a few weeks while trying to move to NYC and settle into my new apartment, and just as I was about to publish it, Elon rate-limited Twitter and so, sensing a moment of weakness, Meta pulled up its launch date for Threads to yesterday. This piece doesn’t cover Threads directly, nor does it talk about the rate-limiting fiasco. It’s focused on why I think Twitter got so much worse over the past year. I thought about holding off and reworking it entirely to incorporate all that happened this week, but in the end I decided that it was cleaner to publish this one as is. If Twitter hadn’t botched so much over this past year, Threads wouldn’t matter. Still, like past pieces I’ve written on topics related to Twitter, you can apply a lot of the ideas in this piece to analyzing Threads’ prospects. And I’ll push a follow-up piece with my specific reactions to and predictions for Threads soon-ish. Follow me on Substack to get a note when that drops.

“I shall be writing about how cities work in real life, because this is the only way to learn what principles of planning and what practices in rebuilding can promote social and economic vitality in cities, and what practices and principles will deaden these attributes.” — Jane Jacobs, The Death and Life of Great American Cities


Today, I come to bury Twitter, not to appraise him.

Oh, who am I kidding, I’m mostly here to appraise how it blew up.

For years, I thought Twitter would persist like a cockroach because:

  • At its core, it’s a niche experience that alienates most but strongly appeals to a few
  • Those few who love Twitter comprise an influential intellectual and cultural cohort, and at internet scale, even niches can be substantial in size

I’ve written before in Status as a Service or The Network’s the Thing about how Twitter hit upon some narrow product-market fit despite itself. It has never seemed to understand why it worked for some people or what it wanted to be, and how those two were related, if at all. But in a twist of fate that is often more of a factor in finding product-market fit than most like to admit, Twitter's indecisiveness protected it from itself. Social alchemy at some scale can be a mysterious thing. When you’re uncertain which knot is securing your body to the face of a mountain, it’s best not to start undoing any of them willy-nilly. Especially if, as I think was the case for Twitter, the knots were tied by someone else (in this case, the users of Twitter themselves).

But Elon Musk is not one to trust someone else’s knots. He’s made his fortune by disregarding other people’s work and rethinking things from first principles. To his credit, he’s worked miracles in categories most entrepreneurs would never dream of tackling, from electric cars to rockets to satellite internet service. There may be only a handful of people who could’ve pulled off Tesla and SpaceX, and maybe only one who could’ve done both. When the game is man versus nature, he’s an obvious choice. When it comes to man versus human nature, on the other hand…

This past year, for the first time, I could see the end of the road for Twitter. Not in an abstract way; I felt its decline. Don’t misunderstand me; Twitter will persist in a deteriorated state, perhaps indefinitely. However, it's already a pale shadow of what it was at its peak. The cool kids are no longer sitting over in bottle service knocking out banger tweets. Instead, the timeline is filled with more and more strangers the bouncer let in to shill their tweetstorms, many of them Twitter Verified accounts who paid the grand fee of $8 a month for the privilege. In the past year, so many of the random meetings I’ve had with one-time Twitter junkies have begun with a long sigh and then a question that is more lamentation than anything else: “How did Twitter get so bad?”

It’s sad, but it’s also a fascinating case study. The internet is still so young that it’s momentous to see a social network of some scale and lifespan suddenly lose its vitality. The regime change to Elon and his brain trust and the drastic changes they’ve made constitute a natural experiment we don’t see often. Usually, social networks are killed off by something exogenous, typically another, newer social network. Twitter went out and bought Chekhov’s gun in the first act and used it to shoot itself in the foot in the third act. Zuckerberg can now extend his quip about Twitter being a clown car that fell into a gold mine.

In The Rise and Decline of Nations, Mancur Olson builds on his previous book The Logic of Collective Action: Public Goods and the Theory of Groups to discuss how and why groups form. What are the incentives that guide their behavior?

One of his key insights is what I think of as his theory of group inertia. Groups are hard to form in the first place. Think of how many random Discord communities you were invited into over the past few years and how many are still active. “Organization for collective action takes a good deal of time to emerge,” observes Olson.

However, inertia works both before and after product-market fit. Once a group has formed, it tends to persist even after the collective good it came together to provide is no longer needed.

The same is true of social networks. As anyone who has tried to start one knows, it’s not easy to jump-start a social graph. But if you manage by some miracle to conjure one from the void, and if you provide that group with a reasonable set of ways for everyone to hang out, network effects can keep the party going long after last call. The group inertia that is your enemy before you’ve coalesced a community is your friend after it’s formed. Anyone who’s ever hosted a party and provided booze knows it’s often hard to get the last stragglers to leave. We are a social species.

No social network epitomizes this more than Twitter. It’s not that Twitter was a group of users that assembled for the explicit goal of producing some collective good. Its rise was too emergent to fit into any such directed narrative. But the early years of inertial drag (for years it was literally inert, inertia and inert sharing the same etymological root) followed by later years of inertial momentum fit the broad arc of Olson’s group theory.

I revisit Olson and Twitter’s history because the specifics of how Twitter found product-market fit are critical to understanding its current dissolution. Social networks are path dependent. This is especially true in the West where social networks are largely ad-subsidized and where they’re almost all built around a singular dominant architecture of an infinite scrolling feed optimized for serving ads on a mobile phone. The path each network took to product-market fit selected for a specific user base. As with any community, but especially ones forced to cluster in close proximity in a singular feed, as is common in the West, the people making up the community go a long way towards determining its tenor and values. Its vibes. The composition of its users then determines how conducive that network is to what types of advertising and at what scale. Finally, closing the circle of life, those ad dynamics then influence the network’s middle age evolution as a service. Money may not begin the conversation, that starts with the users, but money gets the final word.

Of all the social networks that achieved some level of scale in this first era of social media, perhaps no other was tried and abandoned by as many users as Twitter. Except for the extremely online community in which I’m deeply embedded (and that I suspect many of my readers are a part of), most normal, well-adjusted humans churned out of Twitter long ago. (One of the trickiest things about projecting off of early growth rates for startups in tech is that even fads can generate massive absolute numbers early on if marketed broadly to a global audience. Without looking at early retention and churn rates, you may extrapolate a much larger terminal user base than will actually stick around. Think about eBay or Groupon, for example. This same caution needs to be applied to Threads; one of the central questions is whether Twitter already reached all the people who enjoy microblogging or whether Meta has some magic formula that will allow it to scale to a much larger population.) That’s not ideal from a business perspective, but the upside is that those who made it through that great filter selected hard into Twitter’s unique experience. Most sane people don’t enjoy seeing a bunch of random bursts of text from strangers one after the other, but those who do really, really love it. And, despite Twitter’s notoriously slow rate of shipping new features over the years, it eventually offered just enough knobs and dials for its users to wrestle their timelines into a fever dream of cacophonous public discourse that hasn’t been replicated elsewhere. More than any other social network, Twitter was one its users seized control of and crafted into something workable for themselves. To its heaviest and most loyal users, it felt at times like a co-op. Recent events remind us it isn’t.

Out of a petri dish that was lifeless for years emerged a culture of creatives, trolls, humorists, politicians, and other public intellectuals screaming at each other in 140 and later 280-character bursts, with even more users quietly gawking from the sideline. This so-called new town square was a 24/7 nightclub for real-world introverts but textual extroverts. My tribe.

This was as entertaining a spectacle as it was shaky a business. Twitter ads have always been hilariously random, and it’s to the credit of the desirable demographics of many of its users that advertisers continued to stick around to have their brands paraded between sometimes questionable, often horrifyingly offensive tweets. But its poor economics as a business shielded it from direct competition. Even if you could recreate its nerdy gladiatorial vibe, why would you? For years it seemed Twitter might persist in this delicate equilibrium, a Galapagos tortoise sunning on an island all to itself, surrounded by ocean as far as the eye could see.

Back to Olson: “Selective incentives make indefinite survival feasible. Thus those organizations for collective action, at least for large groups, that can emerge often take a long time to emerge, but once established they usually survive until there is a social upheaval or some other form of violence or instability.”

Well, “violence and instability” finally came to Twitter in the form of Elon Musk’s ownership. In almost every way, his stewardship has been the polar opposite of the previous regime’s. Politically, to be sure. But more notably, whereas Twitter was previously known as a company that rarely shipped any substantial changes, new Twitter seemed for months to ship things before having thought them through or even QA’d them. Random bugs seemed to pop up in the app all the time, and changes were pushed out and then reversed within the day. Many a day this past year, Twitter has been the main character in the very types of drama it used to serve as the forum for discussing.

In classic Twitter fashion, the irony is that it now seems to be in decline not from doing too little but from doing too much. It turns out the way to overcome Olson’s group inertia is to run in swinging a machete, cutting wires, firing people, unplugging computers, flipping switches, tweaking parameters, anything to upset an ecosystem hanging on by a delicate balance. It was, if nothing else, a fascinating natural experiment in how to nudge a network out of longstanding homeostasis.

Given that Musk ended up having to overpay for Twitter by upwards of 4X, thanks to Delaware Chancery Court, it’s not at all surprising he and his new brain trust might choose to take an active hand in trying to salvage as much of his purchase price as possible.

But this heavy-handed top-down management approach runs counter to how Twitter achieved its stable equilibrium. In this way, Musk’s reign at Twitter resembles one of James Scott’s authoritarian high modernist failures. Twitter may have seemed like an underachieving mess before, but its structure, built up piece by piece by users following, unfollowing, liking, muting, and blocking over years and years in a continuous dialogue with the feed algorithm? That structure had a deceptive but delicate stability. Twitter and its users had assembled a complex but functional community, Jane Jacobs style. Every piece of duct tape and every shim put there by a user had a purpose. It may have been Frankensteinian in its construction, but it was our little monster.

This democratic evolution has long been part of Twitter’s history. Many of Twitter’s primary innovations, like hashtags, and much of its terminology, like the word tweet itself, seemed to come bottom-up from the community of users and developers. This may have capped its scalability; a lot of its syntax has always seemed obtuse (who can forget how you had to put a period before a username that opened a tweet so that the network wouldn’t treat it as a reply and hide it from the timeline?). But, conversely, the service seemed to mold itself around the users who stuck with its peculiar vernacular. After all, they were often the ones who came up with it.

Olson again:

Stable societies with unchanged boundaries tend to accumulate more collusions and organizations for collective action over time.


What established the boundaries of Twitter? Two things primarily. The topology of its graph, and the timeline algorithm. The two are so entwined you could consider them to be a single item. The algorithm determines how the nodes of that graph interact.

Machine learning algorithms have been crucial to scaling our largest social media feeds. Those feeds are among the most enormous social institutions in human history, but we don't often think of them that way. It's often remarked upon that Facebook is larger than any country or government, but it should be remarked upon more; I think the idea is so shocking and horrifying to so many people that they prefer to block it out of their minds. In a literal sense, Twitter has always just been whose tweets show up in your timeline and in what order.

In the modern world, machine learning algorithms that mediate who interacts with whom and how in social media feeds are, in essence, social institutions. When you change those algorithms you might as well be reconfiguring a city around a user while they sleep. And so, if you were to take control of such a community, with years of information accumulated inside its black box of an algorithm, the one thing you might recommend is not punching a hole in the side of that black box and inserting a grenade.

So of course that seems to have been what the new management team did. By pushing everyone towards paid subscriptions and kneecapping distribution for accounts that don’t pay, and by switching to a TikTok-style algorithm, new Twitter has redrawn the once stable “borders” of Twitter’s communities.

This new pay-to-play scheme may not have altered the lattice of the Twitter graph, but it has changed how the graph is interpreted. There’s little difference. My For You feed shows me less from people I follow, so my effective Twitter graph is diverging further and further from my literal graph. Each of us sits at the center of our Twitter graph like a spider in its web built out of follows and likes, with some empty space made of blocks and mutes. We can sense when the algorithm changes. Something changed. The web feels deadened.
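
If you wanted to put a rough number on that deadening, one crude measure is the share of a feed session that still comes from accounts you actually follow. A minimal sketch in Python, with invented field names, since Twitter doesn’t expose its ranking internals:

```python
from dataclasses import dataclass

@dataclass
class FeedItem:
    author_id: str
    reason: str  # hypothetical label, e.g. "followed", "liked_by_follow", "recommended"

def follow_share(feed: list[FeedItem], following: set[str]) -> float:
    """Fraction of a feed session authored by accounts the user follows.

    A falling value over time is one way to quantify the drift between the
    literal graph (who you follow) and the effective graph (who the
    algorithm actually shows you).
    """
    if not feed:
        return 0.0
    from_follows = sum(1 for item in feed if item.author_id in following)
    return from_follows / len(feed)

# Example: half of an eight-item session came from followed accounts.
session = [FeedItem("a", "followed"), FeedItem("b", "recommended")] * 4
print(follow_share(session, following={"a"}))  # 0.5
```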

I’ve never cared much about the presence or absence of a blue check by a user’s name, but I do notice when tweets from people I follow make up a smaller and smaller percentage of my feed. It’s as if longtime neighbors moved off my block overnight, replaced by strangers who all came knocking on my front door carrying not a casserole but a tweetstorm about how to tune my ChatGPT and MidJourney prompts.

I tried switching to the Following from the For You feed, but it seems the Following feed is strictly reverse chronological. This is a serious regression to the early days of Twitter when you had to check your feed frequently to hope to catch a good tweet from any single person you followed. We tried this before; it was terrible then, it’s terrible now.

This weakening of the follow works in the other direction, too. Many people who follow me tell me they don’t see as many of my tweets as they used to. All those followers are accumulated social capital that seems to have been rendered nearly worthless by algorithmic deflation.

With every social network, one of the most important questions is how much information the structure of the graph itself contains. Because Twitter allows one-way following, its graph has always skewed towards expressing at least something about the interests of its users. Unlike on Facebook, I didn’t blindly follow people I knew on Twitter. The Twitter graph, more than most, is an interest graph assembled from a bunch of social graphs standing on each other’s shoulders wearing an interest graph costume. Not perfect, but not nothing.

The new Twitter algorithm tossed that out.

If you’re going to devalue the Twitter graph’s core primitive, the act of following someone, you’d better replace it with something great. The name of the new algorithmic feed hints at what they tried: For You. It’s nomenclature borrowed from TikTok, the entertainment sensation of the past few years.

I’ve written tens of thousands of words on TikTok in recent years (my three essays on TikTok are here, here, and here), and I won’t rehash it all here. What prompted my fascination with the app was that it attacked the Western social media incumbents at an oblique angle. In TikTok and the Sorting Hat, I wrote:

The idea of using a social graph to build out an interest-based network has always been a sort of approximation, a hack. You follow some people in an app, and it serves you some subset of the content from those people under the assumption that you’ll find much of what they post of interest to you. It worked in college for Facebook because a bunch of hormonal college students are really interested in each other. It worked in Twitter, eventually, though it took a while. Twitter's unidirectional follow graph allowed people to pick and choose who to follow with more flexibility than Facebook's initial bi-directional friend model, but Twitter didn't provide enough feedback mechanisms early on to help train its users on what to tweet. The early days were filled with a lot of status updates of the variety people cite when criticizing social media: "nobody cares what you ate for lunch."

But what if there was a way to build an interest graph for you without you having to follow anyone? What if you could skip the long and painstaking intermediate step of assembling a social graph and just jump directly to the interest graph? And what if that could be done really quickly and cheaply at scale, across millions of users? And what if the algorithm that pulled this off could also adjust to your evolving tastes in near real-time, without you having to actively tune it?

The problem with approximating an interest graph with a social graph is that social graphs have negative network effects that kick in at scale. Take a social network like Twitter: the one-way follow graph structure is well-suited to interest graph construction, but the problem is that you’re rarely interested in everything from any single person you follow. You may enjoy Gruber’s thoughts on Apple but not his Yankees tweets. Or my tweets on tech but not on film. And so on. You can try to use Twitter Lists, or mute or block certain people or topics, but it’s all a big hassle that few have the energy or will to tackle.


This is more commonly accepted now, but back in 2020 when I wrote this piece, TikTok’s success was still viewed with a lot of skepticism and puzzlement. Since then, we’ve seen Instagram and Twitter both try emulating TikTok’s strategy. Both Instagram and Twitter now serve much less content from people you follow and more posts selected by machine learning algorithms trying to guess your interests.

Instagram has been more successful in part because it has formats like Stories that keep content from one’s follows prominent in the interface. There’s social capital of value embodied in the follow graph, and arguably it’s easier for Instagram to preserve much of that while copying TikTok than it is for TikTok to build a social graph like Instagram.

But that’s a topic for another day. Twitter is the app on trial today. And of all Twitter’s recent missteps, I think this was the most serious unforced error. For a variety of design reasons, Twitter will likely never be as accurate an interest graph as, say, TikTok is an entertainment network.

As I’ve written about before in Seeing Like An Algorithm, Twitter’s interface doesn’t capture sentiment, both positive and negative, as cleanly as TikTok’s does.

Let’s start with positive sentiment. On this front, Twitter is…fine? It’s not for lack of usage. I’ve used Twitter a ton over more than a decade now, I’ve followed and unfollowed thousands of accounts, liked even more tweets, and posted plenty of tweets and links. I suspect one issue is that many tweets don’t contain enough context to be accurately classified automatically. How would you classify a tweet by Dril?

But perhaps even more damning for Twitter is its inability to see negative sentiment. Allowing users to pay for better tweet distribution leaves the network vulnerable to adverse selection. That’s why the ability to capture negative sentiment, especially passive negative sentiment, is so important to preserving a floor of quality for the Timeline.

Unfortunately, capturing that passive disapproval is something Twitter has never done well. In Seeing Like an Algorithm, I wrote about how critical it was for a service’s design to help machine learning algorithms “see” the necessary feedback from users, both positive and negative. That essay’s title was inspired by Scott’s Seeing Like a State which described how high modernist governments depended on systems of imposed legibility for a particular authoritarian style of governance.

Modern social networks lean heavily on machine learning algorithms to achieve sufficient signal-to-noise in feeds. To manually manage complex adaptive systems at the scale of modern social media networks would be impossible otherwise. One of the critiques of authoritarian technocracies is that they quickly lose touch with the people they rule over. It's no surprise that such governments have also looked at machine learning algorithms paired with the surveillance breadth of the internet as a potential silver bullet to allow them to scale their governance. The two entities that most epitomize each of these both come out of China: Bytedance and the CCP. The latter, in particular, has long been obsessed with cybernetics, despite having followed it down a disastrous policy rabbit hole before.

But these cybernetic systems, in the Norbert Wiener sense, only work well if their algorithms see enough user sentiment and see it accurately. Just as Scott felt high modernism failed again and again because those systems overly simplified complex realities, Twitter’s algorithm operates with serious blind spots. Since every output is an input in a cybernetic system, failure to capture all necessary inputs leads to noise in the timeline.

Twitter doesn’t see a lot of passive negative sentiment; it’s a structural blind spot. In a continuous scrolling interface with multiple tweets on screen at any one time, it’s hard to tell disapproval from apathy or even mild approval because the user will just scroll past a tweet for any number of reasons.
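
To make the blind spot concrete, here’s a minimal sketch of the kind of heuristic a feed might use to recover passive negative signal from scroll behavior. The thresholds and labels are my own assumptions, not anything Twitter or TikTok has published; the point is only that a quick skip is a clean “no” when a single item fills the screen, and an ambiguous shrug when several tweets share it:

```python
def infer_passive_sentiment(dwell_ms: float, viewport_items: int) -> str:
    """Crude guess at sentiment from how long an item stayed on screen.

    dwell_ms: how long the item was visible in the viewport.
    viewport_items: how many items shared the screen at that moment.
    """
    FAST_SKIP_MS = 800   # hypothetical threshold for a quick swipe/scroll past
    LINGER_MS = 5000     # hypothetical threshold for a genuine read or watch

    if viewport_items == 1:
        # One item per screen (TikTok-style): a short dwell is a fairly clean "no".
        if dwell_ms < FAST_SKIP_MS:
            return "negative"
        if dwell_ms > LINGER_MS:
            return "positive"
        return "neutral"
    # Several tweets on screen at once: a scroll-past could mean dislike,
    # apathy, or a hunt for something else, so the signal degrades.
    return "unknown" if dwell_ms < LINGER_MS else "weak_positive"

print(infer_passive_sentiment(400, viewport_items=1))  # negative
print(infer_passive_sentiment(400, viewport_items=4))  # unknown
```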

This leads to a For You page that feels like it’s missing my friends and awkwardly misinterpreting my interests. Would you like yet another tweetstorm on AI and how it can change your life? No? Well, too bad, have another. And another. For someone who claims to be worried about the dangers of AI, Elon’s new platform sure seems to be pushing us to play with it.

In the rush to copy TikTok, many Western social networks have misread how easy it is to apply lessons of a very particular short video experience to social feeds built around other formats. If you’re Instagram Reels and your format and interface are a near carbon copy, then sure, applying the lessons of my three TikTok essays is straightforward. But if you’re Twitter, a continuous scrolling feed of short textual content, you’re dealing with a different beast entirely.

Even TikTok sometimes seems to misunderstand that its strength is its purity of function as an interest/entertainment graph. Its attempts to graft a social graph onto that have struggled because social networking is a different problem space entirely. Pushing me to follow my friends on TikTok muddies what is otherwise a very clear product proposition. Social networking is a complex global maximum to solve for. In contrast, entertaining millions of people with an individual channel personalized to each of them is an agglomeration of millions of local maximums. TikTok’s interface paired with ByteDance’s machine learning algorithms is perfect for solving the latter but much less well-suited to social networking.

Here’s another way to think about it. The difference between Twitter and an algorithmic entertainment network like TikTok is that you could fairly quickly reconstitute TikTok even without its current graph because its graph is a much less critical input to its algorithm than the user reactions to any random sequence of videos they’re served.

If Twitter had to start over without its graph, on the other hand, it would be dead (which speaks to why Twitter clones like BlueSky, which are just Twitter minus the graph and with the same clunky onboarding process, seem destined for failure). The new For You feed gives us a partial taste of what that might look like, and it's not pretty.

I ran a report recently on all the accounts I follow on Twitter. I hadn’t realized how many of them had been dormant for months now. Many were people whose tweets used to draw me to the timeline regularly. I hesitate to unfollow them; perhaps they’ll return? But I’m fooling myself. They won’t. Inertia again. A user at rest tends to stay at rest, and a user that flees tends to be gone for good.

Even worse, many accounts I follow look to have continued to tweet regularly over the past year. I just don’t see their tweets anymore. The changes to the Twitter algorithm bulldozed over a decade’s worth of Chesterton’s fences in a few months.


The other prominent mistake of the Elon era is more commonly cited, and I tend to think it’s overrated, but it certainly didn’t help. It’s the type of mistake only a prominent and polarizing figure running a social network could stumble into: his own participation on the platform he owns. The temptation is understandable. If you overpaid for a social network by tens of billions of dollars, why shouldn’t you be able to use it as you please? Why not boost your own tweets and use it as a personal megaphone? Why buy a McLaren if you can’t take it for a spin, even at the risk of totaling it? He declared that one of his reasons for purchasing Twitter was to restore it to being a free speech platform, so why not speak his mind?

More than any tech CEO, he’s become a purity test for one’s technological optimism. His acolytes will follow him, perhaps even literally, to Mars, while his critics consider him the epitome of amoral Silicon Valley hubris. That he is discussed in such simplistic, binary terms is ironic; it exemplifies the nature of discourse on Twitter. It’s no surprise that many Twitter alternatives market themselves simply as Twitter minus Elon (though I suspect most people just want, like me, a Twitter with the same graph but minus the new For You algorithm).

But there’s a Heisenberg Uncertainty Principle of social in play here. Every tweet of his alters the fabric of Twitter so drastically that it’s almost impossible for some users to coexist on Twitter alongside him. He singlehandedly brought some users back to Twitter and sent others fleeing for the exits. There are no neutral platforms, as many have noted, but Musk’s gravitational field has warped Twitter’s entire conversational orbit and brand trajectory. Leaving Twitter, or simply refusing to pay for verification, is now treated as an act of resistance. It’s debatable whether that’s fair, but reality doesn’t give a damn.

Some users might have stuck around had Musk used his Twitter account solely for business pronouncements, but that wouldn’t be any fun now would it? He’s always enjoyed trolling his most vocal critics on Twitter, but it hits different when he’s the owner of said platform used by millions of cultural elites the world over.

Earlier this year, it appeared that Musk had comped Twitter Verified blue checkmarks to prominent public figures like Stephen King, some of whom had repeatedly criticized him. This led to the absurd and prolonged spectacle of dozens of famous people asserting over and over that they had absolutely not paid the meager sum of $8 a month for the scarlet, err, baby blue checkmark that now adorned their profiles, not to be confused with the blue checkmark that formerly appeared in the same spot that they hadn’t paid for. This made the blue checkmark a sort of Veblen good; more people seemed to want one when you couldn’t buy one, when it was literally priceless. (The price is an odd one. $8 a month is not expensive enough to be a wealth signal, but it’s enough to feel like an insult to users who feel like they subsidized the popularity of Twitter over the years with their pro bono wit. I believe it was Groucho Marx who once said something to the effect of not wanting to belong to any club that would accept him as a member for the tidy sum of $8 a month.)

This culminated in one weekend when Musk engaged in a protracted back and forth with Twitter celebrity shitposter Dril, pinning a Twitter Blue badge on his profile over and over only to have Dril remove it by changing his profile description. This went on for hours, and some of us followed along, like kids on the playground watching a schoolboy chase a girl holding a frog. This was bad juju and everyone knew it.


I’ll miss old Twitter. Even now, in its diminished state, there isn’t any real substitute for the experience of Twitter at its peak. Compared to its larger peers in the social media space, Twitter always reminded me of Philip Seymour Hoffman’s late-night speech as Lester Bangs in Almost Famous, delivered over the phone to the Cameron Crowe stand-in William Miller, warning him about having gotten seduced by Stillwater, the band Miller was profiling for Rolling Stone:

Oh man, you made friends with them. See, friendship is the booze they feed you. They want you to get drunk on feeling like you belong. Because they make you feel cool, and hey, I met you. You are not cool. We are uncool. Women will always be a problem for guys like us, most of the great art in the world is about that very problem. Good-looking people they got no spine, their art never lasts. They get the girls but we’re smarter. Great art is about guilt and longing. Love disguised as sex and sex disguised as love. Let’s face it, you got a big head start. I’m always home, I’m uncool.

The only true currency in this bankrupt world is what you share with someone else when you’re uncool. My advice to you: I know you think these guys are your friends. You want to be a true friend to them? Be honest and unmerciful.

In the world of Almost Famous, Instagram would be the social network for the Stillwaters, the Russell Hammonds, the Penny Lanes. Beautiful people, cool people. Twitter was for the uncool, the geeks, the wonks, the wits, the misfits. Twitter was honest and unmerciful, sometimes cruelly so, but at its best it felt like a true friend.

It was striking how many of Elon’s early tweets about Twitter’s issues seemed to pin Twitter’s underperformance on engineering problems. Response times, things of that nature. But Twitter’s appeal was never a pure feat of engineering, nor were its problems solely the fault of engineering malpractice. They were human in nature. Twitter isn’t, as many have noted, rocket science, making it a particularly tricky domain for a CEO of, among other things, a rocket company. Ironically, Norbert Wiener, often credited as the father of cybernetics, a field which has lots of relevance to analyzing social networks, worked on anti-aircraft weapons during World War II. So if you really want to nitpick, your vast conspiracy board might somehow connect running a social network to rocket science. You can test unmanned rockets, and if they blow up on take-off or re-entry, you’ve learned something, no harm done. But running the same test on a social media service is like testing rockets with your users as passengers. Crash a rocket and those users aren’t going to be around for the next test flight.

It’s not clear there will ever be a Twitter replacement. If there is one, it won’t be the same. It may look the same, but it will be something else. The internet is different now, and the conditions that allowed Twitter to emerge in the first place no longer exist. The Twitter diaspora has scattered to all sorts of subscale clones or alternatives, with no signs of agreeing on where to settle. As noted social analyst Taylor Swift said, “We are never ever getting back together.”

For this reason, Twitter won’t ever fully vanish unless management pulls the plug. None of the contenders to replace Twitter has come close to replicating its vibe of professional and amateur intellectuals and jesters engaged in verbal jousting in a public global tavern, even as most have lifted its interface almost verbatim. Social networks aren’t just the interface, or the algorithm, they’re also about the people in them. When I wrote “The Network’s the Thing” I meant it; the graph is inextricable from the identity of a social media service. Change the inputs of such a system and you change the system itself.

Thus Twitter will drift along, some portion of its remaining users hanging out of misguided hope, others bending the knee to the whims of the new algorithm.

But peak Twitter? That’s an artifact of history now. That golden era of Twitter will always be this collective hallucination we look back on with increasing nostalgia, like alumni of some cult. With the benefit of time, we’ll appreciate how unique it was while forgetting its most toxic dynamics. Twitter was the closest we’ve come to bottling oral culture in written form.

Media theorist Harold Innis distinguished between time-biased and space-biased media:

The concepts of time and space reflect the significance of media to civilization. Media that emphasize time are those durable in character such as parchment, clay and stone. The heavy materials are suited to the development of architecture and sculpture. Media that emphasize space are apt to be less durable and light in character such as papyrus and paper. The latter are suited to wide areas in administration and trade. The conquest of Egypt by Rome gave access to supplies of papyrus, which became the basis of a large administrative empire. Materials that emphasize time favour decentralization and hierarchical types of institutions, while those that emphasize space favour centralization and systems of government less hierarchical in character.

Twitter always intrigued me because it has elements of both. It always felt like it compressed space (the timeline felt like a single lunch room hosting a series of conversations we were all participating in or eavesdropping on) and time (every tweet seemed to be uttered to us in the moment, and so much of it was about things occurring in the world at that moment; one of the challenges of machine learning applied to both news and tweets is how short a half-life they have versus the more evergreen nature of TikToks, YouTube videos, movies, and music). A lot of Twitter was textual, but the character limit and the ease of replying lent much of it an oral texture. It felt like a live, singular conversation.

When reviewing a draft of this piece, my friend Tianyu wrote the following comment, which I’ll just cite verbatim, it’s so good:

Twitter feels like a perfect example of what James W. Carey calls the "ritual view of communication" (see Communication as Culture). Its virality doesn't come from transmission alone, but rather the quasi-religiosity of it; scrolling Twitter while sitting on the toilet is like attending a mass every Sunday morning. Like religions, Twitter formulates participatory rituals that come with a public culture of commonality and communitarianism. These rituals are then taken for granted—they become how people on the internet consume information and interact with one another by default.

Religious rituals rise and fall. Today all major religions have, at some point, become a global mimesis through missionary work, political power, and imperial expansions. Musk's regime is basically saying, 'oh well, Christianity isn't expanding fast enough. What we need to do is to rewrite the Bible and abolish the clergy. That'll do the work.'

Carey often notes that communication shares the same roots as words like common, community, and communion. Combine the ritualistic nature of Twitter with its sense of compressing space and time and you understand why its experience was such a convincing illusion of a single global conversation. I suspect Carey would argue that the simulacrum of such a conversation effectively created and maintained a community.

Even the vocabulary used to describe Twitter reinforced its ritualistic nature. Who would be today’s main character, we’d ask, as if that day’s Twitter drama was a single narrative we were all reading. We’d go to see the list of Trending Topics for the day as if looking to see who was being tarred and feathered in the Twitter town square that day. There was always a mob to join if you wanted to cast a stone, or a meme template of the day to borrow.

Friends would forward me tweets, and at some point I stopped replying “Oh yeah I saw that one already” because we had all seen all of them already. Twitter was small, but more importantly, it felt small. Users often write about how Twitter felt worse once they exceeded some number of followers, and while there are obvious structural reasons why mass distribution can be unpleasant, one underrated drawback of a mass following was the loss of that sense of speaking to a group of people you mostly knew, if not personally, then through their tweets.

In a way, Twitter’s core problem is so different from that of something like TikTok, which, as I noted earlier, faces the challenge of creating a local maximum for each user. Twitter at its best felt like, as Tianyu described it to me, a global optimum. In reality, it’s never so binary. Even in a world of deep personalization, we want shared entertainment and grand myths, and vice versa. TikTok has its globally popular trends and Twitter its micro-communities. But a TikTok-like algorithm was always going to be particularly susceptible to ruining the cozy, communal feel of a scaled niche like Twitter.

I’ve met more friends in the internet era through Twitter than through any other social media app. Some of my closest friends today first entered my life by sliding into my DMs, and it saddens me to see the place emptying out.

All of this past year, as a slow but steady flow of Twitter’s more interesting users has made their way to the exits, unwilling to fight to be heard anymore, or just stopped tweeting, I’ve still opened the app daily out of habit, and to research for pieces like this. But the vibes are all off. I haven’t churned yet, but at the very least, I’ve asked the bartender to close out my tab.

If Twitter’s journey epitomizes the sentimental truism that the real treasure was the friends we made along the way, then the story of its demise will begin the moment we could no longer find those friends on that darkened timeline.


ACKNOWLEDGMENTS: Thanks to my friends Li and Tianyu for reading drafts of this piece at various stages and offering such rapid feedback. Considering the length of my pieces, that's no small thing. Their encouragement and useful notes and questions helped me refine and clarify my thinking. Also, if it wasn’t for Twitter, I probably wouldn’t know either of them today.

Inspiration for the title of this post comes from this which is based on this.

As my own Twitter usage fades, I plan to ramp back up writing on my website. If you're interested in keeping up, follow my Substack which I plan to spin back up to keep folks updated on my latest writing and where I’ll drop, among other things, a follow-up to this piece with my thoughts on Threads.

And You Will Know Us by the Company We Keep

It feels as if we're at the tail end of the first era of social media in the West. Looking back at the companies that have survived, certain application architectural choices are ubiquitous. By now, we're all familiar with the infinite vertical scrolling feed of content units, the likes, the follows, the comments, the profile photos and usernames, all those signature design tropes of this Palaeozoic era of social.

But just as there are reasons why these design patterns won out, we shouldn't let survivor bias blind us to their inherent tradeoffs. The next wave of social startups should learn from the weaknesses of some of these choices of our current social incumbents. (It's easy to point out where our incumbent social networks went wrong. Of course, to be where they are today, they had to do a hell of a lot right, too. A lot of mistakes are understandable in hindsight given that online networks of this scale had never been built before. Still, it's easier to learn from where they went wrong if we're to head towards greener pastures.) It's never smart to tackle powerful incumbents head on anyway. The converged surface area in the design of all these apps suggests oblique vectors of attack.

While many of these flaws have already been pointed out and discussed in various places, one critical design mistake keeps rearing its head in many of the social media Testflights sent my way. I've mentioned it in various passing conversations online before. I refer to this as the problem of graph design:

When designing an app that shapes its user experience off of a social graph, how do you ensure the user ends up with the optimal graph to get the most value out of your product/service?


The fundamental attribution error has always been one of my cautionary mental models. The social media version of this is over-attributing how people behave on a social app to their innate nature and under-attributing it to the social context the app places them in. Perhaps the single most important contextual influence in social media is one's social graph. Who they follow and who follows them.

Just as some sharks that stop moving die (sharks that rely on ram ventilation must swim in order to push water over their gills to breathe, though many shark species do not; maybe we should refer to social apps that rely on a graph to work as "graph ventilated"), most Western social media apps must build a graph or die. This is because most of the most well-known Western social apps chose to interlace two things: the social graph and the content feed. That is, most social media apps serve up an infinite vertical scrolling feed populated by content posted by the accounts the user follows. In my essay series on TikTok (in order, they are TikTok and the Sorting Hat, Seeing Like an Algorithm, and American Idle), I refer to this as approximating an interest graph using a social graph.
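
Stated as code, that interlacing is almost embarrassingly simple: the feed is just the posts from the accounts you follow, so the follow graph ends up doing double duty as an implicit interest model. A simplified sketch with invented types, not any app's actual implementation:

```python
from datetime import datetime
from typing import NamedTuple

class Post(NamedTuple):
    author_id: str
    created_at: datetime
    text: str

def follow_feed(user_id: str,
                follows: dict[str, set[str]],
                posts: list[Post]) -> list[Post]:
    """The classic Western feed architecture in one function: what you see is
    whatever the accounts you follow posted, newest first. Your interests are
    only ever inferred indirectly, via who you chose to follow."""
    followed = follows.get(user_id, set())
    candidates = [p for p in posts if p.author_id in followed]
    return sorted(candidates, key=lambda p: p.created_at, reverse=True)
```

Everything that follows, from the cold start problem to the eventual retreat into algorithmic re-ranking, falls out of that one architectural choice.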

You can see this time-tested design, for example, in Facebook, Twitter, and Instagram. It is particularly suited to mobile phones, which dominate internet usage today, and which offer a vertical viewport when held in portrait orientation, as they most often are.

We'll return, in a second, to whether this choice makes sense. For now, just note that this architecture obliges these apps to prioritize scaling the social graph. It's imperative to get users to follow people from the jump. Otherwise, by definition, their feeds will be empty.

This is the classic social media chicken-and-egg cold start problem. Every Silicon Valley PM has likely heard the stories about how Twitter and Facebook's critical keystone metrics were similar: get a user to follow some minimum number of accounts. Achieve that and those users turn into WAUs, or even better, DAUs. Users failing to follow enough accounts were the most likely to churn. Many legendary growth teams built their entire reputations inducing tens or hundreds of millions of users to follow as many other users as possible.
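
The keystone metric itself is simple to express; moving it was the hard part. Here's a sketch of the sort of activation check a growth team might track, with the follow threshold left as an assumption since the real numbers were never public:

```python
ACTIVATION_FOLLOW_THRESHOLD = 30  # assumption; the actual keystone numbers weren't public

def activation_rate(follow_counts_at_day_7: list[int]) -> float:
    """Share of a signup cohort that followed enough accounts in week one
    to have a feed worth returning to. Per the lore, cohorts below the
    threshold were the likeliest to churn."""
    if not follow_counts_at_day_7:
        return 0.0
    activated = sum(1 for n in follow_counts_at_day_7 if n >= ACTIVATION_FOLLOW_THRESHOLD)
    return activated / len(follow_counts_at_day_7)

print(activation_rate([2, 5, 31, 48, 0, 62]))  # 0.5
```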

But, again, this obligation derives entirely from the choice to build the feed directly off of the social graph. In TikTok and the Sorting Hat, I wrote:

But what if there was a way to build an interest graph for you without you having to follow anyone? What if you could skip the long and painstaking intermediate step of assembling a social graph and just jump directly to the interest graph? And what if that could be done really quickly and cheaply at scale, across millions of users? And what if the algorithm that pulled this off could also adjust to your evolving tastes in near real-time, without you having to actively tune it?


The problem with approximating an interest graph with a social graph is that social graphs have negative network effects that kick in at scale. Take a social network like Twitter: the one-way follow graph structure is well-suited to interest graph construction, but the problem is that you’re rarely interested in everything from any single person you follow. You may enjoy Gruber’s thoughts on Apple but not his Yankees tweets. Or my tweets on tech but not on film. And so on. You can try to use Twitter Lists, or mute or block certain people or topics, but it’s all a big hassle that few have the energy or will to tackle.


Almost all feeds end up vying with each other in the zero sum attention landscape, and as such, they all end up getting pulled into competing on the same axis of interest or entertainment. Head of Instagram Adam Mosseri recently announced a series of priorities for the app in the coming year, one of them being an increased focus on video. “People are looking to Instagram to be entertained, there’s stiff competition and there’s more to do,” Mosseri said. “We have to embrace that, and that means change.”

In my post Status as a Service, I noted that social networks tend to compete on three axes: social capital, entertainment, and utility. Focusing just on entertainment, the problem with building a content feed off of a person's social graph is that, to be blunt, we don't always find the people we know to be that entertaining. I love my friends and family. That doesn't mean I want to see them dancing the nae nae. Or vice versa. (EDITOR'S NOTE: It's not just people who know him. No one wants to see Eugene dance the nae nae.) Who we follow has a disproportionate effect on the relevance and quality of what we see on much of Western social media because the apps were designed that way.

At the same time, who follows us may be just as consequential. We tend to neglect that in our discussions of social experiences, perhaps because it's a decision over which users have even less control than who they choose to follow. Yet it shouldn't come as a surprise that what we are willing to post on social media depends a lot on who we believe might see it. Our followers are our implied audience.

To take the most famous example, Facebook's churn issues took root when its graph burgeoned to encompass everyone in one's life. As noted above, just because we are friends with someone doesn't mean we want to see everything they post about in our News Feed. In the other direction, having many more people from all spheres of our lives follow us created a massive context collapse. It wasn't just that everyone and their mother had joined Facebook, it was specifically that everyone's mother had joined Facebook. (There's some generalizable form of Groucho Marx's quip about refusing to join any club that would have him as a member. Namely, that most people don't want to belong to a club where they're the highest status member, because, by definition, the median status of the club's members drags down their own. That's not to say it can't be a stable configuration. Networks based more around utility, like WeChat, aren't driven as much by status dynamics. Not surprisingly, they are less focused on a singular feed.)

It's difficult, when you're starting out on a social network, to imagine that having more followers could be a bad thing. Yet many Twitter users complain after they surpass 20K, then 50K, then 100K followers or more. Suddenly, a lot of your hot takes attract equally hot pushback. Suddenly, it isn't so fun yeeting your ideas out into the ether. I know. Boo hoo on the smallest violin. But regardless of whether you think this is a first world problem, it's indicative of how phase shifts in the experience of social media are difficult to detect until long after they've occurred.

To put it even more strongly, graph design problems are particularly dangerous to social companies because they fall into that class of mistakes that are difficult to reverse. Jeff Bezos wrote, in one of his Amazon letters to shareholders, about two types of decisions.

Some decisions are consequential and irreversible or nearly irreversible – one-way doors – and these decisions must be made methodically, carefully, slowly, with great deliberation and consultation. If you walk through and don’t like what you see on the other side, you can’t get back to where you were before. We can call these Type 1 decisions. But most decisions aren’t like that – they are changeable, reversible – they’re two-way doors. If you’ve made a suboptimal Type 2 decision, you don’t have to live with the consequences for that long. You can reopen the door and go back through. Type 2 decisions can and should be made quickly by high judgment individuals or small groups.


Graph design problems are one-way mistakes in large part because users make them so. Most social media users don't unfollow people after following them. Much of this comes down to social conformity. It's awkward and uncomfortable to do so, especially if you'll run into them. Anytime I unfollow someone I might run into, I imagine them cornering me like Larry David at the water cooler, eyebrows raised, with that signature tone of voice he mastered on Curb Your Enthusiasm, an equal mix of indignation at being slighted and glee at having caught you in an act of hypocrisy. "So, Eugene, I notice you unfollowed me. Pret-tay, pret-tay interesting."

If people tend to add to their social graphs more than they prune them, the social graph you help your users design should be treated as a one-way decision. And as Bezos noted, one-way decisions should be treated with care. (Once Twitter started posting tweets to my timeline simply because people I followed had liked them, even if they were tweets from people I didn't follow myself, I started getting very confused. If you're angry I don't follow you, it may be that I think I already follow you.)

Many social apps, because of how they're configured, undergo phase shifts as the graph scales. The user experience at the start, when you have few friends and followers, changes as those figures rise. At first, it's more lively with more people. Now the party's getting started. But beyond some scale, negative network effects creep in. And if you don't change how you handle it, before you know it, you find yourself pronouncing that you're taking a break from social media for your mental health.

Not only do users not notice it happening, but, like the proverbial slow-boiling frog, the people operating the apps may also be oblivious to the phase shifts until it's too late. Social graphs are path dependent.

A classic example, though I don't know if this still persists, is how Pinterest skewed heavily towards female users at launch, losing lots of potential male users in the process. This was a function of building their feed off of each user's social graph. Men would see a flood of pins from the women in their network, since women were some of the strongest early adopters of pinning. This created a reflexive loop in which Pinterest was perceived as a female-centric social app, which chased off some male users, thus becoming a self-fulfilling stereotype. An alternate content selection heuristic for the feed could have corrected for this skew.
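
That alternate heuristic could be as simple as blending in candidates selected by declared or inferred interests rather than sourcing everything from the follow graph. A hypothetical sketch of such a blend; the weighting is illustrative, not anything Pinterest actually shipped:

```python
import random
from typing import Optional

def blended_candidates(graph_posts: list[str],
                       interest_posts: list[str],
                       k: int,
                       interest_weight: float = 0.5,
                       seed: Optional[int] = None) -> list[str]:
    """Pick k feed candidates, drawing roughly interest_weight of them from an
    interest-based pool instead of purely from the follow graph. For a
    hypothetical early Pinterest user whose network skews away from his
    interests, this caps how much of the feed the graph alone dictates."""
    rng = random.Random(seed)
    n_interest = min(len(interest_posts), round(k * interest_weight))
    n_graph = min(len(graph_posts), k - n_interest)
    picks = rng.sample(interest_posts, n_interest) + rng.sample(graph_posts, n_graph)
    rng.shuffle(picks)
    return picks
```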

But again, this is a problem unique to Western social media design. In conflating the social graph and the interest graph, we've introduced a content matching problem that needn't exist. I don't get upset that my friends don't follow me on TikTok or Reddit or what I think of as purer interest and/or entertainment networks. It's very clear in those products that each person should follow their own interests.

The way China has built out its social infrastructure is, in at least this respect, more logical. WeChat owns the dominant social graph, and it acts as an underlying social infrastructure to the rest of the Chinese internet. (Though not always a reliable one. If you're a WeChat competitor in any category, they may block links to your apps, as they've done with Douyin and Taobao in the past. This is always the danger of a private company owning the dominant social graph, and where regulators need to step in.) Rather than duplicate the social graph of everyone, which WeChat owns, other apps can focus on what they do best, which might or might not require an alternate graph.

Western social apps also rely much more heavily on advertising revenue. The lifeblood of their income statement is traffic to the feed. This means feed relevance is paramount. Anywhere one's social graph drifts from one's interests, boring content invades the feed. The signal-to-noise ratio shifts in the wrong direction. Instead of pruning and tuning their social graphs to fix their feeds, most users do the next easiest thing: they churn.

As a product manager or designer on a social app, you might object. The user chooses who to follow, and other users choose to follow them. It's out of your control. But this ignores all the ways in which apps put their hands on the scale to nudge each user towards a specific type of graph.

Take initial user sign-up flows. Every week, I seem to encounter this modal dialog on a new social app Testflight:

[Image: the standard iOS permission dialog asking for access to the user's contacts]

What I look for is where this request appears and how the app frames it. Most times, users are asked to grant access to their contact book and to follow any matching users (or even worse, to spam their contact list with invites) before they have any idea of what the app is even about. In pushing people to duplicate their contact book, these apps are explicitly choosing to build off of people's real-world social graphs.

It's not surprising that social apps prioritize this permission as a critical one in the sign-up flow. The iOS contact book is now the only "open-source" social graph that a new app can work from to jumpstart its own. In The Network's the Thing, I argued that the network itself provides the lion's share of the value for a social network, and that arguments about what types of content to allow in feeds, or how that content was formatted, mattered much less. For a brief window, massive social graphs like Facebook or Twitter allowed third-party apps to tap into those graphs, even to duplicate them wholesale.

Instagram famously got a nice head start on building out their own social graph by siphoning off of Twitter's. It didn't take long for those companies to realize that they were arming their future competition. They clamped down on graph access hard. You can still offer Facebook or Twitter auth as an option for your app, but if you want a social graph of your own, the mobile contact book is the easiest to tap into nowadays.

Another way apps really influence the shape of their social graph is with suggested follow lists. These often appear in the first-time user walkthrough, interspersed in the feed, and sometimes alongside the feed.

Early Twitter users fortunate enough to be on the first versions of Twitter's suggested follow list today have hundreds of thousands or even millions of followers because they were paraded in front of every new user.

It was a massive social capital subsidy, but I find a lot of selections on that list puzzling. A few years ago, a friend set up a Twitter account for the first time and showed me the list of accounts Twitter suggested to them during sign up. It included Donald Trump. Which, regardless of your political leanings, is a dubious choice. Let's just shove every new user in the direction of politics Twitter (I'd be skeptical of a suggestion of Biden, too), one of the worst Twitters there is. Cool, cool.

For some people, like those who frequent fight clubs on weekends, politics Twitter might be the perfect dopamine fix, but when a user is signing up for the first time and Twitter knows nothing about them, that's a bizarre gamble to take.

For years, people marveled at Facebook's Suggested Friends widget. Wow, how did they know that I knew that person, yes, of course I'll friend them. And yet, as noted earlier, that may have been a graph design mistake given the way the News Feed was being constructed.

In the other direction, it's also important to help users acquire the right types of followers. Cults are held together by a bi-directional influence. Cult leaders use their charisma to grow a following, then those followers shape the cult leader in return. It's a symbiotic feedback loop, though not always a healthy one.

Besides being hard to reverse, graph design errors are also pernicious because they tend to manifest only after an app has achieved some level of product-market fit. By that point, not only is it difficult to undo the social graph that has crystallized, to do so would violate the expectations of the users who've embraced the app as it is. It's a double bind: you're damned if you do and damned if you don't. Apps that achieve some level of product-market fit, even if it's a local maximum, require real courage to revert.

This doesn't stop social apps from trying to fix the problem. Reduced traffic to the feed is existential for many social apps. Instead of fixing the root problem of the graph design, however, most apps opt instead to patch the problem. The most popular method is to switch to an algorithmic, rather than chronological, feed. The algorithm is tasked with filtering the content from the accounts you've chosen to follow. It tries to restore signal over noise. To determine what to keep and what to toss, feed algorithms look at a variety of signals, but at a basic level they are all trying to guess what will engage you.
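To make the mechanics concrete, here is a minimal sketch of that kind of engagement-based ranking pass. Everything in it, the signal names, the weights, the decay constant, is an illustrative assumption on my part, not any particular app's actual model.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    p_like: float        # predicted probability the user likes the post
    p_comment: float     # predicted probability the user comments on it
    p_hide: float        # predicted probability the user hides or mutes it
    age_hours: float     # how long ago the post was published

def score(c: Candidate) -> float:
    # Hypothetical weights: reward predicted engagement, penalize predicted
    # annoyance, and decay older posts. Real ranking models learn this jointly.
    engagement = 1.0 * c.p_like + 3.0 * c.p_comment
    annoyance = 5.0 * c.p_hide
    freshness = 0.97 ** c.age_hours
    return (engagement - annoyance) * freshness

def rank_feed(candidates, k=50):
    # Note the constraint: the ranker can only reorder what the follow graph
    # supplies. It never repairs the graph itself.
    return sorted(candidates, key=score, reverse=True)[:k]
```

The key limitation is visible right in the function signature: `rank_feed` only ever sees candidates the graph produced.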

Still, this is a band-aid on an upstream error. Look at Facebook oscillating every few years between news content and more personal content from people you know. Until they acknowledge that the root problem lies in sourcing stories for News Feed from their monolithic social graph, they'll never truly solve their churn. And yet, to walk away from this fundamental architecture of their News Feed would be the boldest decision they've made in their long history (ironically, shifting to the News Feed itself was perhaps their previous boldest decision). Not just because almost all their revenue comes from News Feed as it works now, but also since assembling a monolithic graph might be their strongest architectural defense against government antitrust action.

Twitter, unlike Facebook with its predominant two-way friending, is built on a graph assembled from one-way follows. In theory, this should reduce its exposure to graph design problems. However, it suffers from the same flaw as any interest graph built on a social graph. You may share some of a person's interests but not others. Twitter favors pure play accounts that focus on one niche, but most people don't opt to operate multiple Twitter accounts to cleanly separate the topics they like to tweet about.

One of my favorite heuristics for spotting flaws in a system is to look at those trying to break it. Advanced social media users have long tried to hack their way around graph design problems. Users who create finstas or alt Twitter accounts are doing so, in part, to create alternative graphs more suited to particular purposes. One can imagine alternative social architectures that wouldn't require users to create multiple accounts to implement these tactics. But in this world where each social media account can only be associated with one identity, users are locked into a single graph per account.

One clever way an app might help solve the graph design problem is by removing the burden of unfollowing accounts that no longer interest users. Just as our real-world social graphs change throughout our lives, so could our online ones. The set of friends we have in kindergarten tends not to be the same set we have in grade school, high school, college, and beyond.

A higher fidelity social product would automatically nip and tuck our social graphs over time as it observed our interaction patterns. Imagine Twitter or Instagram just silently unfollowing accounts you haven't engaged with in a while, accounts that have gone dormant, and so on. Twitter and Facebook offer methods like muting to reduce what we see from people without unfriending or unfollowing them, but it's a lot of work, and frankly I feel like a coward using any of those.
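A minimal sketch of what that silent pruning could look like, with made-up thresholds and a toy data model of my own; nothing here reflects how any real app implements it:

```python
from datetime import datetime, timedelta

# Hypothetical thresholds for demoting a follow.
ENGAGEMENT_WINDOW = timedelta(days=180)   # no interaction from the user in this long
DORMANCY_WINDOW = timedelta(days=365)     # the followed account hasn't posted in this long

def prune_follows(follows, last_engaged, last_posted, now=None):
    """follows: list of account ids.
    last_engaged / last_posted: dicts of account id -> datetime (or missing)."""
    now = now or datetime.utcnow()
    kept, demoted = [], []
    for account in follows:
        engaged = last_engaged.get(account)
        posted = last_posted.get(account)
        dormant = posted is None or now - posted > DORMANCY_WINDOW
        stale = engaged is None or now - engaged > ENGAGEMENT_WINDOW
        if dormant or stale:
            demoted.append(account)   # e.g. soft-mute, or surface an "unfollow?" suggestion
        else:
            kept.append(account)
    return kept, demoted
```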

Messaging apps, by virtue of focusing on direct communication between two people or among groups, naturally achieve this by pushing the threads with the latest messages to the top of their application windows. People who fall out of our lives just fall off the bottom of the screen. LIFO has always been a reasonably effective general purpose relevance heuristic.
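As a sketch (the thread structure here is my own toy representation), the entire "algorithm" amounts to a sort by last activity:

```python
def order_threads(threads):
    # threads: list of dicts like {"name": "Mom", "last_message_at": 1720000000}
    # Most recently active conversation first; lapsed ones quietly sink away.
    return sorted(threads, key=lambda t: t["last_message_at"], reverse=True)
```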

Another possible solution to the graph design problem is to decouple a user's content feed from their social graph. In my three pieces on TikTok, I wrote about how that app's architecture is fundamentally different from that of most Western social media. TikTok doesn't need you to follow any accounts to construct a relevant feed for you. Instead, it does two things.

First, it tries to understand what interests you by observing how you react to everything it shows you. It tries to learn your taste, and it does a damn good job of it. TikTok is an interest graph built as an interest graph.

Secondly, TikTok runs every candidate video through a two-stage screening process. The first stage is one of the most terrifying, vicious quality filters known to man: a panel of a few hundred largely Gen Z users. (Okay, that's not quite right. Anyone can end up in the test audience for a video. It just happens that TikTok's user base skews younger, so most of the people on that panel will be Gen Z. Also, it's a known fact that a pack of Gen Z users muttering "OK Boomer" is the most terrifying pack hunter in the animal kingdom after hyenas and murder hornets.) If those test viewers don't show any interest, the video is yeeted into the dustbin of TikTok, never to be seen again unless someone seeks it out directly on the creator's profile.

The second stage is the algorithm itself, which decides whether that video would interest each user based on their taste profile. Even if you don't follow the creator of a video, if TikTok's algorithm thinks you'll enjoy it, you'll see it in your For You Page.
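Here's a toy version of that two-stage funnel. The panel size, the thresholds, and the dot-product taste model are all assumptions for illustration; Bytedance hasn't published the real thing.

```python
import random

def seed_panel_test(video, test_users, panel_size=300, min_hook_rate=0.25):
    """Stage one: show the video to a small, mostly-random test audience
    and check whether enough of them watch it through (hypothetical threshold)."""
    panel = random.sample(test_users, min(panel_size, len(test_users)))
    hooked = sum(1 for user in panel if watches_through(user, video))
    return hooked / len(panel) >= min_hook_rate

def watches_through(user, video):
    # Stand-in for real engagement logging: here, a user "watches through"
    # if the video's topics overlap their taste vector strongly enough.
    return taste_match(user, video) > 0.5

def taste_match(user, video):
    """Stage two signal: dot product between the user's learned taste vector
    and the video's topic vector (both plain dicts of topic -> weight)."""
    return sum(user["taste"].get(topic, 0.0) * w for topic, w in video["topics"].items())

def for_you_candidates(user, videos, test_users):
    # Videos that survive the panel get ranked per-user by taste match.
    survivors = [v for v in videos if seed_panel_test(v, test_users)]
    return sorted(survivors, key=lambda v: taste_match(user, v), reverse=True)
```

Note what's absent from this sketch: a follow graph. The funnel only needs the video, a seed audience, and a taste profile learned from watching behavior.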

Recently, Instagram announced it would start showing its users posts from accounts they don't follow. In many ways, this is as close to a concession as we'll see from Instagram to the superiority of TikTok's architecture for pure entertainment.

Some apps use some sort of topic or content picker. Tell us what music or film genres you like. What news topics interest you. Then they try to use machine learning and signals from their entire user base to serve you a relevant feed.

The effectiveness of this approach varies widely. Why does a playlist generated off a single song on Spotify work so well and yet its podcast recommendations feel generic? Why, after spending years and millions of dollars on research, including the fabled Netflix prize, do Netflix's recommendations still feel generic, and why doesn't it really matter? Why are book recommendations on Amazon solid while article recommendations on news sites feel random? It would take an entire separate piece just to dig into why some content recommendations work so much better than others, so complex is the topic.

In this piece focused on graph design, what matters is that things like content pickers explicitly veer away from the social graph. Twitter allowing you to follow topics in addition to accounts can be seen as one attempt to move a half step towards being a pure interest graph.

It's not that apps can't be more fun when social, or that people don't share some overlapping interests with people they know. We all care about both our interests and the people in our lives. When they overlap, even better. It's just that after more than a decade of living with our current social apps, we have ample case studies illustrating the downsides of assuming the two are perfectly correlated.

A secondary consideration is what type of interaction an application is building towards in the long run. Is it about one-to-one interactions or broadcasting to large audiences? What percentage of your users do you want creating as opposed to just consuming? Is your app best served by a graph of people who know each other in real life or by a graph that connects strangers who share common interests? Or some mix of both? Is your app for people from the same company or organization? Will the interactions cut across cultures and national borders, or is it best if various geographies are segregated into their own graphs?

The next generation of social product teams can and should be more proactive about thinking through what type of social graph will offer the best user experience in the long run.

I'm not certain, but based on the histories I've heard, it doesn't seem that many social networks built their graphs with a particular design in mind. This makes graph design an exercise with more open questions than answers. In some ways, Facebook being built for just Harvard students in the beginning may have imposed some helpful graph design constraints by chance.

Unlike some types of design, graph design doesn't lend itself easily to prototyping. Social networks are at least in part complex adaptive systems, making it difficult to prototype what types of interactions will occur if and when the graph achieves scale.

But whereas traditional complex adaptive systems are so complex that predictions are futile, social networks are different in two ways. One is that human nature is consistent. The second is that we have numerous super scaled social networks to study. They're massive real world test cases for what happens when you make certain choices in graph design.

They also exist in multiple markets around the world. This makes it possible to study distinct path dependencies, especially when comparing across cultures and market conditions as unique as China versus the U.S. Despite all the variations in context, issues like trolling seem universal, suggesting that some potent underlying mechanisms are at work.

Once you tug on the threads surrounding graph design, you can burrow deep down many rabbit holes. If the people connected are going to be complete strangers, how will you establish sufficient trust (e.g. through a reputation system)? If the trunk of the app is a content feed, does that feed have to draw exclusively from stories posted by accounts followed by the user? Does it have to pluck candidates from those accounts at all? Is a feed even the right architecture for healthy interactions among your users?

Whose job is it to consider the problem of graph design? And when? To take one example, growth team strategies should be informed by your graph design. Growth shouldn't be treated as a rogue team whose only job is to extend the graph in every possible direction. They need to know what both good and harmful graph growth looks like so they can craft strategies more aligned with the long term vision.

Recently, TikTok started pushing me to connect more with people I know IRL. I've gotten prompts asking me to follow people I may know, and now when I share videos with people, I often get a notification telling me they've watched the video I've shared. Often these notifications are the only way I know they even have a TikTok account and what their username is.

To date, I've enjoyed TikTok without really following any people I know IRL. Perhaps TikTok is trying to make sharing of its videos endogenous to the app itself. But by this point in my piece, it should be obvious that I consider any changes to the graph of any social product to be moves that should be treated with greater caution. Most people I know don't make any TikToks (I know, I know, this is how you can tell I'm old), so following them won't impact my FYP much. For a younger cohort, where users make TikToks at a much higher rate, following each other may make more sense.

On the other hand, any app with a default public graph structure plays into the innate human impulse to judge. Wait, this person I know follows which accounts on TikTok?! Tsk tsk.

The answer to whether TikTok should push its users to replicate their real world social graphs isn't cut and dried. I bring it up only to illustrate that graph design is a discipline that requires deeper consideration. It could use, as its name implies, some design.


The term "follow" is fitting. Who we follow can become a self-fulfilling prophecy. First you build your graph, then your graph builds you. Plenty of research shows that humans tend to oscillate at the same frequency as the people they spend the most time with. Silicon Valley sage Naval Ravikant popularized the 5 Chimp Theory from zoology, which says you can deduce the mood and behavior of any single chimp by observing by which five chimps they hang out with the most.

The social media version of this is that we can predict how any user will behave on an app by the people they follow, the people who follow them, and the "space" they're forced to interact with those people in, be it a Facebook News Feed or Twitter Timeline or other architecture. We all know people who are the worst versions of themselves on social media. The fundamental attribution error predicts we'll think they're terrible by nature when they may just be responding to their environment and incentives.

Humans aren't chimps; we tend to juggle membership in dozens of different social groups at a time. Reed's Law predicts that the utility of networks scales exponentially because not only can each person in a network connect with every other node, but the number of possible subgroups is 2^N - N - 1, where N is the number of people in that network.
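To see how quickly that formula runs away, here's the arithmetic for a few network sizes:

```python
def reed_subgroups(n: int) -> int:
    # Number of possible subgroups of size two or more in a network of n people.
    return 2 ** n - n - 1

for n in (5, 10, 20, 50):
    print(n, reed_subgroups(n))
# 5 -> 26
# 10 -> 1,013
# 20 -> 1,048,555
# 50 -> 1,125,899,906,842,573
```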

But whether a social app allows such subgroups to form easily is a design problem. Monolithic feeds tend to force people into larger subgroups than is optimal for healthy interaction. While every user sees a different Twitter Timeline or Facebook News Feed, the illusion is still of a large public commons. Because anyone might see something you post, you should operate as if everyone will.

Messaging apps, in contrast, tend to allow users themselves to form the subgroups most relevant to them. Facebook Groups is a more flexible architecture than News Feed. Humans contain multitudes, and social apps should flex to their various communication privacy needs.

It's no surprise that many tech companies install Slack and then suddenly find themselves, shortly thereafter, dealing with employee uprisings. When you rewire the communications topology of any group, you alter the dynamic among the members. Slack's public channels act as public squares within companies, exposing more employees to each other's thoughts. This can lead to an employee finding others who share what they thought were minority opinions, like reservations about specific company policies. We're only now seeing how many companies operated in relative peace in the past in large part because of the privacy inherent in e-mail as a communications technology.

In many ways, graph design was always bound to matter more for Western social media in 2021 than it did in the early days of social media. Back then, the public social graph was sparse to non-existent. For the most part, our graphs were limited to the email addresses we knew and the occasional username of someone in our favorite newsgroups. It's hard to explain to a generation that grew up with the internet what a secret thrill every new connection online was back then, or how hard it was to track down someone online if all you knew was their name.

Today, we have more than enough ways to connect to just about anybody in the world. Adding someone to my address book feels almost unnecessary when I can likely reach any person with a smartphone and internet access any of a dozen ways.

In a world where finding someone online is a commodity (one sign that it is: messaging apps, while massive, are for the most part lousy businesses that generate little revenue, which is the financial profile you'd expect of a commodity business), the niftier trick is connecting to the right people in the right context. I have over a dozen messaging apps installed on my phone, and they all look roughly the same. While I've discussed graph design largely defensively here, as a way to avoid mistakes, the positive view is to use graph design offensively. How do you craft a unique graph whose very structure encodes valuable, and more importantly, unique intelligence?

LinkedIn may be the social app Silicon Valley product people like to grouse about the most, but while many of the complaints are valid, its sizable market cap is testament to the value of its graph. It turns out if you map out the professional graph, not just today but also across long temporal and organizational dimensions, recruiters will pay a lot of money to traverse it.

For all the debate over whether our current social networks are good for society, I prefer to focus on the potential we've yet to realize. We have the miracle of Wikipedia, yes, but aren't there more types of mass scale collaboration to be enabled?

Every other week or so, I am introduced to someone amazing, or an account I've never heard of before that blows me away. That social networks themselves aren't facilitating these introductions leaves me less sad than hopeful. In a decade, today's social graphs will look like blunt instruments, so primitive were their configurations.

We'll also look back over that decade, see how many more amazing people we finally met at the right time and the right context, and realize that indeed, the real treasure was the friends we made along the way.

TikTok and the Sorting Hat

I often describe myself as a cultural determinist, more as a way to differentiate myself from people with other dominant worldviews, though I am not a strict adherent. It’s more that in many situations when people ascribe causal power to something other than culture, I’m immediately suspicious.

The 2010s were a fascinating time to follow the consumer tech industry in China. Though I left Hulu in 2011, I still kept in touch with a lot of the team from our satellite Hulu Beijing office, many of whom scattered out to various Chinese tech companies throughout the past decade. On my last visit to the Hulu Beijing office in 2011, I was skeptical any of the new tech companies out of China would ever crack the U.S. market.

It wasn't just that the U.S. had strong incumbents, or that the Chinese tech companies were still in their infancy. My default hypothesis was that what I call the veil of cultural ignorance was too impenetrable a barrier, that companies from non-WEIRD countries (Joseph Henrich's shorthand for Western, Educated, Industrialized, Rich, and Democratic) would struggle to ship into WEIRD cultures. I was even skeptical of the reverse, of U.S. companies competing in China or India. The further the cultural distance between two countries, the more challenging it would be for companies in one to compete in the other. The path towards overcoming that seemed to lie in hiring a local leadership team, or sending someone over from the U.S. who understood the culture of that country inside-out.

For the most part, that has held true. Chinese companies have struggled to make real inroads in the U.S. WeChat tried but only really managed to capture Chinese-Americans who used the app to communicate with friends, family, and business colleagues in China.

In the other direction, the U.S. hasn’t made a huge dent in China. Obviously, the Great Firewall played a huge role in keeping a lot of U.S. companies out of the Chinese market, but in the few cases where a U.S. company got a crack at the Chinese market, like Uber China, the results were mixed.

For this reason, I’ve been fascinated with TikTok. Here in 2020, TikTok is, for many, including myself, the most entertaining short video app going. The U.S. government is considering banning the app as a national security risk, and while that’s the topic du jour for just about everyone right now, I’m much more interested in tracing how it got a foothold in markets outside of China, especially the U.S. with its powerful incumbents.

They say you learn the most from failure, and in the same way, I learn the most about my mental models from the exceptions. How did an app designed by two guys in Shanghai manage to run circles around U.S. video apps from YouTube to Facebook to Instagram to Snapchat, becoming the most fertile source of meme origination, mutation, and dissemination in a culture so different from the one in which it was built?

The answer, I believe, has significant implications for the future of cross-border tech competition, as well as for understanding how product developers achieve product-market fit. The rise of TikTok updated my thinking. It turns out that in some categories, a machine learning algorithm sufficiently responsive and accurate can pierce the veil of cultural ignorance. Today, sometimes culture can be abstracted.


TikTok's story begins in 2014, in Shanghai. Alex Zhu and Luyu “Louis” Yang had launched an educational short-form video app that hadn't gotten any traction. They decided to pivot to lip-synch music videos, launching Musical.ly in the U.S. and China. Ironically, the app got more traction across the Pacific, so they killed their efforts in their home country of China and focused on the American market.

The early user base consisted mostly of American teenage girls. Finally, an app offered users the chance to lip synch to the official version of popular songs and have those videos distributed to an audience for social feedback.

That the app got any traction at all was progress. However, it presented Alex, Louis, and their team with a problem. American teen girls were not exactly an audience Alex and Louis really understood. (To be fair, most American parents would argue they don't understand their teenage daughters either.)

During this era in which the Chinese and U.S. tech scenes have overlapped, the Chinese market has been largely impenetrable to U.S. tech companies because of the Great Firewall, both the software instance and the outright bans from the CCP. But in the reverse direction, America has been almost as impenetrable to Chinese companies because of what might be thought of as America's cultural firewall. Outside of DJI in drones (I'd argue one reason DJI had success in America was that drone control interfaces borrow heavily from standard flight control interfaces and are not culturally specific, so DJI could lean on its formidable hardware prowess), I can't think of any Chinese app making real inroads in the U.S. prior to Musical.ly. To build on its early traction, Musical.ly would have to overcome this cultural barrier.

It’s been said that if you ask your customers what they want, they’ll ask for a faster horse (attributed to Henry Ford, though that may not be true). Frankly, that’s always been half horses***, and not just because horses are involved. First of all, what if your customers are horse jockeys?

Secondly, while you can’t listen to your customers exclusively, paying attention to them is a dependable way to build a solid SaaS business, and even in the consumer space it provides useful signal. As I’ve written about before, customers may tell you they want a faster horse, and what you should hear is not that you should be injecting your horses with steroids but that your customers find their current mode of transportation, the aforementioned horse, to be too slow a means of getting around.

Alex and Louis listened to Musical.ly’s early adopters. The app made feedback channels easy to find, and the American teenage girls using the app every day were more than willing to speak up about what they wanted to ease their video creation. They sent a ton of product requests, helping to inform a product roadmap for the Musical.ly team. That, combined with some clever growth hacks, like allowing watermarked videos to easily be downloaded and distributed via other networks like YouTube, Facebook, and Instagram, helped them achieve hockey-stick inflection among their target market.

Still, Musical.ly ran into its invisible asymptote eventually. There are only so many teenage girls in the U.S. When they saturated that market, usage and growth flatlined. It was then that a suitor they had rebuffed previously, the Chinese technology company Bytedance, suddenly looked more attractive, like Professor Bhaer to Jo March at the end of Little Women. In a bit of dramatic irony, Bytedance had cloned Musical.ly in China with an app called Douyin, one that had taken off in China, and now Bytedance was buying the app that inspired it, Musical.ly, an app conceived and built in China but that had failed in China and instead gotten traction in the U.S.

After the purchase, Bytedance rebranded Musical.ly as TikTok. Still, if that’s all they had done, it’s not clear why the app would’ve broken out of its stalled growth to the stunning extent it has under its new owner. After all, Bytedance paid just $1B for an app that’s rumored to sell now, if the U.S. government approves the transaction, for anywhere from $30 to $70B.

Bytedance did two things in particular to jumpstart TikTok’s growth.

First, it opened up its wallet and started spending on user acquisition in the U.S. the way wealthy Chinese used to spend on American real estate (no, I’m not still bitter at all the Chinese all-cash offers that trounced me repeatedly when condo-hunting six years ago). TikTok was rumored to have been spending a staggering eight or nine figures a month on advertising.

The ubiquity of TikTok ads lent the theory credence. I saw TikTok ads everywhere: on YouTube, Instagram, Twitter, Facebook, and in mobile games. (TikTok ads are bizarre. The video ads I see for the app in mobile games convey nothing about what the app is or does. One ad I've seen dozens of times has an old lady doing lunges in her living room; another has a kid blow-drying his hair, and as he does, his hair changes colors. I feel like the ads could do a better job of selling the app, but what do I know?) If Bytedance could have purchased ads on the back of my eyelids at sub-$20 CPMs, I don't doubt they would have done so.

It didn’t look like a wise investment at first. Rumors abounded that the 30-day retention of all those new users poured into the top of its funnel was sub 10%. They seemed to be lighting ad dollars on fire.

Ultimately, the ROI on that spend would turn the corner, but only because of the second element of their assault on the US market, the most important piece of technology Bytedance introduced to TikTok: the updated For You Page feed algorithm.

Bytedance has an absurd proportion of their software engineers focused on their algorithms, more than half at last check. It is known as the algorithm company, first for its breakout algorithmic “news” app Toutiao, then for its Musical.ly clone Douyin, and now for TikTok.

Prior to TikTok, I would've said YouTube had the strongest exploit algorithm in video. (The exploit versus explore conundrum is a classic of algorithmic design, usually mentioned in relation to the multi-armed bandit problem. For the purposes of this discussion, think of it simply as the problem of choosing which videos to show you: an exploit algorithm gives you more of what you like, while an explore algorithm tries to broaden your exposure beyond what you've already shown you like. YouTube is often described as an exploit algorithm because it tends to push more and more of what you like, and then before you know it, you're looking at some alt-right video that's trying to redpill you.) But in comparison to TikTok, YouTube's algorithm feels primitive; the top creators on YouTube figured out long ago how to game its heavy dependence on click-through rates and watch time, one reason so many YouTube videos keep getting longer, much to my dismay.
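For readers who want the exploit-versus-explore trade-off in code, here's a toy epsilon-greedy policy over video topics. It's a textbook bandit heuristic, not a claim about how YouTube or TikTok actually implement their feeds.

```python
import random

def epsilon_greedy_pick(avg_watch_time, epsilon=0.1):
    """avg_watch_time: dict of topic -> average watch time observed so far.
    With probability epsilon, explore a random topic; otherwise exploit the
    best-performing one. A pure exploit algorithm is this with epsilon = 0."""
    if random.random() < epsilon:
        return random.choice(list(avg_watch_time))      # explore
    return max(avg_watch_time, key=avg_watch_time.get)  # exploit
```

Dial epsilon toward zero and you get the rabbit-hole behavior described above; dial it up and the feed starts to feel random.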

Before Bytedance bought Musical.ly and rebranded it TikTok, its Musical.ly clone called Douyin was already a sensation in the Chinese market thanks in large part to its effective algorithm. A few years ago, on a visit to Beijing, I caught up with a bunch of former colleagues from Hulu Beijing, and all of them showed me their Douyin feeds. They described the app as frighteningly addictive and the algorithm as eerily perceptive. More than one of them said they had to delete the app off their phone for months at a time because they were losing an hour or two every night just lying in bed watching videos.

That same trip, I had coffee with an ex-Hulu developer who was now a senior exec in the Bytedance engineering organization. Of course, he was tight-lipped about how their algorithm worked, but the scale of their infrastructure dedicated to their algorithms was clear. On my way in and out of this office, just one of several Bytedance spaces all across the city, I gawked at hundreds of workers sitting side by side in row after row in the open floorplan. It resembled what I'd seen at tech giants like Facebook in the U.S., but even denser. (The mood was giddy. I could tell he was doing well. He took me and my friends to a Luckin Coffee in their office basement and told us to order drinks off an app on his phone. I reached in my pocket for some RMB to pay for the drinks and he put his hand on my arm to stop me. “Don’t worry, I can afford this,” he said, laughing. He didn't mean it in a boasting manner; he seemed almost sheepish about how well they were doing. Afterwards, as we waited outside the office in their parking lot, he walked past and asked me if I needed a ride. No, I said, I'd be taking the subway. A Tesla Model X pulled up, the valet hopped out, and he jumped in and drove off.)

It’s rumored that Bytedance examines more features of videos than other companies. If you like a video featuring video game captures, that is noted. If you like videos featuring puppies, that is noted. Every Douyin feed I examined was distinctive. My friends all noted that after spending only a short amount of time in the app, it had locked onto their palate.

That, more than anything else, was the critical upgrade Bytedance applied to Musical.ly to turn it into TikTok. Friends at Bytedance claimed, with some pride, that after they plugged Musical.ly, now TikTok, into Bytedance’s back-end algorithm, they doubled the time spent in the app. I was skeptical until I asked some friends who had some data on the before and after. The step change in the graph was anything but subtle.

At the time Musical.ly got renamed TikTok, it was still dominated by teen girls doing lip synch videos. Many U.S. teens at the time described TikTok as “cringey,” usually a kiss of death for networks looking to expand among youths, fickle as they are about what’s cool. Scrolling the app at the time felt like eavesdropping on the theater kids clique from high school. Entertaining, but hardly a mainstream entertainment staple.

That's where the one-two combination of Bytedance's enormous marketing spend and the power of TikTok's algorithm came to the rescue. To help a network break out from its early adopter group, you need both to bring lots of new people and subcultures into the app (that's where the massive marketing spend helps) and to give these disparate groups ways to 1) find each other quickly and 2) branch off into their own spaces.

More than any other feed algorithm I can recall, Bytedance's short video algorithm fulfilled these two requirements. It is a rapid, hyper-efficient matchmaker. Merely by watching some videos, and without having to follow or friend anyone, you can quickly train TikTok on what you like. In the two-sided entertainment network that is TikTok, the algorithm acts as a market maker, connecting videos with the audiences they're destined to delight, and it does so without an explicit follower graph.

Just as importantly, by personalizing everyone’s FYP feeds, TikTok helped to keep these distinct subcultures, with their different tastes, separated. One person’s cringe is another person’s pleasure, but figuring out which is which is no small feat.

TikTok's algorithm is the Sorting Hat from the Harry Potter universe. Just as that magical hat sorts students at Hogwarts into the Gryffindor, Hufflepuff, Ravenclaw, and Slytherin houses, TikTok's algorithm sorts its users into dozens and dozens of subcultures. (The Sorting Hat is perhaps the most curious plot device in the Harry Potter universe. Is it a metaphor for genetic determinism? Did Draco have any hope of not being a Slytherin? By sorting Draco into that house, did it shape his destiny? Is the hat a metaphor for the U.S. college admissions system, with all its known biases? Is Harry Potter, sorted into Gryffindor, a legacy admit?) No two FYP feeds are alike.

For all the naive and idealistic dreams of the so-called “marketplace of ideas,” the first generation of large social networks has proven mostly unprepared and ill-equipped to deal with the resulting culture wars. Until they have some real substantial ideas and incentives to take on the costly task of mediating between strangers who disagree with each other, they’re better off sorting those people apart. The only types of people who enjoy being thrown into a gladiatorial online arena together with those they disagree with seem to be trolls, who benefit asymmetrically from the resultant violence.

Consider Twitter's content moderation problems. How much of that results from throwing liberals and conservatives together into the same timeline? Twitter employees speak often about wanting to improve public discourse, but they'd be much better off (and society, too) keeping the Slytherins and Gryffindors apart until they have some real substantive ideas to solve the problem of low-trust conversation. (The same can be said of NextDoor and its problem of racist reporting of minorities just walking down the sidewalk. They'd be better off just removing that feature. At some point, NextDoor needs to face the fact that they aren't going to solve racism. Tweak that feature all you want, put up all the hoops for users to jump through to file such a report, but adverse selection ensures that those most motivated to jump through them are the racist ones.)

After some time, new subcultures did indeed emerge on TikTok. No longer was it just teenage girls lip-synching. There are so many subcultures on TikTok I can barely track them because I only ever see a portion of them in my personalized FYP. This broadened TikTok’s appeal and total addressable market. Douyin had followed that path in China, so Bytedance at least had some precedent for committing to such an expensive bet, but I wasn’t certain if it would work in the U.S., a much more competitive media and entertainment market.

Within a larger social network, even subcultures need some minimum viable scale, and though Bytedance paid dearly to fill the top of the funnel, its algorithm eventually helped assemble many subcultures surpassing that minimum viable scale. More notably, it did so with amazing speed.

Think of how most other social networks have scaled. The usual path is organic. Users are encouraged to follow and friend each other to assemble their own graph one connection at a time. The challenge is that it's almost always a slow build, and you have to provide some reason for people to hang around while they build that graph, often encapsulated by the aphorism “come for the tool, stay for the network.” Today, it's not as easy to build the “tool” part when so much of that landscape has already been mined and when scaled networks have learned to copy any tool achieving any level of traction. (In the West, Facebook is the master of the fast follow. They struggle to launch new social graphs of their own invention, but if they spot any competing social network achieving any level of traction, they will lock down and ship a clone with blinding speed. Good artists borrow, great artists steal, the best artists steal the fastest? Facebook as a competitor reminds me of that class of movie zombies that stagger around drunk most of the time, but the moment they spot a target, they sprint at it like a pack of cheetahs. The type you see in 28 Days Later and I Am Legend. Terrifying.)

Some people still think that a new social network will be built around a new content format, but it’s almost impossible to think of a format that couldn’t be copied in two to three months by a compact Facebook team put in lockdown with catered dinners. Yes, a new content format might create a new proof of work, as I wrote about in Status as a Service, but just as critical is building the right structures to distribute such content to the right audience to close the social feedback loop.

What’s the last new social network to achieve scale in recent years? You probably can’t think of any, and that’s because there really aren’t any. Even Facebook hasn’t been able to launch any really new successful social products, and a lot of that is because they also seem fixated on building these things around some content format gimmick.

Recall the three purposes which I used to distinguish among networks in Status as a Service: social capital (status), entertainment, and utility. In another post soon I promise to explain why I classify networks along these three axes, but for now, just know that while almost all networks serve some mix of the three, most lean heavily towards one of those three purposes.

3-axis.png

A network like Venmo or Uber, for example, is mostly about utility: I need to pay someone money, or I need to travel from here to there. A network like YouTube is more about entertainment. Amuse me. And some networks, what most people refer to when they use the generic term “social network,” are more focused on social capital. Soho House, for example.

TikTok is less a pure social network, the type focused on social capital, than an entertainment network. I don't socialize with people on TikTok; I barely know any of them. It consists of a network of people connected to each other, but they are connected for a distinct reason: for creators to reach viewers with their short videos. (Bytedance hasn't been successful in building out a social network to compete with WeChat, though it's not for lack of trying. I think they have a variety of options for doing so, but as with many companies that didn't begin as social-first, it's not in their DNA. Facebook is underrated for its ability to build functional social plumbing at scale; that is a rare design skill. Companies as diverse as Amazon and Netflix have tried building social features and then later abandoned them. I suspect they tried when they didn't have enough users to create breakaway social scale, but it's difficult to imagine them pulling it off without more social DNA. Having a social-first DNA also means that Facebook isn't great at building non-social offerings; their video or Watch tab remains a bizarre and unfocused mess.)

One can debate the semantics of what constitutes a social network forever, but what matters here is realizing that another way to describe an entertainment network is as an interest network. TikTok takes content from one group of people and matches it to other people who would enjoy that content. It is trying to figure out what hundreds of millions of viewers around the world are interested in. When you frame TikTok's algorithm that way, its enormous unrealized potential snaps into focus.

The idea of using a social graph to build out an interest-based network has always been a sort of approximation, a hack. You follow some people in an app, and it serves you some subset of the content from those people under the assumption that you'll find much of what they post of interest to you. It worked in college for Facebook because a bunch of hormonal college students are really interested in each other. It worked for Twitter, eventually, though it took a while. Twitter's unidirectional follow graph allowed people to pick and choose who to follow with more flexibility than Facebook's initial bi-directional friend model, but Twitter didn't provide enough feedback mechanisms early on to help train its users on what to tweet. The early days were filled with a lot of status updates of the variety people cite when criticizing social media: "nobody cares what you ate for lunch." (I talk about Twitter's slow path to product-market fit in Status as a Service.)

But what if there was a way to build an interest graph for you without you having to follow anyone? What if you could skip the long and painstaking intermediate step of assembling a social graph and just jump directly to the interest graph? And what if that could be done really quickly and cheaply at scale, across millions of users? And what if the algorithm that pulled this off could also adjust to your evolving tastes in near real-time, without you having to actively tune it?

The problem with approximating an interest graph with a social graph is that social graphs have negative network effects that kick in at scale. Take a social network like Twitter: the one-way follow graph structure is well-suited to interest graph construction, but the problem is that you’re rarely interested in everything from any single person you follow. You may enjoy Gruber’s thoughts on Apple but not his Yankees tweets. Or my tweets on tech but not on film. And so on. You can try to use Twitter Lists, or mute or block certain people or topics, but it’s all a big hassle that few have the energy or will to tackle.

Think of what happened to Facebook when its users went from having their classmates as friends to having hundreds and often thousands of people as friends, including coworkers, parents, and that random person you met at the open bar at a wedding reception and felt obligated to accept a friend request from even though their jokes didn't seem as funny the next morning in the cold light of sobriety. Some have termed it context collapse, but by any name, it's an annoyance everyone understands. It manifests itself in the declining visit and posting frequency on Facebook across many cohorts.

Think of Snapchat's struggles to differentiate between its utility function, as a way to communicate among friends, and its entertainment function, as a place where famous people broadcast content to their fans. In a controversial redesign, Snapchat cleaved the broadcast content from influencers into the righthand Discover tab, leaving your conversations with friends in the left Chat pane. Look, the redesign seemed to say, Kylie Jenner is not your friend.

TikTok doesn’t bump into the negative network effects of using a social graph at scale because it doesn't really have one. It is more of a pure interest graph, one derived from its short video content, and the beauty is its algorithm is so efficient that its interest graph can be assembled without imposing much of a burden on the user at all. It is passive personalization, learning through consumption. Because the videos are so short, the volume of training data a user provides per unit of time is high. Because the videos are entertaining, this training process feels effortless, even enjoyable, for the user.
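A crude way to picture that passive personalization: treat each watch as an implicit vote and nudge a taste vector after every video. The update rule below is my own toy illustration, not TikTok's.

```python
def update_taste(taste, video_topics, watch_fraction, lr=0.1):
    """taste: dict of topic -> weight, the user's current profile.
    video_topics: dict of topic -> weight for the video just shown.
    watch_fraction: share of the video actually watched (0.0 to 1.0),
    used as an implicit signal; no likes or follows required."""
    signal = watch_fraction - 0.5  # watched most of it: positive; skipped early: negative
    for topic, weight in video_topics.items():
        taste[topic] = taste.get(topic, 0.0) + lr * signal * weight
    return taste
```

Because each video is only seconds long, a single session produces dozens of these updates, which is part of why the feed seems to lock on so quickly.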

I like to say that “when you gaze into TikTok, TikTok gazes into you.” Think of all the countless hours product managers, designers, and engineers have dedicated to growth-hacking social onboarding: goading people into adding friends and following people, urging them to grant access to their phone contact lists, all in an attempt to carry them past the dead zone to the minimum viable graph size necessary to provide them with a healthy, robust feed. (sidenote: Every social product manager has heard the story of Facebook and Twitter's keystone metrics for minimum viable friend or follow graph size countless times.) Think of how many damn interest-bubble UIs you've had to sit through before you could start using some new social product: what subjects interest you? who are your favorite musicians? what types of movies do you enjoy? (The last time I tried Twitter's new user onboarding flow, it recommended I follow, among other accounts, that of Donald Trump. There are countless ways they could onboard people more efficiently and provide them with a great experience immediately, but that is not one of them.)

TikTok came along and bypassed all of that. In a two-sided entertainment marketplace, they provide creators on one side with unmatched video creation tools coupled with potential super-scaled distribution, and viewers on the other side with an endless stream of entertainment that gets more personalized with time. In doing so, TikTok, with a product team and infrastructure mostly located in China, came out of left field and became a player in the attention marketplace on the same playing fields around the world as giants like Facebook, Instagram, Snapchat, YouTube, and Netflix. Not quite a Cinderella story...maybe a Mulan story?

TikTok didn't just break out in America. It became unbelievably popular in India and the Middle East, more markets whose cultures and languages were foreign to the Chinese Bytedance product teams. Imagine an algorithm so clever it enables its builders to treat another market and culture as a complete black box. What do people in that country like? No, even better, what does each individual person in each of those foreign countries like? You don't have to figure it out. The algorithm will handle that. The algorithm knows.

I don't think the Chinese product teams I've met in recent years in China are much further ahead than the ones I met in 2011 when it comes to understanding foreign cultures like America's. But the Bytedance algorithm abstracted that problem away. (One of the concerns about CCP ties to Bytedance is that they might use the app as a propaganda tool against the U.S. I tend to think that problem is overrated because my sense is that many in China still don't understand the nuances of American culture, just as America doesn't understand theirs; though I speak Mandarin, some of the memes on Douyin fly way over my head. However, perhaps an algorithm that abstracts culture into a series of stimulus-response pairs makes it more dangerous?)

Now imagine that level of hyper efficient interest matching applied to other opportunities and markets. Personalized TV of the future? Check. Education? I already find a lot of education videos in my TikTok feed, on everything from cooking to magic to iPhone hacks. Scale that up and Alex and Louis might finally realize their dream of a short video education app that they set out to build before Musical.ly.

Shopping? A slam dunk; Douyin and Toutiao already enable a ton of commerce in China. Job marketplace? A bit of a stretch, but not impossible. If Microsoft buys TikTok, I'd certainly give the TikTok team a crack at improving my LinkedIn feed, which, to be clear, is horrifying. What about personalized reading, from books to newsletters to blogs? Music? Podcasts? Yes, yes, yes please. Dating? The world could absolutely use an alternative to the high-Gini-coefficient, high-inequality dating marketplace that is Tinder.

Douyin already visualizes much of this future for us with its much broader diversity of videos and revenue models. In China, video e-commerce is light years ahead of where it is in the U.S. (for a variety of reasons, but none that aren’t surmountable; a topic, again, for another piece). Whereas TikTok can still feel, to me, like a pure entertainment time-killer, Douyin, which I track on a separate phone I keep just to run Chinese apps for research purposes, feels like much more than that. It feels like a realization of short video as a broad use case platform.

There’s a reason that many people in the U.S. today describe social media as work. And why many, like me, have come to find TikTok a much more fun app to spend time in. Apps like Facebook, Instagram, and Twitter are built on social graphs, and as such, they amplify the scale, ubiquity, and reach of our performative social burden. They struggle to separate their social functions from their entertainment and utility functions, injecting an aspect of social artifice where it never used to exist.

Facebook has struggled with its transition to utility, which would've offered it a path towards becoming more of a societal operating system the way WeChat is in China. To be fair, the competition for many of those functions is much stiffer in the U.S. In payments, for example, Facebook must compete with credit cards, which work fine and which most people in the U.S. default to, whereas in China AliPay and WeChat Pay were competing with a cash-dominant culture. Still, in the U.S., Facebook has yet to make any real inroads in significant utility use cases like commerce. (I speak often about how much video as a medium is underrated by tech elites. In an alternate history of Facebook, they would've made a harder shift to becoming a video-only app, moving up the ladder from text to photos to videos, and maybe they would've become TikTok before TikTok. If they had, I think their time-spent figures would be even higher today. For as quickly as Facebook has moved to disrupt itself in the past, there's a limit to how far they're willing to go. I plan to compare the Chinese and U.S. tech ecosystems in a future post, and one of the broadest and most important takeaways is that China leapfrogged the U.S. in the shift to video, among many other things. This doesn't mean the U.S. won't then leapfrog China the next time around, but for now, the U.S. is the trailing frog in several categories.)

Instagram is some strange hybrid mix of social and interest graph, and now it’s also a jumble of formats, with a Stories feed relegated to a top bar in the app while the more stagnant and less active original feed continues to run vertically as the default. Messaging is pushed to a separate pane and also served by a separate app. Longer form videos bounce you to Instagram TV, which is just an app for videos that exceed some time limit, I guess? And soon, perhaps commerce will be jammed in somehow? Meanwhile, they have a Discover tab, or whatever it is called, which seems like it could be the default tab if they wanted to take a more interest-based approach like TikTok. But they seem to have punted on making any hard decisions for so long now that the app is just a Frankenstein of feeds and formats and functions spread across a somewhat confused constellation of apps.

Twitter has never seemed to know what it is. Ask ten different Twitter employees and you'll hear ten different answers. Perhaps that's why the dominant product philosophy of the company seems to be a sort of constant paralysis broken up by the occasional crisis mitigation. One reason I've long wished Twitter had just become an open protocol and let the developer community go to town is that Twitter moves. At. A. Snail's. Pace.

The shame of it is that Twitter had a head start on an interest graph, largely through the work of its users, who gave signal on what they cared about through the graphs they assembled. That could have been a foundation for all sorts of new markets. They could've even been an interest-based social network, but instead users have mostly extracted that value themselves by pinging each other through the woefully neglected DM product. (Of course, Twitter also once purchased Vine and then let it wither on the, uh, vine. Of all the tech companies that could purchase TikTok, maybe Twitter is the one that least deserves it. At a minimum, they should be required to submit a book report showing they understand what it is they're buying.)

A few other tech companies are worth mentioning here. YouTube is a massive video network, but honestly they may have shipped even less than Twitter over the years. That they don’t have any video creation tools of note (do they have any?!) and allowed TikTok to come in and steal the short video space is both shocking and not.

Amazon launched a short video commerce app some time ago. It came and went so quickly I didn’t even have time to try it. Though Amazon is good at many things, they just don’t have the DNA to build something like TikTok. That they have failed to realize the short video commerce vision that China led the way on is a shocking miss on their part.

Apple owns the actual camera that so many of these videos are shot on, but they've never understood social. (iMessages could be a social networking colossus if Apple had the social DNA, but every day other messaging apps pull further away in functionality and design. But I guess they're finally adding threading in iMessages with the next iOS release? Haha. At least they'll continue to improve the camera hardware with every successive iPhone release.)

None of this is to say TikTok is anywhere near the market value of any of these aforementioned American tech giants. If you still think of it as a novelty meme short video app, you're not far from the truth. (Are there flaws with TikTok? Of course. It's far from perfect. The algorithm can be too clingy. Sometimes I like one video from some meme and the next day TikTok serves me too many follow-up videos from the same meme. But the great thing about a hyper-responsive algorithm is that you can tune it quickly, almost like priming GPT-3 to get the results you want. Often all it takes to inject some new subculture into your TikTok feed is to find some video from it, which you can easily do on YouTube or via friends whose feeds are different from your own, and like it. Another problem for TikTok is that a lot of other use cases are being jammed into what was designed to be a portrait-mode lip synch video app. Vertical video is good for the human figure, for dance and makeup videos, but not ideal for other types of communication and storytelling. I still hate when basketball and football highlight clips can't show more of the horizontal playing field, and that goes for both IG and TikTok; in many highlights of Steph Curry hitting a long 3 you can't see him or the basket, only one of the two, lol. Stepping up a level, the list of opportunities Bytedance and TikTok have yet to capitalize on in the U.S. is long, and it wouldn't surprise me if they miss many of them even if they stave off a ban from the U.S. government. Much of it would require new form factors, and it's unclear how strong the TikTok product team would be, especially if divested out of Bytedance. Under Microsoft, a company with a fairly shaky history in the consumer market, it's unclear that their full potential would be realized.)

Still, none of that product work is rocket science. Much of it seems clear in my head. More importantly, TikTok, if armed with the Bytedance algorithm as part of a divestment, has a generalized interest-matching algorithm that can allow it to tackle U.S. tech giants not head on but from an oblique angle. To see it as merely a novelty meme video app for kids is to miss its much greater disruptive potential. That an app launched out of China could come to the U.S. and sprint into cultural relevance in this attention marketplace should be a wake-up call to complacent U.S. tech companies. Given how many of those companies rely on intuiting user interests to sell them things or to show them ads, a company like TikTok finding a shortcut to assembling such an interest graph should raise all sorts of alarm bells.

It surprises me that more U.S. tech companies aren’t taking a harder run at trying to acquire TikTok if the rumored CFIUS hammer stops short of an outright ban. I can’t think of any of them that shouldn’t be bidding for what is a once-in-a-generation forced fire sale asset. I’ve seen prices like $30B tossed around online. If that’s true, it’s an absolute bargain. I’d easily pay twice that without a second thought.

I could cycle through my long list of nits, but ultimately they are all easily solvable with the right product vision and execution. TikTok has figured out the hardest piece, the algorithm. With it, a massive team made up mostly of people who've never left China, and many who never will, grabbed enormous market share in cultures and markets they'd never experienced firsthand. To a cultural determinist like myself, that feels like black magic.


On that same trip to China in 2018 when I visited Bytedance, an ex-colleague of mine from Hulu organized a visit for me to Newsdog. It was a news app for the Indian market built by a startup headquartered in Beijing. As I exited the elevator into their lobby, I was greeted by a giant mural of Jeff Bezos’ famous saying “It’s Always Day One” on the opposite wall.

A friend of a friend was the CEO there, and he sat me down in a conference room to walk me through their app. They had raised $50M from Tencent just a few months earlier that year, and they were the number one news app in India at the time.

He opened the app on his phone and handed it to me. Similar to Toutiao in China, there were different topic areas in a scrollbar across the top, with a vertical feed of stories beneath each. All of these were stories selected algorithmically, as is the style of Toutiao and so many apps in China.

I looked through the stories, all in Hindi (and yes, one feed that contained the thirst trap photos of attractive Indian girls in rather suggestive outfits standing under things like waterfalls; some parts of culture are universal). Then I looked up from the app and through the glass walls of the conference room at an office filled with about 40 Chinese engineers, mostly male, tapping away on their computers. Then I looked back down at page after page of Hindi stories in the app.

“Wait,” I asked. “Do you have people in this office or at the company who know how to read Hindi?”

He looked at me with a smile.

“No,” he said. “None of us can read any of it.”


NEXT POST: Part II of my thoughts on TikTok, on how the app design is informed by its algorithm and vice versa in a virtuous circle.

The John Wick Universe is Cancel Culture

“Si vis pacem para bellum”

translated

“If you want peace, prepare for war”

I hadn't planned on seeing John Wick 3: Parabellum, but out for a walk in Stockholm in May, I got caught in a sudden downpour without an umbrella. I was in Sweden for the first time thanks to an invitation from the Spotify product team and had decided to spend some of my downtime seeing the city. Sweden, by the way, is the country with the second most unicorns per capita. Fascinating, and a topic for another day. I sprinted out of the rain and into the nearest building, which happened to be a movie theater. Dark Sky on my phone showed the rain wasn't going to let up for another hour or two, so I scanned the theater listings and found a film in English. John Wick 3: Parabellum it was.

Like many I enjoyed the first John Wick movie for its lean and elegant plot and balletic fight choreography. Keanu Reeves was inspired casting given his unfussy acting style. However, I thought the sequel was unnecessary. I wasn’t expecting much from yet another entry, the third, but I rarely regret spending two hours in a darkened theater. Watching an American film in the company of a Swedish audience also promised to be a form of cultural field work, and on that front, I felt fortunate the house was packed with locals.

John Wick 3 - Parabellum begins directly after the events of the previous film, and at first, all seemed familiar. But after having spent two films worth of time in this universe already, sometime midway through the third film, it dawned on me. The rules of this film franchise mapped with uncanny precision to something that everyone had been complaining about to me for years now: cancel culture.

With that, the films took on heightened resonance. Here I present my theory of John Wick Universe as an allegory of cancel culture.

[SPOILER ALERT: Here is where I must warn people who haven’t seen the films that I will reveal key plot points to the three Wick films below. I don’t feel like the charms of this film series lie in the plot details—what happens isn’t surprising in the least to even the most casual of action film fans—but I disagree with those who say spoiler culture has ruined film criticism. Instead I’m happy to let my readers choose their own acceptable quota of narrative novelty. If you prefer not to learn the plots of the John Wick films, stop reading here.]

Wick’s character motivation can be described thus: my name is John Wick. You stole my car. You killed my dog. Prepare to die.

Reeves plays Wick from cinema's storied tradition of zen-like hit men, almost placid in their mastery of their craft, which, in his case, is the violent dispatch of other humans from the realm of the living. This is Alain Delon in Le Samourai, Robert De Niro in Heat, Jean Reno as Victor "The Cleaner" in La Femme Nikita. Less sexual than Bond, not quite as overtly cruel as Matz and Jacamon's The Killer. These hit men have a heart, but their highest order bit is the code by which they live. Whether personal or business, there's little difference: the job is killing.

And kill he does. In John Wick 3: Parabellum the signature choreography of death remains, a style which can only be described as baroque. Not John Wick for a single gunshot to the head when he can first maim with a few amuse-bouche bullets to the torso and limbs. Why engage in a simple fist fight when one can hold a confrontation in a store filled with display cases lined with all manner of knives (in case of emergency, break glass with the skull of your combatant). Why simply perforate assailants with automatic weapons when they can simultaneously be relieved of their genitals by an attack dog?

It wasn’t until Michael Bay’s terrible 6 Underground on Netflix that I saw a film with more cartoonish violence this year.

For some, this is entertainment enough. I’ll never hesitate to offer my opinions on any piece of entertainment, but I do not begrudge anyone their pleasures. Certainly, the crowd of Swedes who laughed and cheered at the escalating violence seemed more than entertained. For me, however, films are even more compelling when they speak to the world outside the edges of the screen. I'm nothing if not a sucker for subtext. What fascinated me about John Wick was how its absurdist universe acted as a wry commentary on cancel culture.

Do I think this subtext was intentional? Doubtful. Some filmmakers reward subtextual readings more than others. Still, the advantage of making a film with such a lean universe design is its semiotic flexibility.

John Wick's real name, we learn, is Jardani Jovonovich, a Belarussian gypsy raised as an assassin. Wick is nicknamed Baba Yaga, the Boogeyman, for he is the master of assassination. Who is more gifted at using social media to sow chaos and division in the world, especially the United States, than the Russians? Having lost the Cold War, they've come back in a more fluid and confounding form.

When the first film begins, Wick has left that world of violence behind for a peaceful domestic life with his wife Helen. But she dies from an illness, though not before leaving him a beagle to keep him company. The dog and Wick's car, a 1969 Ford Mustang Mach 1, are recognizable to anyone as the two iconic totems of an American's most sacred values.

When a group of Russian gangsters try to buy his car and Wick refuses, they break into his home, steal the car, and kill the dog. In Pulp Fiction, John Travolta complains to Eric Stoltz that some vandals keyed his car. Stoltz commiserates.

“They should be fuckin’ killed, man. No trial, no jury, straight to execution,” he says.

“What’s more chicken-shit than fuckin’ with a man’s automobile?” says Travolta. “Don’t fuck with another man’s vehicle.”

“You don’t do it,” agrees Stoltz.


In America, the car is the symbol of a man’s property and an expression of his individual freedom. The dog is the symbol of unconditional loyalty, man’s faithful companion as he rules over his domain.

The two totems of American sacred values

In a social media context, we can think of Wick’s dog and his car as representing those beliefs we hold sacred. When Wick loses his car and his dog, he is every one of us who sees one of the values we consider intrinsic to our personal identity impugned by some stranger on social media. That the perpetrators are Russian is nothing if not reminiscent of Russian agents sowing discord in American society in the run up to the 2016 Presidential election.

It turns out John used to work for the father of the leader of the gangsters who stole his car and killed his dog. That father, Viggo, upon learning what his son Iosef has done, calls Wick and begs him to let it go. Don’t feed the trolls, we are told time and again. But we, like Wick, cannot. His permanent sabbatical from assassination has come to an end.

As on social media, violence begets violence. Since Wick refuses to let the matter go, Viggo, to protect his son, sends a preemptive hit squad to assassinate Wick at his home. We never fight a single target on social media because the public broadcast nature of social media always rallies others to the cause. The first John Wick film proceeds from there as a series of attacks and counterattacks until Wick emerges, alive, bloodied, with a new dog, a pit bull he frees from an animal clinic. Viggo, Iosef, and what seems like a hundred or so henchmen are dead. The new dog symbolizes a brief moment of peace for Wick, just as we sometimes emerge from our skirmishes online feeling as if we have the moral high ground, our honor once again intact.

John Wick 2 begins with him retrieving his car from a chop shop owned by Viggo's brother Abram, which requires Wick to carve through Abram's goons (Abram himself Wick spares over a conciliatory drink). The car takes serious damage in the firefight, much like the beating we take defending ourselves online, but Wick eventually emerges with his car and new dog and then returns home to bury his weapons cache. He thinks he is out of the game once again.

As anyone who has participated in culture wars knows, any victory is temporary and pyrrhic.

Out of the blue, Santino D’Antonio visits Wick at his home and calls in a marker, represented in the films by a medallion with a drop of blood from the debtor. Santino needs Wick to become an assassin again, just as various friends online call on us to take their side in various online battles.

John refuses. He wants out. The marker is the marker, though. If you won’t defend your values, then can you say you really have any? Santino reminds John of this in a not-so-subtle way: he blows up Wick’s house with a grenade launcher.

This brings us to The Continental, the unique hotel chain at the heart of the John Wick universe. Their Manhattan branch is run by Winston (Ian McShane) and staffed by the always courteous and professional concierge Charon (Lance Reddick). Now homeless, Wick retreats to the Continental for refuge. The entire Continental hotel chain lives under the aegis of the High Table, like one of the W Hotels in the former Starwood and now Marriott network.

The Continental hotel chain stands in for our social media platforms. Like them, The Continental claims neutrality—no killing is allowed on Continental grounds—yet they happily arm assassins with all manners of weapons, like Twitter arming people with the quote tweet, the AK-47 of social media. They even employ a weapons sommelier.

The Continental sets all sorts of very specific policies that seem to be in conflict with each other; do they want civility or violence? Visitors to the Continental, like Wick, vacillate between wanting them to enforce rules and wondering who put them in charge in the first place. In other words, a mirror of the tension between users and the social networks that dominate the modern internet.

At any rate, Winston reminds Wick he must honor the marker from Santino, because them's the rules. These markers are a metaphor for engagement, the debt we pay social networks for the privilege of their services and distribution. Social media platforms do not want violence on their grounds, yet they live off user engagement. The only way to not have any markers on your ledger is to never accrue a debt in the first place, but Wick was raised in the golden age of social networks, when it was near impossible to avoid being active on them. Bowing to the marker, Wick accedes to Santino's request to assassinate his sister so Santino can assume her spot on the High Table council.

Wick carries out the mission, with great reluctance, only to have Santino turn around and put a $7 million contract on Wick for murdering his sister. This is akin to battling your enemies on social media platforms, creating the engagement that platforms thrive off of, only to have them turn around and lock your account for having done so. Many a person I know has complained about just such a betrayal. Pour one out for David Simon and his periodic bans on Twitter for eviscerating his opponents in a blaze of profanity.

Wick, as is his style, comes after Santino, who retreats to the safety of the Continental, where no violence is allowed. But Wick has been betrayed, and personal values now take precedence over the platform rules of The Continental. He pursues Santino onto hotel grounds and guns him down in front of Winston.

As penalty for conducting assassin business on Continental grounds, the High Table doubles the bounty on Wick to $14 million and broadcasts it globally. As the second film ends, Winston informs Wick of the bounty and gives him an hour head start to run. He sets off with his pit bull through Central Park as cell phones start ringing throughout the park. Wick has been true to his beliefs, as symbolized by the dog by his side, but the outrage mob is about to be set loose on him.

John Wick 3: Parabellum picks up from there. Wick is on the run through the rain of Manhattan, glancing at his watch as the seconds tick down to the global bounty becoming official.

In the Wick universe, official High Table business is processed through a central office by dozens of men and women dressed like old school phone switch operators, all of whom go about their jobs with an almost cheerful professionalism. Anyone who has ever received an impassive automatic reply from a social media customer service department after reporting some vicious attack can empathize with the almost comical formality of the Kafkaesque institution in the face of what feels like emotional terrorism.

That the bounty is put out by the High Table feels appropriate. It's because of the algorithmic distribution of social media platforms that the asymmetric attack of the bloodthirsty mob achieves modern levels of scale and precision. The High Table seems elusive, at times arbitrary, just like the moderation policies of social networks. Winston at times seems friendly to John, yet he also stands by as the mob prepares to set upon Wick. Many users of Facebook, Twitter, Instagram, Reddit, and so on can relate to this love-hate relationship with those platforms.

As soon as Wick's bounty goes global, seemingly every next person on the street sets upon him with the nearest weapon at hand. Anyone who has been attacked by an online mob, or even mildly harassed, is familiar with this uniquely modern sensation of being set upon by complete strangers. The Wick films give online mobs physical form. These random assassins are the Twitter eggs with usernames like pepe298174.

Even more perfect, strangers attack John Wick only after glancing at their phones and receiving word of the bounty. How do outrage mobs coalesce in the online world? From people staring at social media on their phones and locating the next target to be cancelled. The High Table’s bounty system, with its mobile notifications, is nothing less than a formalization of the mechanisms by which social networks enable cancel culture.

Wick dispatches one attacker after the other with every weapon at hand, whether axe or handgun or, in the first case, a hardcover book (when you absolutely, positively, have to snap a man’s neck using a book lodged in his jaw, a flimsy paperback or e-book just will not do).

I've talked to liberals who've been set upon by the alt-right. Women who've been attacked by gamers. Creatives who are set upon by outraged fans. Conservatives who feel swarmed by SJWs. Everyone feels unjustly attacked by faceless mobs; everyone is aggrieved. Everyone feels they are standing up for their truth and their principles, like John Wick, while mindless strangers attack from all sides. John Wick is the avatar of the modern social media user, the "righteous man beset on all sides by the inequities of the selfish and the tyranny of evil men."

Just before the bounty goes live, Wick stops by one of those doctors in the movies that caters to assassins and mobsters, the ones with fantastic service, always willing to provide bullet removal surgery on demand to walk-ins. Wick is bleeding from a shoulder wound inflicted by an overzealous assassin who tried to take John out before the bounty went official. Wick begs the doctor to patch him up, and he does, even pointing John to some medicine for the pain. But before Wick leaves, the doctor asks John to shoot him twice, to make it seem as if Wick coerced him into helping him. The doctor knows it is near impossible to stay neutral in the culture wars; if you’re not on one side you’re on the other. Ask Maggie Haberman.

John calls in a marker from a woman known as the Director (Anjelica Huston). She runs a ballet theater called the Ruska Roma that doubles as some sort of training ground for assassins; it’s implied that Wick learned his trade there. Once again, the blind loyalty to this marker system perpetuates a cycle of violence. Huston would rather not be involved, admonishing Wick, “You honor me by bringing death to my front door.”

Wick retorts in Russian, “I am a child of the Belarus. An orphan of your tribe. You are bound to help me.” He explicitly evokes the tribalism inherent in humans, the us vs. them impulse that social media amplifies. And then, in English, “You are bound, and I am owed.” The particular power of tribalism is the near impossibility of being neutral; to not pick any side is to be against everyone. The Director succumbs.

The face you make when your friend tags you into his or her online battle and you just want to watch YouTube

You were at my wedding Denise

As she walks him through the backstage training area of the theater, where other young assassins are in training, she says, “You know when my pupils first come here, they wish for one thing. A life free of suffering. I try to dissuade them from these childish notions but as you know, art is pain. Life is suffering.” As she says this, a ballerina pulls a toenail off. Social media is suffering, she is saying, but Wick is already in too deep.

She walks him past a bunch of men wrestling on the ground, future John Wicks in training.

She continues, “Somehow, you managed to get out. But here you are, back where you began. All of this, for what? For a dog?”

“It wasn’t just a dog,” he replies.

“The High Table wants your life. How can you fight the wind? How can you smash the mountains? How can you bury the ocean? How can you escape from the light? Of course you can go to the dark. But they’re in the dark, too.”

Huston is saying that the only way to avoid the darkness of social media is to avoid it, but, as he says, it wasn't just a dog. She points him to the path out of the outrage cycle, noting that it's not a game you can win (How can you fight the wind? That is, there's always another faceless troll.), but for Wick it's a matter of honor.

She cashes in his marker, acceding to his request for safe passage to Casablanca.

Enter Taylor Mason. Err, sorry, the Adjudicator, played by Asia Kate Dillon. Employed by the High Table, she informs both Winston of the NY Continental and another character nicknamed the Bowery King (Laurence Fishburne) that they must abdicate their positions in seven days for having aided Wick in killing Santino (in John Wick 2).

If you’re a liberal, the Adjudicator is like the conservative government officials who’ve continually accused social media platforms of an anti-conservative bias, or the both-sides-ism of the media. If you’re a conservative, the Adjudicator is some metaphor for the liberal media, punishing social media platforms for anything other than absolute conformity to liberal narratives. Sometimes, when Twitter works itself into a rage at another NYTimes headline that isn’t tough enough on Trump, I think of the Adjudicator as the public, holding the newspaper to account for its failure to answer to the collective public High Table.

In Casablanca, Wick calls on another friend, Sofia (the ageless Halle Berry making a nice pair with the ageless Keanu), with whom he cashes in yet another marker. She, like The Director earlier, is not happy to be pulled into Wick’s personal battles. Sofia runs another branch of the Continental, so essentially Wick has fled one tech platform for another that feels obligated to shelter him. He may be excommunicado from the NY Continental, but he once came to Sofia’s aid, and she owes him.

“You do realize that I’m management now, right? I’m not service anymore, John, so I don’t go around shooting people in the head,” Sofia notes. She’s essentially a tech platform executive now, trying to avoid getting pulled into social media battles.

“Look, I made a deal when I agreed to run this hotel, and that deal said I had to follow the rules of the High Table,” she says. “If I make one mistake, one enemy, maybe somebody goes looking for my daughter.”

Sofia faces the risk of being doxxed and having some nutjobs go after her daughter. Years ago, John helped get Sofia's daughter out of this dangerous world, and Sofia doesn't know where she's been shepherded. She doesn't want to know, because knowing would put her daughter back in harm's way.

"Because sometimes you have to kill what you love." Sofia speaks for all those who keep their opinions to themselves online because speaking their mind just isn't worth the cost of being cancelled and attacked by the mob. If she stays in the game, she will be pulled into vicious battles she wishes no part of. But in removing herself from social media, she loses out on some of the benefits they offer, like the chance to communicate with family and friends, in her case her daughter. Long ago she chose exit.

Meanwhile, in Manhattan, the Adjudicator visits a sushi stand and calls on the chef and his crew to help enforce penalties against Wick and all who aided him. The chef, named Zero, agrees. He is, like seemingly everyone in this world, an assassin, just as social media turned all of us into soldiers in the culture wars. Zero and his team seem willing to serve the High Table no matter what they demand; for most people, the lure of participating in an online mob taps into a universal human bloodlust. They can also stand in for platform moderators, trying to implement social network speech policies as best they can.

First they visit the Director at the Ruska Roma. The Adjudicator confronts her over helping Wick despite his excommunication.

Huston defends herself. “He had a ticket.”

The Adjudicator will hear nothing of it. “But a ticket does not stand above the Table.”

Zero runs a blade through the Director's clasped hands as penalty.

Time and again, the John Wick mythology points to the seeming futility of defending one's values on social media. The price of picking a side is always to suffer egregious violence from the other side with seemingly no real winners, or to have one's hands slapped by the platforms (or in this case, pierced with a sword).

Sofia takes Wick to meet her former boss Berrada, as Wick requests. Berrada runs a mint that manufactures the gold coins and markers the assassin world operates on.

"Now this coin, of course, it does not represent monetary value. It represents the commerce of relationships, a social contract in which you agree to partake. Order and rules. You have broken the rules. The High Table has marked you for death." Berrada describes the way in which platforms turned our relationships into business arrangements ("commerce" and "contract"), the artificiality of their power (the order and rules are ones the platforms made up), and their power to deplatform or ban anyone who signs the user agreements.

Berrada asks Wick if he knows the etymology of the word assassin.

Berrada explains: “But others contend it comes from asasiyyun. Meaning ‘men who are faithful and who abide by their beliefs.’” The Wick Universe, populated with assassins murdering each other in an endless cycle of retribution, is a proxy for the users on social media who cannot stand by idly while others infringe upon their beliefs.

Wick asks Berrada how to find the Elder, the one who sits above the High Table. Berrada directs him to wander into the desert and hope that the Elder finds him.

Before Sofia and John can leave, however, Berrada demands something from Sofia in exchange for the favor. In fact, he says he will keep one of Sofia's two dogs, who accompany her everywhere. Again, the dog symbolizes a person's most sacred values. On social media, we are always being forced by tribal battles to give up some of our values in order to stay out of harm's way. This time, Sofia refuses.

Berrada shoots one of the dogs, but it is wearing a bulletproof vest (hey yo social media wars are vicious you can never be too cautious). Sofia huddles over her dog, then draws a handgun hidden under its vest.

John sees what she is doing and urges her, “No.”

But it’s too late. The thing about social media is that it takes just one savage troll to put us on tilt. Sofia shoots Berrada in the leg, and just like that she’s back in the culture wars.

After she and her dogs and John kill off Berrada’s nearby henchmen, she walks over to Berrada and considers shooting him in the head.

“Sofia, don’t,” urges John.

She shoots him in the knee instead. “He shot my dog.”

“I get it,” he replies, in the funniest line in the film. Anyone who has dealt with an online mob empathizes with friends when they fall under attack and go berserk in response.

When you know you should just mute and block and walk away, but damn, that SOB shot your dog

Sofia, John, and the dogs fight their way out of the facility, killing several dozen men along the way in the most elaborately violent ways possible, evoking the almost casual cruelty of online warfare. They steal a car and drive out to the desert where Sofia abandons John to his search for the Elder. He wanders through the desert in his suit, without any water, a user de-platformed.

Damn, I got booted off Twitter and Facebook

In Manhattan, the Adjudicator and her sushi chef moderators visit the Bowery King and make him pay penance for the seven bullets he gave John Wick with seven knife cuts to the chest.

In the desert, John collapses from exhaustion but is saved and brought to the Elder. John asks him for a chance to reverse his excommunication. The Elder offers him a deal: Wick must assassinate Winston, head of the Manhattan Continental hotel, and then serve the rest of his days under the High Table doing what he does best, assassinating people.

This is the Faustian bargain for being on these social media platforms. Drive engagement for them and play by their rules, whatever those are, or be excommunicated from them. John either stays an assassin, suffering a lifetime of fighting other people on social media, or he can remove himself from the platforms entirely.

“I will serve. I will be of service,” John says. To prove his fealty, he cuts off his wedding ring finger. We’ve all seen people lash back at trolls only to be banned themselves. The loss of Wick’s ring finger represents those values we compromise when playing by social media platform’s arbitrary moderation rules. Who among us hasn’t emerged from some online tussle feeling like we lost a finger ourselves, gave up some part of our humanity?

Oh boy, here come’s dat online mob!

Back in Manhattan, John has to fight his way past Zero and his henchmen to reach the Continental. Just as Zero is about to kill him, John puts his hand on the front steps of the Continental. Charon appears and tells Zero to lower his weapon. Again, the platform rules are the rules: no assassination on hotel grounds.

Inside, John and Zero sit in the lobby together and have a chat. Zero fanboys over having met the legendary John Wick, even while noting he’s more of a cat person. Nothing epitomizes the often arbitrary tribal battles online better than the fight between cat and dog people.

You like dogs? I guess we have to kill each other.

Many people have described the feeling of meeting someone in real life who they despise online and finding they get along better than they would’ve imagined. While it’s not always the case, the disembodied world of social media tends to amplify divisions. The John Wick films portray this multiple times; in every film, John has a moment where he and someone trying to assassinate him stop to share a cordial drink on Continental grounds before resuming their fight to the death a short while later.

If only we’d met offline rather than on Twitter, we might be friends!

Isn’t screaming at each other online productive?

Wick gets his meeting with Winston, who tells John that killing him will not honor his wife’s memory but simply return him to a state of subservience to the High Table. The Adjudicator joins them and asks if Winston will step down (reminiscent of the calls for CEOs like Zuckerberg and Dorsey to step down from their posts) and whether John will kill Winston. Both of them refuse, so the Adjudicator calls the home office and has the Manhattan branch of the Continental deconsecrated.

Blame me all you want for running this platform, but it’s just human nature John. I can’t fix that!

Of course, this now means that assassination can be carried out on hotel grounds, but also that John can now partake in hotel services, namely a visit to the gun sommelier.

“Let’s see, I’m going to need the ability to tag some mofos, and also to quote tweet their asses”


What ensues is what film critics love to refer to as an "orgy of violence" (has there ever been an "orgy of peace"?), though in this case, as the carnage is accompanied by Vivaldi's Four Seasons, perhaps a symphony of violence is more fitting (again, why never a "concerto of violence"?). Charon, hotel staff, and John move about the hotel fighting off an army of High Table forces clad in such heavy armor that they seem impervious to bullets, almost like an army of online bots swarming their target.

The whole time, Winston hides in a secure vault, sipping a martini, emblematic, in many people's minds, of social media execs working from their cushy offices while users rip each other to shreds on their platforms.

Wow, Trump just declared war on Twitter!

Oh well!


John survives, as usual, dispatching everyone who comes after him. The Adjudicator calls Winston and asks for a parley on the rooftop of the Continental, where John eventually arrives. Winston asks the Adjudicator for forgiveness and offers his ongoing loyalty to the High Table. The Adjudicator agrees to reconsecrate the Continental and restore Winston as manager, but then she turns to the matter of John and asks Winston what is to be done with the titular assassin. Winston replies by shooting Wick repeatedly in the chest and knocking him off the roof of the Continental, where he falls several stories to the alley below, bouncing off a few fire escape railings and awnings in the process. Ah, those platforms, they're always liable to turn on you.

As you'd expect, Wick is not dead. The Adjudicator, on the way out of the hotel, peeks into the alley, where Wick's body is nowhere to be found. He has, we discover, been brought to the Bowery King, now maimed by all those knife wounds ordered by the Adjudicator.

What outlook does John Wick offer us on the state of the online discourse moving forward? Is there any hope for relief? The end of the film isn’t optimistic.

Laurence Fishburne says to Wick, lying there in a bloody heap on the ground: “So, let me ask you John, how do you feel? Because I am really pissed off. You pissed, John? Hmm? Are you?”

John Wick strains to lift his bloodied head off the ground to look Fishburne in the eyes. “Yeah.”

Invisible asymptotes

"It is said that if you know your enemies and know yourself, you will not be imperiled in a hundred battles; if you do not know your enemies but do know yourself, you will win one and lose one; if you do not know your enemies nor yourself, you will be imperiled in every single battle." - Sun Tzu

My first job at Amazon was as the first analyst in strategic planning, the forward-looking counterpart to accounting, which records what already happened. We maintained several time horizons for our forward forecasts, from granular monthly forecasts to quarterly and annual forecasts to even five and ten year forecasts for the purposes of fund-raising and, well, strategic planning.

One of the most difficult things to forecast was our adoption rate. We were a public company, though, and while Jeff would say, publicly, that "in the short run, the stock market is a voting machine, in the long run, it's a scale," that doesn't provide any air cover for strategic planning. It's your job to know what's going to happen in the future as best as possible, and every CFO of a public company will tell you that they take the forward guidance portion of their job seriously. Because of information asymmetry, analysts who cover your company depend quite a bit on guidance on quarterly earnings calls to shape their forecasts and coverage for their clients. It's not just that giving the wrong guidance might lead to a correction in your stock price but that it might indicate that you really have no idea where your business is headed, a far more damaging long-run reveal.

It didn't take long for me to see that our visibility out a few months, quarters, and even a year was really accurate (and precise!). What was more of a puzzle, though, was the long-term outlook. Every successful business goes through the famous S-curve, and most companies, and their investors, spend a lot of time looking for that inflection point towards hockey-stick growth. But just as important, and perhaps less well studied, is that unhappy point later in the S-curve, when you hit a shoulder and experience a flattening of growth.

One of the huge advantages for us at Amazon was that we always had a fairly good proxy for our total addressable market (TAM). It was easy to pull the statistics for the size of the global book market. Just as a rule of thumb, one could say that if we took 10% of the global book market it would mean our annual revenues would be X. One could be really optimistic and say that we might even expand the TAM, but finance tends to be the conservative group in the company by nature (only the paranoid survive and all that).

When I joined Amazon I was thrown almost immediately into working with a bunch of MBAs on business plans for music, video, packaged software, magazines, and international. I came to think of our long-term TAM as a straightforward layer cake of different retail markets.
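
To make the layer-cake framing concrete, here's a minimal sketch of how such a long-term TAM model might be structured. Every category, market size, and penetration rate below is a made-up placeholder, not an actual figure from that era; the point is only the shape of the model: market size times a rule-of-thumb share, summed across layers.

```python
# A minimal "layer cake" TAM sketch: hypothetical market sizes and
# penetration assumptions only, not actual Amazon figures.

# Estimated global market size (in $B) for each retail category layer.
market_size_billions = {
    "books": 80,
    "music": 40,
    "video": 60,
    "packaged_software": 50,
}

# Rule-of-thumb penetration assumption for each layer
# (e.g. "10% of the global book market").
penetration = {
    "books": 0.10,
    "music": 0.05,
    "video": 0.05,
    "packaged_software": 0.03,
}

# Long-term revenue estimate is just the sum across layers.
revenue_estimate = sum(
    market_size_billions[category] * penetration[category]
    for category in market_size_billions
)

print(f"Hypothetical long-term revenue estimate: ${revenue_estimate:.1f}B")
```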

Still, the gradient of adoption was somewhat of a mystery. I could, in my model, understand that one side of it was just exposure. That is, we could not obtain customers until they'd heard of us, and I could segment all of those paths of exposure into fairly reliable buckets: referrals from affiliate sites (we called them Associates), referrals from portals (AOL, Excite, Yahoo, etc.), and word-of-mouth (this was pre-social networking but post-email so the velocity of word-of-mouth was slower than it is today). Awareness is also readily trackable through any number of well-tested market research methodologies.

Still, for every customer who heard of Amazon, how could I forecast whether they'd make a purchase or not? Why would some people use the service while others decided to pass?
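
Structurally, the forecast I was wrestling with looked something like the sketch below: awareness was measurable and could be segmented by channel, while the conversion rate was the real unknown, so you end up running scenarios across a range of assumed rates. The channel names and all of the numbers here are invented for illustration.

```python
# Sketch of the adoption-forecast structure described above: exposure,
# segmented by channel, times a conversion rate that is the real unknown.
# All numbers are invented for illustration.

exposure_by_channel = {
    "affiliate_referrals": 500_000,   # "Associates"
    "portal_referrals": 1_200_000,    # AOL, Excite, Yahoo, etc.
    "word_of_mouth": 300_000,
}

def forecast_new_customers(conversion_rate):
    """New customers = total exposed prospects * assumed conversion rate."""
    total_exposed = sum(exposure_by_channel.values())
    return total_exposed * conversion_rate

# The awareness side is trackable; conversion is the mystery, so you run
# scenarios across a range of assumed rates.
for rate in (0.01, 0.03, 0.05):
    print(f"conversion {rate:.0%}: {forecast_new_customers(rate):,.0f} new customers")
```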

For so many startups and even larger tech incumbents, the point at which they hit the shoulder in the S-curve is a mystery, and I suspect the failure to see it coming begins much earlier than the moment growth actually flattens. The good thing is that identifying the enemy sooner allows you to address it sooner. We focus so much on product-market fit, but once companies have achieved some semblance of it, most should spend much more time on the problem of product-market unfit.

For me, in strategic planning, the question in building my forecast was to flush out what I call the invisible asymptote: a ceiling that our growth curve would bump its head against if we continued down our current path. It's an important concept to understand for many people in a company, whether a CEO, a product person, or, as I was back then, a planner in finance.

Amazon's invisible asymptote

Fortunately for Amazon, and perhaps critical to much of its growth over the years, the single most important asymptote was one we identified very early on. The point at which our growth would flatten, if we did not change our path, was in large part determined by this single factor.

We had two ways we were able to flush out this enemy. For people who did shop with us, we had, for some time, a pop-up survey that would appear right after you'd placed your order, at the end of the shopping cart process. It was a single question, asking why you didn't purchase more often from Amazon. For people who'd never shopped with Amazon, we had a third party firm conduct a market research survey where we'd ask those people why they did not shop from Amazon.

Both converged, without any ambiguity, on one factor. You don't even need to rewind to that time to remember what that factor is because I suspect it's the same asymptote governing e-commerce and many other related businesses today.

Shipping fees.

People hate paying for shipping. They despise it. It may sound banal, even self-evident, but understanding that was, I'm convinced, so critical to much of how we unlocked growth at Amazon over the years.

People don't just hate paying for shipping, they hate it to a literally irrational degree. We know this because our first attempt to address this was to show, in the shopping cart and checkout process, that even after paying shipping, customers were saving money over driving to their local bookstore to buy a book because, at the time, most Amazon customers did not have to pay sales tax. That wasn't even factoring in the cost of getting to the store, the depreciation costs on the car, and the value of their time.
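
That rational math looked roughly like the back-of-the-envelope comparison below. All of the figures are invented for illustration; they aren't the actual numbers we showed customers.

```python
# Back-of-the-envelope comparison of buying a book online with shipping
# versus driving to a local bookstore. All figures are made up.

book_price = 20.00

# Online: pay shipping, but (at the time, for most customers) no sales tax.
shipping_fee = 3.99
online_total = book_price + shipping_fee

# Local store: pay sales tax, plus the usually ignored costs of the trip.
sales_tax_rate = 0.08
miles_round_trip = 10
cost_per_mile = 0.30           # gas plus depreciation
minutes_spent = 40
value_of_time_per_hour = 15.00

store_total = (
    book_price * (1 + sales_tax_rate)
    + miles_round_trip * cost_per_mile
    + (minutes_spent / 60) * value_of_time_per_hour
)

print(f"Online with shipping: ${online_total:.2f}")
print(f"Local store, all-in:  ${store_total:.2f}")
# The math favors the online order, but the visible shipping line item
# stung customers anyway.
```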

People didn't care about this rational math. People, in general, are terrible at valuing their time, perhaps because for most people monetary compensation for one's time is so detached from the event of spending one's time. Most time we spend isn't like deliberate practice, with immediate feedback.

Wealthy people tend to receive a much more direct and immediate payoff for their time which is why they tend to be better about valuing it. This is why the first thing that most ultra-wealthy people I know do upon becoming ultra-wealthy is to hire a driver and start to fly private. For most normal people, the opportunity cost of their time is far more difficult to ascertain moment to moment.

You can't imagine what a relief it is to have a single overarching obstacle to focus on as a product person. It's the same for anyone trying to solve a problem. Half the comfort of diets that promise huge weight loss in exchange for cutting out sugar or carbs or whatever is feeling like there's a really simple solution or answer to a hitherto intractable, multi-dimensional problem.

Solving people's distaste for paying shipping fees became a multi-year effort at Amazon. Our next crack at this was Super Saver Shipping: if you placed an order of $25 or more of qualified items, which included mostly products in stock at Amazon, you'd receive free standard shipping.

The problem with this program, of course, was that it caused customers to reduce their order frequency, waiting until their orders qualified for the free shipping. In select cases, forcing customers to minimize consumption of your product-service is the right long-term strategy, but this wasn't one of those.

That brings us to Amazon Prime. This is a good time to point out that shipping physical goods isn't free. Again, self-evident, but it meant that modeling Amazon Prime could lead to widely diverging financial outcomes depending on what you thought it would do to the demand curve and average order composition.
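
A toy version of that modeling exercise might look like the following sketch. The demand-shift assumptions are the whole ballgame, and every input here is invented purely to show how widely the outcome can swing between an adverse-selection scenario and a broad shift in the demand curve.

```python
# Toy model of a Prime-like membership: the outcome hinges entirely on what
# you assume about the demand shift. Every input below is invented.

def annual_contribution(orders_per_year, avg_order_value, gross_margin,
                        shipping_cost_per_order, membership_fee):
    """Membership fee plus product margin, minus the shipping the retailer now absorbs."""
    product_margin = orders_per_year * avg_order_value * gross_margin
    shipping_cost = orders_per_year * shipping_cost_per_order
    return membership_fee + product_margin - shipping_cost

# Adverse-selection scenario: mostly heavy shippers sign up, place lots of
# small orders, and the program loses money per member.
adverse = annual_contribution(orders_per_year=40, avg_order_value=15,
                              gross_margin=0.15, shipping_cost_per_order=5.00,
                              membership_fee=79)

# Broad demand-shift scenario: members consolidate their shopping with you,
# ordering more and bigger, and the program more than pays for itself.
favorable = annual_contribution(orders_per_year=20, avg_order_value=35,
                                gross_margin=0.20, shipping_cost_per_order=3.50,
                                membership_fee=79)

print(f"Adverse selection:  {adverse:.0f} per member per year")   # negative
print(f"Broad demand shift: {favorable:.0f} per member per year")  # positive
```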

To his credit, Jeff decided to forego testing and just go for it. It's not so uncommon in technology to focus on growth to the exclusion of all other things and then solve for monetization in the long run, but it's easier to do so for a social network than a retail business with real unit economics. The more you sell, the more you lose is not and has never been a sustainable business model (people confuse this for Amazon's business model all the time, and still do, which ¯\_(ツ)_/¯).

The rest, of course, is history. Or at least near-term history. It turns out that you can have people pre-pay for shipping through a program like Prime and they're incredibly happy to make the trade. And yes, on some orders, and for some customers, the financial trade may be a lossy one for the business, but on net, the dramatic shift in the demand curve is stunning and game-changing.

And, as Jeff always noted, you can make micro-adjustments in the long run to tweak the profit leaks. For some really large, heavy items, you can tack on shipping surcharges or just remove them from qualifying for Prime. These days, some items in Amazon are marked as "Add-on items" and you can only order them in conjunction with enough other items such that they can be shipped with those items rather than in isolation.
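
A hypothetical version of that kind of eligibility rule, just to illustrate the mechanics (the threshold and the cart contents below are made up):

```python
# Hypothetical "add-on item" style rule: low-priced items only ship when the
# rest of the order is large enough to absorb the delivery cost.
ADD_ON_MINIMUM_ORDER = 25.00  # made-up threshold

def order_accepted(items):
    """Allow add-on items only when the non-add-on portion clears the threshold."""
    regular_total = sum(item["price"] for item in items if not item["add_on"])
    has_add_ons = any(item["add_on"] for item in items)
    return (not has_add_ons) or regular_total >= ADD_ON_MINIMUM_ORDER

cart = [
    {"name": "hardcover book", "price": 28.00, "add_on": False},
    {"name": "pack of batteries", "price": 4.50, "add_on": True},
]
print(order_accepted(cart))  # True: the small item rides along with a big enough order
```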

[Jeff counseled the same "fix it later" strategy in the early days when we didn't have good returns tracking. For a window of time in the early days of Amazon, if you shipped us a box of books for returns, we couldn't easily tell whether you'd purchased them at Amazon, and so we'd credit you for them, no questions asked. One woman took advantage of this loophole and shipped us boxes and boxes of books. Given our limited software resources, Jeff said to just ignore the lady and build a way to solve for that later. It was really painful, though, so eventually customer service representatives all shared, amongst themselves, the woman's name so they could look out for it in return requests even before such systems were built. Like a mugshot pinned to every monitor saying "Beware this customer." A tip of the hat to you, ma'am, wherever you are, for your enterprising spirit in exploiting that loophole!]

Prime is a type of scale moat for Amazon because it isn't easy for other retailers to match from a sheer economic and logistical standpoint. As noted before, shipping isn't actually free when you have to deliver physical goods. The really challenging unit economics of delivery businesses like Postmates, when paired with people's aversion to paying for shipping, make for tough sledding, at least until the cost of delivering such goods can be lowered drastically, perhaps by self-driving cars or drones or some such technology shift.

Furthermore, very few customers shop enough with retailers other than Amazon to make a pre-pay program like Prime worthwhile to them. Even if they did, Amazon's economies of scale in shipping and its deep knowledge of how to distribute inventory optimally mean its unit economics on delivery are likely superior.

The net of it is that long before Amazon hit what would've been an invisible asymptote on its e-commerce growth it had already erased it.

Know thine enemy.

Invisible asymptotes are...invisible

An obvious problem for many companies, however, is that they are creating new types of businesses and services that don't lend themselves to easily identifying such invisible asymptotes. Many are not like Amazon where there are readily tracked metrics like the size of the global book market with which to peg their TAM.

Take social networks, for example. What's the shoulder of the curve for something like Facebook? Twitter? Instagram? Snapchat?

Some of the limits to their growth are easier to spot than others. For messaging and some more general social networking apps, for example, network effects are often geographical. Since these apps build on top of real-world social graphs, and many of those are geographically clustered, there are winner-take-all dynamics such that in many countries one messaging app dominates, like Kakao in Korea or Line in Taiwan. There can be geopolitical considerations, too, that help ensure that WeChat will dominate in China to the exclusion of all competitors, for example.

For others, though, it takes a bit more product insight, and some might say intuition, to see the ceiling before you bump into it. For both employees and investors, understanding product-market unfit follows very closely on identifying product-market fit as an existential challenge.

Without direct access to internal metrics and research, it's difficult to use much other than public information and my own product intuition to analyze potential asymptotes for many companies, but let's take a quick survey of several more prominent companies and consider some of their critical asymptotes (these companies are large enough that they likely have many, but I'll focus on the macro). You can apply this to startups, too, but there are some differences between achieving initial product market fit and avoiding the shoulder in the S-curve after already having found it.

Twitter

Let's start with Twitter, for many in tech the most frustrating product from the perspective of the gap between the actual and the potential. Its user growth has been flat for quite some time, so it can be said to have already run full speed into an invisible asymptote. On quarterly earnings calls, it's apparent that management often has no idea if or when or how that might shift, because their guidance is often a collective shrug.

One popular early school of thought on Twitter, a common pattern with most social networks, is that more users need to experience what the power users or early adopters are experiencing, and then they'll turn into active users. Many a story of social networks that have continued to grow points to certain keystone metrics as pivotal to unlocking product-market fit. For example, once you've friended 30 people on Facebook, you're hooked. For Twitter, an equivalent may be following enough people to generate an interesting feed.

Pattern-matching moves more quickly through Silicon Valley than almost any other place I've lived, so stories like that are passed around through employees and Board meetings and other places where the rich and famous tech elite hobnob, and so it's not surprising that this theory is raised for every social network that hits the shoulder in their S-curve.

There's more than a whiff of Geoffrey Moore's Crossing the Chasm in this idea, some sense that moving from early adopters to the mainstream involves convincing more users to use the same product/service as early adopters do.

In the case of Twitter, I think the theory is wrong. Given the current configuration of the product, I don't think any more meaningful user growth is possible, and tweaking the product as it is now won't unlock any more growth. The longer they don't acknowledge this, the longer they'll be stuck in a Red Queen loop of their own making.

Sometimes, the product-market fit with early adopters is only that. The product won't go mainstream because other people don't want or need that product. In these cases, the key to unlocking growth is usually customer segmentation, creating different products for different users.

Mistaking one type of business for the other can be a deadly mistake because the strategies for addressing them are so different. A common symptom of this mistake is not seeing the shoulder in the S-curve coming at all, not understanding the nature of your product-market unfit.

I believe the core experience of Twitter has reached most everyone in the world who likes it. Let's examine the core attributes of Twitter the product (which I treat as distinct from Twitter the service, the public messaging protocol).

It is heavily text-based, with 140 and now 280 character limit snippets of text from people you've followed presented in a vertical scrolling feed in some algorithmic order, which, for the purposes of this exercise, I'll just consider roughly chronological.

For fans, most of whom are infovores, the nature of product-market fit is, as with many of our tech products today, one of addiction. Because the chunks of text are short, if one tweet is of no interest, you can quickly scan and scroll to another with little effort. Discovering tweets of interest in what appears to be a largely random order rewards the user with dopamine hits on that time-tested Skinner box schedule of variable rewards. Instead of rats hitting levers for pellets of food, power Twitter users push or pull on their phone screens for the next tasty pellet of text.

For infovores, text, in contrast to photos or videos or music, is the medium of choice from a velocity standpoint. There is deep satisfaction in quickly decoding the textual information, and the scan rate is self-governed by the reader, unlike other mediums, which unfold at their own pace (this is especially the case with video, which infovores hate for its low scannability).

Over time, this loop tightens and accelerates through the interaction of all the users on Twitter. Likes and retweets and other forms of feedback guide people composing tweets to create more of the type that receive positive feedback. The ideal tweet (by which I mean one that will receive maximum positive feedback) combines some number of the following attributes:

  • Is pithy. Sounds like a fortune cookie. The character limit encourages this type of compression.

  • Is slightly surprising. This can be a contrarian idea or just a cliche encoded in a semi-novel way.

  • Rewards some set of readers' priors, injecting a pleasing dose of confirmation bias directly into the bloodstream.

  • Blasts someone that some set of people dislike intensely. This is closely related to the previous point.

  • Is composed by someone famous, preferably someone a lot of people like but don't consider to be a full-time Tweeter, like Chrissy Teigen or Kanye West.

  • Is on a topic that most people think they understand or on which they have an opinion.

Of course, the set of ideal qualities varies by subgroup on Twitter. Black Twitter differs from rationalist Twitter which differs from NBA Twitter. The meta point is that the flywheel spins more and more quickly over time within each group.

The problem is that for those who don't use Twitter, almost all of its ideal attributes among the early adopter cohort are those which other people find bewildering and unattractive. Many people find the text-heavy nature of Twitter to be a turn-off. The majority of people, actually.

The naturally random sort order of ideas that comes from the structure of Twitter, one which pings the pleasure centers of the current heavy user cohort when they find an interesting tweet, is utterly perplexing to those who don't get the service. Why should they hunt and peck for something of interest? Why are conversations so difficult to follow (actually, this is a challenge even for those who enjoy Twitter)? Why do people have to work so hard to parse the context of tweets?

Falling into the trap of thinking other users will be like you is especially pernicious because the people building the product are usually among that early adopter cohort. The easiest north star for a product person is their own intuition. But if they're working on a product that requires customer segmentation, being in the early adopter cohort means their instincts will keep guiding them towards the wrong north star, and the company will just keep bumping into the invisible asymptote without any idea why.

This points to an important qualifier to the "crossing the chasm" idea of technology diffusion. If the chasm is large enough, the same product can't cross it. Instead, on the other side of the gaping chasm is just a different country altogether, with different constituents with different needs.

I use Twitter a lot (I recently received a notification that I'd passed my 11-year anniversary of joining the service), but almost everyone in my family, from my parents to my siblings to my girlfriend to my nieces and nephews, has tried and given up on Twitter. It doesn't fulfill any deep-seated need for any of them.

It's not surprising to me that Twitter is populated heavily by journalists and a certain cohort of techies and intellectuals who all, to me, are part of a broader species of infovore. For them, opening Twitter must feel as if they've donned Cerebro and made contact with thousands of brains all over the world, as if the fabric of their own brain had been flattened and stretched out wide and laid on top of millions of others'.

Quiet, I am reading the tweets.

Mastering Twitter is already something this group of people do all the time in their lives and jobs, only Twitter accelerates it, like a bicycle for intellectual conversation and preening. Twitter, at its best, can provide a feeling of near real-time communal knowledge sharing that satisfies some of the same needs as something like SoulCycle or Peloton. A feeling of communion that also feels like it's productive.

If my instincts are right, then all the iterating around the margins on Twitter won't do much of anything to change the growth curve of the service. It might improve the experience for the current cohort of users and increase usage (for example, curbing abuse and trolls is an immediate and obvious win for those who experience all sorts of terrible harassment on the service), but it doesn't change the fact that this core Twitter product isn't for all the people who left the club long ago, soon after they walked in and realized it was just a bunch of nerds who'd ordered La Croix bottle service and were sitting around talking about Bitcoin and stoicism and transcendental meditation.

The good news is that the Twitter service, that public messaging protocol with a one-way follow model, could be the basis for lots of products that might appeal to other people in the world. Knowing the enemy can prevent wasting time chasing the wrong strategy.

Unfortunately, one of the main paths towards coming up with new products built on top of that protocol was the third-party developer program, and, well, Twitter has treated its third-party developers like unwanted stepchildren for a long time. For whatever reason (it's difficult to speculate without having been there), Twitter's internal rate of product development has been glacial. A vibrant third-party developer program could have helped by massively increasing the vectors of development on Twitter's very elegant public messaging protocol and datasets.

[Note, however, that I'm sympathetic to tech companies that restrict building clones of their service using their APIs. No company owes it to others to allow people to build direct competitors to their own product. Most people don't remember, but Amazon's first web services offering was for affiliates to build sites to sell things. Some sites started building massive Amazon clones, and so Amazon's web services evolved into other forms, eventually settling on what most people know it as today.]

In addition, I've long wondered if the shutting out of third-party developers on Twitter was an attempt to aggregate and own all of its ad inventory. Both of these concerns, competitive clones and ad inventory, could've been addressed by tweaking the Twitter third-party developer program. Developers could be offered two paths.

One option is that for every X number of tweets a developer pulled, they'd have to carry and display a Twitter-inserted ad unit. This would make it possible for Twitter to support third-party clients like Tweetbot that compete somewhat with Twitter's own clients. Maybe one of these developers would come up with improvements on top of Twitter's own client apps, but in doing so they'd increase Twitter's ad inventory.

The second option would be to pay some fixed fee for every X tweets pulled. That would force the developer to come up with a monetization scheme of their own to cover their usage, but at least the option would exist. I don't doubt that some enterprising developers would find ways to monetize particular use cases, business research, for example.
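To make the trade-off between these two hypothetical paths concrete, here's a minimal sketch. None of these numbers or names come from any actual Twitter program; the tier sizes and the fee are invented for illustration.

```python
# Hypothetical sketch of the two developer paths described above.
# The constants are placeholders, not terms from any real Twitter program.

AD_UNIT_PER_TWEETS = 1_000   # Path 1: one Twitter-inserted ad per 1,000 tweets pulled
FEE_PER_1K_TWEETS = 0.50     # Path 2: a $0.50 fee per 1,000 tweets pulled

def path_one_ad_obligation(tweets_pulled: int) -> int:
    """Number of Twitter-inserted ad units a third-party client must display."""
    return tweets_pulled // AD_UNIT_PER_TWEETS

def path_two_fee(tweets_pulled: int) -> float:
    """Fixed fee owed for the volume of tweets pulled."""
    return (tweets_pulled / 1_000) * FEE_PER_1K_TWEETS

monthly_pull = 2_500_000  # a mid-sized third-party client, hypothetically
print("Path 1: display", path_one_ad_obligation(monthly_pull), "ad units this month")
print(f"Path 2: pay ${path_two_fee(monthly_pull):,.2f} this month")
```

Under the first path Twitter's ad inventory grows with every third-party client; under the second, the developer carries the monetization burden but keeps their interface clean.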

Twitter the product/app has hit its invisible asymptote. Twitter the protocol still has untapped potential.
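If it helps to picture what an invisible asymptote looks like on a chart, a logistic adoption curve is the standard toy model: growth looks exponential early on, then quietly bends toward a ceiling. A minimal sketch with made-up parameters:

```python
import math

def logistic_adoption(t: float, ceiling: float, midpoint: float, rate: float) -> float:
    """Toy S-curve: adoption at time t, bending toward `ceiling`, the invisible asymptote."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Made-up numbers: a 300M-user ceiling, inflection at year 6, growth rate of 0.9/year.
for year in range(0, 13, 2):
    users = logistic_adoption(year, ceiling=300e6, midpoint=6, rate=0.9)
    print(f"year {year:2d}: ~{users / 1e6:.0f}M users")
```

The "shoulder" everyone notices in the growth numbers is just the stretch past the midpoint where the curve starts bending toward that ceiling.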

Snapchat

Snapchat is another example of a company that's hit a shoulder in its growth curve. Unlike Twitter, though, I suspect its invisible asymptote is less an issue of its feature set and more one of a generational divide.

That's not to say that making the interface less inscrutable earlier on wouldn't have helped a bit, but I suspect only at the margins. In fact, the opaque nature of the interface probably served Snapchat incredibly well when the product came along, regardless of whether or not it was intended that way. Snapchat came along at a moment when kids' parents were joining Facebook, and when Facebook had been around long enough for the paper trail of its early, younger users to come back and bite some of them.

Along comes a service that not only wipes out content by default after a short period of time but is inscrutable to the very parents who might crash the party. In fact, there's an entire class of products for which I believe an Easter Egg-like interface is actually preferable to an elegant, self-describing interface, long seen as the apex of UI design (more on that another day).

I've written before about selfies as a second language. At the root of that phenomenon is the idea that a generation of kids raised with smartphones with a camera front and back have found the most efficient way to communicate is with the camera, not the keyboard. That's not the case for an older cohort of users who almost never send selfies as a first resort. The very default of Snapchat to the camera screen is such a bold choice that it will probably never be the messaging app of choice for old folks, no matter how Snapchat moves around and re-arranges its other panes.

More than that, I suspect every generation needs spaces of its own, places to try on and leave behind identities at low cost and on short, finite time horizons. That applies to social virtual spaces as much as it does to physical spaces.

Look at how old people use Snapchat and you'll see lots of use of Stories. Watch a young person use Snapchat and it's predominantly one-to-one messaging using the camera (yes, I know some of the messages I receive on Snap are the same ones that person is sending to everyone one-to-one, but the hidden nature of that behavior allows me to indulge an egocentric rather than Copernican model of the universe). Now, it's possible for one app to serve multiple audiences that way, but it will have to compromise some or all of its user experience to do so.

At a deeper level, I think a person's need for ephemeral content varies across one's lifetime. It's of much higher value when one is young, especially in formative years. As one ages, and time's counter starts to run low, one turns nostalgic, and the value of permanent content, especially from long bygone days, increases, serving as beautifully aged relics of another era. One also tends to be more adept at managing one's public image the more time passes, lessening the need for ephemerality.

All this is to say that I don't think making the interface of Snapchat easier to use is going to move it off of the shoulder on its S-curve. That's addressing a different underlying cause than the one that lies behind its invisible asymptote.

The good news for Snapchat is that I don't think Facebook is going to be able to attract the youngsters. I don't care if Facebook copies Snapchat's exact app one for one, it's not going to happen. The bad news for Snapchat is that it probably isn't going to attract the oldies either. The most interesting question is whether Snapchat's cohort stays with it for life, and the next interesting question is who attracts the next generation of kids to get their first smartphones. Will they, like every generation of youth before them, demand a social network of their own? Sometimes I think they will just to claim a namespace that isn't already spoken for. Who wants to be joesmith43213 when you can be joesmith on some new sexy social network?

As a competitor, however, Instagram is more worrisome than Facebook. It came along after Facebook, as Snapchat did, and so it had the opportunity to be a social network that a younger generation could roam as pioneers, mining so much social capital yet to be claimed. It is also largely an audio-visual network, which appeals to a more visually literate generation.

When Messenger incorporated Stories into its app, it felt like a middle-aged couple dressing in cowboy chic and attending Coachella. When Instagram cribbed Stories, though, it addressed a real supply-side content creation issue for the same young'uns who used Snapchat. That is, people were being too precious about what they shared on Instagram, decreasing usage frequency. By adding Stories, they created a mechanism that wouldn't force content into the feed and whose ephemerality encouraged more liberal capture and sharing without the associated guilt.

This is a general pattern among social networks, and products more broadly: to broaden their appeal, they tend to broaden their use cases. It's rare to see a product adhere strictly to its early specificity and still avoid hitting a shoulder in its adoption S-curve. Look at Facebook today compared to Facebook in its early days. Look at Amazon's product selection now compared to when it first launched.

It takes internal fortitude for a product team to make such concessions (I would say courage, but we need to sprinkle that term around less liberally in tech). The stronger the initial product-market fit, the more vociferously your early adopters will protest when you make any changes. As with a band accused of selling out, there is an inevitable sense that a certain sharpness of flavor, of choice, has seeped out as more and more people join and as a service loosens up to accommodate more use cases.

I remember seeing so many normally level-headed people on Twitter threaten to abandon the service when Twitter announced it was increasing the character limit from 140 to 280. The irony, of course, was that the character-limit increase likely improved the service for its current users while doing nothing to attract the people who didn't use the service, even though the move was aimed mostly at those heathens.

Back to Snapchat. I wrote a long time ago that the power of social networks lies in their graph. That means many things, and in Snapchat's case it holds a particularly fiendish double bind. That Snapchat is the social network claimed by the young is both a blessing and a curse. Were a bunch of old folks to suddenly flock to Snapchat, it might induce a case of Groucho Marx's, "I don't care to belong to a club that accepts people like me as members."

Facebook

On the dimension of utility, Facebook's network effects continue to be pure and unbounded. The more people there are on Facebook, the more useful it is for anything that benefits from a global directory. Even though many folks don't use Facebook much, it's rare that I can't find them on Messenger when I don't have their email address or phone number. The complexity of analyzing Facebook is that it serves different needs in different countries and markets, social networks having strong path dependence in their usage patterns. In many countries, Facebook is the internet; it's odd as an American to travel to countries where a business's only online presence is a Facebook page, so accustomed am I to searching for American businesses on the web or Yelp first.

When it comes to the "social" aspect of social networking, the picture is less clear-cut. Here I'll focus on the U.S. market since it's the one I'm most familiar with. Because Facebook is the largest social network in history, it may be encountering scaling challenges few other entities have ever seen.

The power of a social network lies in its graph, and that is a conundrum in many ways. One is that a massive graph is a blessing until it's a curse. For social creatures like humans, who've long lived in smaller networks and tribes, a graph that conflates everyone you know is intimidating to broadcast to, except for those who have no compunction about performing no matter the audience size: famous people, marketers, and those monstrous people who share everything about their lives. You know who you are.

This is one of the diseconomies of scale for social networks that Facebook is first to run into because of its unprecedented size. Imagine you're in a room with all your family, friends, coworkers, casual acquaintances, and a lot of people you met just once but felt guilty about rejecting a friend request from. It's hundreds, maybe even thousands of people. What would you say to them? We know people maintain multiple identities for different audiences in their lives. Very few of us have to cultivate an identity for that entire blob of everyone we know. It's a situation one might encounter in the real world only a few times in life, perhaps at one's wedding, and later one's funeral. Online, though? It happens to be the default mode on Facebook's News Feed.

It's no coincidence that public figures, those who have the most practice at having to deal with this problem, are so guarded. As your audience grows larger, the chance that you'll offend someone deeply with something you say approaches 1.
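That's just compounding probability. A back-of-the-envelope sketch, assuming (purely for illustration) that each audience member independently has some tiny chance of taking deep offense at any given post:

```python
def prob_offending_someone(audience_size: int, p_offend_one: float = 0.001) -> float:
    """Chance that at least one person is deeply offended, assuming independence."""
    return 1 - (1 - p_offend_one) ** audience_size

for n in (100, 1_000, 10_000, 100_000):
    print(f"audience of {n:>7,}: {prob_offending_someone(n):.3f}")
```

Even with a one-in-a-thousand chance per person, an audience of ten thousand makes offense a near certainty.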

When I scan my Facebook feed, I see fewer and fewer people I know sharing anything at all. Map one's sharing frequency against the size of one's friend list on Facebook and I highly suspect it looks like this:

[Chart: post frequency vs. friend count on Facebook]

Again, not everyone is like this; there are the psychopaths who are comfortable sharing their thoughts no matter the size of the audience, but these people are often annoying, the type who dive right into politics at Thanksgiving before you've even spooned gravy over your turkey. This leads to a form of adverse selection in which a few over-sharers take over your News Feed.

[Not everything one shares gets distributed to one's entire friend graph given the algorithmic feed. But you as the one sharing something have no idea who will see it so you have to assume that any and every person in your graph will see it. The chilling effect is the same.]

Another form of diseconomy of scale is behind the flight to Snapchat among the young, as outlined earlier. A sure way to empty a club or a dance floor is to have the parents show up; few things are more traumatic than seeing your Dad pretend-grind on your Mom when "Yeah" by Usher comes on. Having your parents in your graph on Facebook means you have to assume they're listening, and there isn't some way to turn on the radio very loudly or run the water, as in a spy movie, when you're trying to pass secrets in a room that's bugged. The best you can do is communicate in code to which your parents don't own the decryption key; usually this takes the form of memes. Or you take the communication over to Snapchat.

Another diseconomy of scale is the increasing returns to trolling. Facebook, thanks to its bi-directional friending model, is more immune to this than, say, Twitter, with its one-way follow model and public messaging framework. On Facebook, those wishing to sow dissension need to be a bit more devious, and as revelations from the last election showed, there are ways to reach a person, directly or indirectly, through confirmation bias and flattery. The Iago playbook from Othello. On Twitter, there's no need for such scheming; you can just nuke people from your keyboard without their consent.

All of this is to say I suspect many of Facebook's more fruitful vectors for rekindling its value for socializing lie in breaking up the surface area of its service. News Feed is so monolithic a surface as to be subject to all the diseconomies of scale of social networking, even as that scale makes it such an attractive advertising landscape.

The most obvious path to this is Groups, which can subdivide large graphs into ones more unified in purpose or ideology. Google+ was onto something with Circles, but since they hadn't actually achieved any scale they were solving a problem they didn't have yet.

Instagram

Where is Instagram's invisible asymptote? This is one of the trickier ones to contemplate as it continues to grow without any obvious end in sight.

One of the advantages to Instagram is that it came about when Facebook was broadening its acceptable media types from text to photos and video. Instagram began with just square photos with a simple caption, no links allowed, no resharing.

This had a couple of advantages. One is that it's harder to troll or be insufferable in photos than it is in text. Photos tend to soften the edge of boasts and provocations. More people are skilled at being hurtful in text than in photos. Instagram has also tended to be more aggressive than other networks, especially in contrast to, say, Twitter, at policing the emotional tenor of its network, turning its attention most recently to addressing trolls in the comment sections.

Of course photos are not immune to this phenomenon. The "look at my perfect life" boasting of Instagram is many people's chief complaint about the app and likely the primary driver of people feeling lousy after scrolling through their feed there. Still, outright antagonism is harder on Instagram, given it isn't an open public graph like Twitter. The one direct vector is comments, and Instagram is working on that.

In being a pure audio-visual network at a time when Facebook and most other networks were mixed-media, Instagram siphoned off many people for whom the best part of Facebook was just the photos and videos; again, we often, as with Twitter, over-estimate the product-market fit and TAM of text. If Facebook just showed photos and videos for a week I suspect their usage would grow, but since they own Instagram...

As with other social networks that grow, Instagram broadened its formats early on to head off several format-based asymptotes. Non-square photos and videos with gradually lengthening time limits have broadened the use cases and, more importantly, removed some level of production friction.

The move to copy Snapchat's Stories format was the next giant asymptote avoided. The precious nature of sharing on Instagram was a drag on posting frequency. Stories solves the supply-side issue for content in several ways. One is that since it requires you to explicitly tap into viewing it from the home feed screen, it shifts the onus for viewing the content entirely to the audience. This frees the content creator from much of the guilt of polluting someone else's feed. The expiring nature of the content removes another of a publisher's inhibitions about littering the digital landscape. It unlocked so much content that I now regularly fail to make it through more than a tiny fraction of the Stories on Instagram. Even friends who don't publish a lot now often put their content in Stories rather than posting to the main feed.

The very format of Stories, with its full-screen vertical orientation, cues the user that this format is meant for the native way we hold our devices as smartphone photographers, rather than accommodating the more natural landscape way that audiences view the world, with eyes side-by-side in one's head. Stories includes accoutrements like gaudy stickers and text overlays and face filters that aren't in the toolset for Instagram's main feed photo/video composer, perhaps to preserve some aesthetic separation between the main feed and Stories.

There is a purity about Instagram which makes even its ads perfectly native: everything on the service is an audio-visual advertisement. I see people complain about the ad load in Instagram, but if you really look at your feed, it's always had an ad load of 100%.

I just opened my feed and looked through the first twenty posts, and I'd classify them all as ads: about how great my meal was, for beautiful travel destinations, for the exquisite craft of various photographers and cinematographers, for an actor's upcoming film, for Rihanna's new lingerie line or makeup drop, for an elaborate dish a friend cooked, for a current concert tour, for how funny someone is, for someone's gorgeous new headshot, and for a few sporting events and teams. And yes, a few of them were official Instagram ads.

I don't mean this in a negative way. One might lob this accusation at all social networks, but the visual nature of Instagram absorbs the signaling function of social media in the most elegant and unified way. For example, messaging apps consist of a lot of communication that isn't advertising. But that's exactly why a messaging app like Messenger isn't as lucrative an ad platform as Instagram is and will be. If ads weren't marked explicitly, and if they weren't so obviously from accounts I don't follow, it's not clear to me that they'd be jarringly different from all the other content in the feed.

The irony is that, as Facebook broadened its use cases and supported media types to continue to expand, the purity of Instagram may have made it a more scalable network in some ways.

Of course, every product or service has some natural ceiling. To take one example, messaging with other folks is still somewhat clunky on Instagram; it feels tacked on. Considering how much young people use Snapchat as their messaging app of choice, there's likely attractive headroom for Instagram here.

Rumors Instagram is contemplating a separate messaging app make sense. It would be ironic if Instagram separated out the more broadcast nature of its core app from the messaging use case in two different apps before Snapchat did. As noted earlier, it feels as if Snapchat is constantly fighting to balance the messaging parts of its app with the more broadcast elements like Stories and Discover, and separate apps might be one way to solve that more effectively.

As with all social networks that are mobile-phone dominant, there are limits to what can be optimized for in a single app when all you have to work with is one rectangular phone screen. The mobile phone revolution forced a focus in design which created billions of dollars in value, but Instagram, like all phone apps, will run into the asymptote that is the limit of how much you can jam into one app.

Instagram has already had some experience in dealing with this conundrum, creating separate apps like Boomerang or Hyperlapse that keep a lid on the complexity of the Instagram app itself and which bring additional composition techniques to the top level of one's phone. I often hear people counsel against launching separate apps because of the difficulty of getting adoption of even a single app, but that doesn't mean that separate apps aren't sometimes the most elegant way to deal with the spatial design constraints of mobile.

On Instagram, content is still largely short in nature, so longer narratives aren't common or well-supported. The very structure, oriented around a main feed algorithmically compiling a variety of content from all the accounts you follow, isn't optimized for a deep dive into a particular subject or storyline the way, say, a television or streaming video app is. The closest thing to long-form on Instagram is Live, but most of what I see of that is only loosely narrative, resembling an extended selfie more than a considered story. Rather than pursue long-form narrative, a more on-brand way to tackle the challenge of lengthening usage of the app may be better stringing together of existing content, similar to how Snapchat can aggregate content from one location into a feed of sorts. That can be useful for things like concerts and sporting events and breaking news events like natural disasters, protests, and marches.

In addition, perhaps there is a general limit to how far a single feed of random content arranged algorithmically can go before we suffer pure consumption exhaustion. Perhaps seeing curated snapshots from everyone will finally push us all to the breaking point of jealousy and FOMO and, across a large enough number of users, an asymptote will emerge.

However, I suspect we've moved into an age where the upper bound on vanity fatigue has shifted much higher in a way that an older generation might find unseemly. Just as we've moved into a post-scarcity age of information, I believe we've moved into a post-scarcity age of identity as well. And in this world, it's more acceptable to be yourself and leverage social media for maximal distribution of yourself in a way that ties to the fundamental way in which the topology of culture has shifted from a series of massive centralized hub and spokes to a more uniform mesh.

A last possible asymptote relates to my general sense that massive networks like Facebook and Instagram will, at some point, require more structured interactions and content units (for example, a list is a structured content unit, as is a check-in) to continue scaling. Doing so always imposes some additional friction on the content creator, but the benefit is breaking one monolithic feed into more distinct units, allowing users the ability to shift gears mentally by seeing and anticipating the structure, much like how a magazine is organized.
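To make "structured content unit" concrete, here's a minimal sketch of the difference between a free-form post and typed units a feed can recognize and render distinctly. The type names and example data are mine, not any actual Facebook or Instagram schema:

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class FreeformPost:   # the monolithic feed's default: anything goes
    author: str
    text: str

@dataclass
class CheckIn:        # structured: the feed knows this is a place at a moment in time
    author: str
    place: str

@dataclass
class ListUnit:       # structured: an ordered list the feed can render as one unit
    author: str
    title: str
    items: List[str] = field(default_factory=list)

FeedUnit = Union[FreeformPost, CheckIn, ListUnit]

def render(unit: FeedUnit) -> str:
    """A feed that can 'shift gears' because it knows what kind of unit it's showing."""
    if isinstance(unit, CheckIn):
        return f"{unit.author} checked in at {unit.place}"
    if isinstance(unit, ListUnit):
        return f"{unit.author}'s list: {unit.title} ({len(unit.items)} items)"
    return f"{unit.author}: {unit.text}"

print(render(CheckIn("maria", "a coffee shop downtown")))
print(render(ListUnit("devon", "Best ramen in NYC", ["spot one", "spot two"])))
```

The extra structure costs the creator a little friction up front, but it gives the reader the anticipatory cues of a magazine's sections rather than one undifferentiated stream.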

To fill gaps in a person's free time, an endless feed is like an endless jar of liquid, able to be poured into any crevice in one's schedule and flow of attention. To demand a person's time, on the other hand, is a higher-order task, and more structured content seems to do better on that front. People set aside dedicated time to play games like Fortnite or to watch Netflix, but less so to browse feeds. The latter happens on the fly. But ambition in software-driven Silicon Valley is endless, and so at some point every tech company tries to obtain the full complement of Infinity Stones, whether by building them or buying them, as Facebook did with Instagram and WhatsApp.

Amazon's next invisible asymptote?

I started with Amazon, but it is worth revisiting as it is hardly done with its own ambitions. After having made such massive progress on the shipping fee asymptote, what other barriers to growth might remain?

On that same topic of shipping, the next natural barrier is shipping speed. Yes, it's great that I don't have to pay for shipping, but in time customer expectations inflate. Per Jeff's latest annual letter to shareholders:

One thing I love about customers is that they are divinely discontent. Their expectations are never static – they go up. It’s human nature. We didn’t ascend from our hunter-gatherer days by being satisfied. People have a voracious appetite for a better way, and yesterday’s ‘wow’ quickly becomes today’s ‘ordinary’. I see that cycle of improvement happening at a faster rate than ever before. It may be because customers have such easy access to more information than ever before – in only a few seconds and with a couple taps on their phones, customers can read reviews, compare prices from multiple retailers, see whether something’s in stock, find out how fast it will ship or be available for pick-up, and more. These examples are from retail, but I sense that the same customer empowerment phenomenon is happening broadly across everything we do at Amazon and most other industries as well. You cannot rest on your laurels in this world. Customers won’t have it.

Why only two-day shipping for free? What if I want my package tomorrow, or today, or right now?

Amazon has already been working on this problem for over a decade, building out a higher density network of smaller distribution centers over its previous strategy of fewer, gargantuan distribution hubs. Drone delivery may have sounded like a joke when first announced on an episode of 60 Minutes, but it addresses the same problem, as does a strategy like Amazon lockers in local retail stores.

Another asymptote may be that while Amazon is great at being the site of first resort to fulfill customer demands for products, it is less capable when it comes to generating desire ex nihilo, the kind of persuasion typically associated more with a tech company like Apple or any number of luxury retailers.

At Amazon we referred to the dominant style of shopping on the service as spear-fishing. People come in, type a search for the thing they want, and 1-click it. In contrast, if you've ever gone to a mall with someone who loves shopping for shopping's sake, a clotheshorse for example, you'll see a method of shopping more akin to the gathering half of hunting and gathering. Many outfits are picked off the rack and gazed at, held up against oneself in a mirror, turned around and around in the hand for contemplation. Hands brush across racks of clothing, fingers feeling fabric in search of something unknown even to the shopper.

This is browsing, and Amazon's interface has only solved some aspects of this mode of shopping. If you have some idea of what you want, similar-items carousels can guide you through some comparison shopping, and customer reviews serve as a voice on your shoulder, but it still feels somewhat utilitarian.

Amazon's first attempts at physical stores reflect this bias in its retail style. I visited an Amazon physical bookstore in University Village the last time I was in Seattle, and it struck me as the website turned into 3-dimensional space, just with a lot less inventory. Amazon Go sounds more interesting, and I can't wait to try it out, but again, its primary selling point is the self-serve, low-friction aspect of the experience.

When I think of creating desire, I think of my last and only visit to Milan, when a woman at an Italian luxury brand store talked me into buying a sportcoat I had no idea I wanted when I walked into the store. In fact, it wasn't even on display, so minimal was the inventory when I walked in.

She looked at me, asked me some questions, then went to the back and walked back out with a single option. She talked me into trying it on, then flattered me with how it made me look, as well as pointing out some of its most distinctive qualities. Slowly, I began to nod in agreement, and eventually I knew I had to be the man this sportcoat would turn me into when it sat on my shoulders.

This challenge isn't unique to Amazon. Tech companies in general have been mining the scalable ROI of machine learning and algorithms for many years now. More data, better recommendations, better matching of customer to goods, or so the story goes. But what I appreciate about luxury retail, or even Hollywood, is its skill for making you believe that something is the right thing for you, absent previous data. Seduction is a gift, and most people in technology vastly overestimate how much of customer happiness is solvable by data-driven algorithms while underestimating the ROI of seduction.

Netflix spent $1 million on a prize to improve its recommendation algorithms, and yet it's a daily ritual for millions of people to stare at their Netflix home screen, scrolling around for ages, trying to decide what to watch. It's not just Netflix; open any streaming app. The Apple TV, a media viewing device, is most often praised for its screensaver! That's like admitting you couldn't find anything to eat on a restaurant menu but the typeface was pleasing to the eye. It's not that data can't guide a user towards the right general neighborhood, but more than one tech company will find the gradient of return on good old seduction to be much steeper than it realizes.

Still, for Amazon, this may not be as dangerous a weakness as it would be for another retailer. Much of what Amazon sells is commodities, and desire generation can be offloaded to other channels, which then watch customers leak to Amazon for fulfillment. Amazon's logistical and customer service supremacy is a devastatingly powerful advantage because it directly precedes and follows the act of payment in the shopping value chain, allowing it to capture almost all the financial return of commodity retail.

And, as Jeff's annual letter to shareholders has emphasized from the very first instance, Amazon's mission is to be the world's most customer-centric company. One way to keep finding vectors for growth is to stay attached at the hip to the fickle nature of customer unhappiness, which customers are always quite happy to share under the right circumstances, one happy consequence of this age of outrage. There is such a thing as a price umbrella, but there's also one for customer happiness.

How to identify your invisible asymptotes

One way to identify your invisible asymptotes is to simply ask your customers. As I noted at the start of this piece, at Amazon we homed in on how shipping fees were a brake on our business by simply asking customers and non-customers.

Here's where the oft-cited quote from Henry Ford is brought up as an objection: “If I had asked people what they wanted, they would have said faster horses," he is reputed to have said. Like most truisms in business, it is snappy and lossy all at once.

True, it's often difficult for customers to articulate what they want. But what's missed is that they're often much better at pinpointing what they don't want or like. What you should hear when customers say they want a faster horse is not the literal request but that they find travel by horse too slow. The savvy product person can generalize that to the broader need of traveling more quickly, and that problem can be solved any number of ways that don't involve cloning Secretariat or shooting their current horse up with steroids.

This isn't a foolproof strategy. Sometimes customers lie about what they don't like, and sometimes they can't even articulate their discontent with any clarity, but if you match their feedback with good analysis of customer behavior data and even some well-designed tests, you can usually land on a more accurate picture of the actual problem to solve.

A popular sentiment in Silicon Valley is that B2C businesses are harder product challenges than B2B because products and services for the business customer can be specified merely by talking to the customer, while the consumer market is inarticulate about its needs, per the Henry Ford quote. Again, that's only partially true, and too many of the consumer companies I've been advising recently haven't pushed hard enough on understanding or empathizing with the objections of their non-adopters.

We speak often of the economics concept of the demand curve, but in product there is another form of demand curve, and that is the contour of the customers' demands of your product or service. How comforting it would be if it were flat, but as Bezos noted in his annual letter to shareholders, the arc of customer demands is long, but it bends ever upwards. It's the job of each company, especially its product team, to continue to be in tune with the topology of this "demand curve."

I see many companies spend time analyzing funnels and seeing who emerges out the bottom. As a company grows, though (and really from the start), it's just as important to look at those who never make it through the funnel, or who jump out of it at the very top. The product-market fit gradient likely differs for each of your current and potential customer segments, and understanding how and why is a never-ending job.
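In practice, "look at who never makes it through the funnel" is as simple as tallying the drop-off at every stage rather than only counting who emerges at the bottom. A sketch with invented stage names and counts:

```python
# Hypothetical funnel; the stage names and numbers are invented for illustration.
funnel = [
    ("visited landing page", 100_000),
    ("signed up",             18_000),
    ("completed onboarding",   9_500),
    ("active in week 2",       4_200),
]

def report_dropoff(stages):
    """Print conversion and drop-off between each pair of consecutive stages."""
    for (name_a, count_a), (name_b, count_b) in zip(stages, stages[1:]):
        print(f"{name_a} -> {name_b}: {count_b / count_a:.1%} converted, "
              f"{count_a - count_b:,} people lost at this step")

report_dropoff(funnel)
```

The people lost at the very top are the ones whose objections define the next asymptote.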

When companies run focus groups on their products, they often show me the positive feedback. I'm almost invariably more interested in the folks who've registered negative feedback, though I sense many product teams find watching that material to be stomach-churning. Sometimes the feedback isn't actionable in the moment; perhaps you have such strong product-market fit with a different cohort that it doesn't apply. Still, it's never not a bit of a prick to the ego.

However, all honest negative feedback forms the basis of some asymptote in some customer segment, even if the constraint isn't constricting yet. Even if companies I meet with don't yet have an idea of how to deal with a problem, I'm always curious to see if they have a good explanation for what that problem is.

One important sidenote on this topic is that I'm often invited to give product feedback, more than I can find time for these days. When I'm doing so in person, some product teams can't help but jump in as soon as I raise any concerns, just to show they've already anticipated my objections.

I advise just listening all the way through the first time, to hear the why of someone's feedback, rather than cutting them off. You'll never be there in person with each customer to talk them out of their reasoning; your product or service has to do that work. The batting average of product people who try to explain to their customers why they're wrong is...not good. It's a sure way to put them off giving you feedback in the future, too.

Even absent external feedback, it's possible to train yourself to spot the limits to your product. One approach I've taken when talking to companies who are trying to achieve initial or new product-market fit is to ask them why every person in the world doesn't use their product or service. If you ask yourself that, you'll come up with all sorts of clear answers, and if you keep walking that road you'll find the borders of your TAM taking on greater and greater definition.

[It's true that you also need the flip side, an almost irrational positivity, to be able to survive the difficult task of product development, or to be an entrepreneur, but selection bias is such that most such people start with a surplus of optimism.]

Lastly, though I hesitate to share this, it is possible to avoid invisible asymptotes through sheer genius of product intuition. I balk for the same reason I cringe when I meet CEOs in the Valley who idolize Steve Jobs. In many ways, a product intuition that is consistently accurate across time is, like Steve Jobs, a unicorn. It's so rare an ability that to lean entirely on it is far more dangerous and high-risk than blending it with a whole suite of more accessible strategies.

It's difficult for product people to hear this because there's something romantic and heroic about the Steve Jobs mythology of creation, brilliant ideas springing from the mind of the mad genius and inventor. However, just to read a biography of Jobs is to understand how rare a set of life experiences and choices shaped him into who he was. Despite that, we've spawned a whole bunch of CEOs who wear the same outfit every day and drive their design teams crazy with nitpicky design feedback, as if the outward trappings of the man were the essence of his skill. We vastly underestimate the path dependence of his particular style of product intuition.

Jobs' gift is so rare that it's likely even Apple hasn't been able to replace it. It's not a coincidence that the Apple products that frustrate me the most right now are all the ones with "Pro" in the name. The MacBook Pro, with its flawed keyboard and bizarre Touch Bar (I'm still using the old 13" MacBook Pro with the old keyboard, hoping beyond hope that Apple will come to its senses before it becomes obsolete). The Mac Pro, which took on the unfortunately apropos shape of a trash can in its last incarnation and whose replacement still hasn't shipped, years later (I'm still typing this at home on an ancient cheese grater Mac Pro tower and ended up building a PC tower to run VR and to do photo and video editing). Final Cut Pro, which I learned on in film editing school, and which got zapped in favor of Final Cut Pro X just when FCP was starting to steal meaningful share in Hollywood from Avid. The iMac Pro, which isn't easily upgradable but is great if you're a gazillionaire.

Pro customers are typically the ones with the most clearly specified needs and workflows. Thus, pro products are ones for which listening to customers articulate what they want is a reliable path to establishing and maintaining product-market fit. But that's not something Apple seems to enjoy doing, and so the missteps it has made along these lines are exactly the types of mistakes you'd expect of it.

[I was overjoyed to read that Apple's next Mac Pro is being built using extensive feedback from media professionals. It's disappointing that it won't arrive until 2019 now but at least Apple has descended from the ivory tower to talk to the actual future users. It's some of the best news out of Apple I've heard in forever.]

Live by intuition, die by it. It's not surprising that Snapchat, another company that lives by the product intuition of one person, stumbled with a recent redesign. That a company's strengths are its weaknesses is simply the result of tight adherence to methodology. Apple and Snapchat's deus ex machina style of handing down products also rid us of CD-ROM drives and produced the iPhone, AirPods, the camera-first messaging app, and the Story format, among many other breakthroughs which a product person could hang a career on.

Because products and services live in an increasingly dynamic world, especially those targeted at consumers, they aren't governed by the immutable, timeless truths of a field like mathematics. The reason I recommend a healthy mix of intuition informed by data and feedback is that most product people I know have a product view that is slower moving than the world itself. If they've achieved any measure of success, it's often because their view of some consumer need was the right one at the right time. Product-market fit as tautology. Selection bias in looking at these people might confuse some measure of luck with some enduring product intuition.

However, just as a VC might get lucky once with some investment and be seen as a genius for life (the returns to a single buff of a VC brand name are shockingly durable), a given person's product intuition might hit the right moment in history to create a smash hit; it's rare, though, that a single person's frame will move in lockstep with that of the world. How many creatives stay relevant for a lifetime?

This is one reason sustained competitive advantage is so difficult. In the long run, endogenous variance in the quality of product leadership in a company always seems to be in the negative direction. But perhaps we are too focused on management quality and not focused enough on exogenous factors. In "Divine Discontent: Disruption’s Antidote," Ben Thompson writes:

Bezos’s letter, though, reveals another advantage of focusing on customers: it makes it impossible to overshoot. When I wrote that piece five years ago, I was thinking of the opportunity provided by a focus on the user experience as if it were an asymptote: one could get ever closer to the ultimate user experience, but never achieve it:

[Stratechery disruption diagram 1]

In fact, though, consumer expectations are not static: they are, as Bezos’ memorably states, “divinely discontent”. What is amazing today is table stakes tomorrow, and, perhaps surprisingly, that makes for a tremendous business opportunity: if your company is predicated on delivering the best possible experience for consumers, then your company will never achieve its goal.

[Stratechery disruption diagram 2]

In the case of Amazon, that this unattainable and ever-changing objective is embedded in the company’s culture is, in conjunction with the company’s demonstrated ability to spin up new businesses on the profits of established ones, a sort of perpetual motion machine; I’m not sure that Amazon will beat Apple to $1 trillion, but they surely have the best shot at two.

Pattern recognition is the default operating mode of much of Silicon Valley and other fields, but it is almost always, by its very nature, backwards-looking. One can hardly fault most people for resorting to it, because it's a way of minimizing blame, and the economic returns of the Valley are so amplified by the structural advantages of winners that even matching market beta makes for a comfortable living.

However, if consumer desires are shifting, it's always just a matter of time before pattern recognition leads to an invisible asymptote. One reason startups are often the tip of the spear for innovation in technology is that they can't rely on market beta to just roll along. Achieving product-market fit for them is an existential challenge, and they have no backup plans. Imagine an investor who has to achieve alpha to even survive.

Companies can stay nimble by turning over their product leaders, but as a product professional, staying relevant to the marketplace is a never-ending job, even if your own life is irreversible and linear. I find the best way to unmoor myself from my most strongly held product beliefs is to increase my inputs. Besides, the older I get, the more I've grown to enjoy that strange dance with the customer. Leading a partner in a dance may give you a feeling of control, but it's a world of difference from dancing by yourself.

One of my favorite Ben Thompson posts is "What Clayton Christensen Got Wrong," in which he built on Christensen's theory of disruption to note that low-end disruption can be avoided if you can differentiate on user experience. It is difficult and perhaps even impossible to over-serve on that axis. Tesla came into the electric car market with a car that was way more expensive than internal combustion engine cars (this definitely wasn't low-end disruption), had shorter range, and required really slow charging at a time when very few public chargers existed.

However, Tesla got an interesting foothold because on another axis it really delivered. Yes, the range allowed for more commuting without having to charge twice a day, but more importantly, for the wealthy, it was a way to signal one's environmental consciousness in a package that was much, much sexier than the Prius, the previous green car of choice among celebrities in LA. It will be hard for Tesla to continue to rely on that in the long run as the most critical dimension of user experience will likely evolve, but it's a good reminder that "user experience" is broad enough to encompass many things, some less measurable than others.

You can't over-serve on user experience, Thompson argues; as a product person, I'd argue, in parallel, that it is difficult and likely impossible to understand your customer too deeply. Amazon's mission to be the world's most customer-centric company is inherently a long-term strategy because it is one with an infinite time scale and no asymptote to its slope.

In my experience, the most successful people I know are much more conscious of their own personal asymptotes at a much earlier age than others. They ruthlessly and expediently flush them out. One successful person I know determined in grade school that she'd never be a world-class tennis player or pianist. Another mentioned to me how, in their freshman year of college, they realized they'd never be the best mathematician in their own dorm, let alone in the world. Another knew a year into a job that he wouldn't be the best programmer at his company and so he switched over into management; he rose to become CEO.

By discovering their own limitations early, they are also quicker to discover vectors on which they're personally unbounded. Product development will always be a multi-dimensional problem, often frustratingly so, but reducing that dimensionality often costs so little that it should be more widely employed.

This isn't to say a person needs to aspire to be the best at everything they do. I'm at peace with the fact that I'll likely always be a middling cook, that I won't win the Tour de France, and that I'm destined to be behind a camera and not in front of it. When it comes to business, however, and surviving in the ruthless Hobbesian jungle, where much more is winner-take-all than it once was, the idea that you can be whatever you want to be, or build whatever you want to build, is a sure path to a short, unhappy existence.