Status as a Service (StaaS)

Editor's Note 1: I have no editor.

Editor’s Note 2: I would like to assure new subscribers to this blog that most of my posts are not as long as this one. Or as long as my previous one. My long break from posting here means that this piece is a collection of what would’ve normally been a series of shorter posts. I put section titles below, so skip any that don’t interest you. My short takes are on Twitter. All that said, I apologize for nothing.

Editor's Note 3: I lied, I apologize for one thing, and that is my long writing hiatus. Without a work computer, I had to resort to using my 7-year-old 13" MacBook Pro as my main computer, and sometime last year my carpal tunnel syndrome returned with a vengeance and left my wrists debilitated with pain. I believe all of you who say your main computer is a laptop or, shudder, an iPad, but goodness gracious I cannot type on a compact keyboard for long periods of time without having my hands turn into useless stumps. It was only the return to typing almost exclusively on my old friend the Kinesis Advantage 2 ergo keyboard that put me back in the game.

Editor’s Note 4: I was recently on Patrick O'Shaughnessy's podcast Invest Like the Best, and near the end of that discussion, I mentioned a new essay I'd been working on about the similarities between social networks and ICO's. This is that piece.

Status-Seeking Monkeys

"It is a truth universally acknowledged, that a person in possession of little fortune, must be in want of more social capital."

So wrote Jane Austen, or she would have, I think, if she were chronicling our current age (instead we have Taylor Lorenz, and thank goodness for that).

Let's begin with two principles:

  • People are status-seeking monkeys*

  • People seek out the most efficient path to maximizing social capital

* Status-Seeking Monkeys will also be the name of my indie band, if I ever learn to play the guitar and start a band

I begin with these two observations of human nature because few would dispute them, yet I seldom see social networks, some of the largest and fastest-growing companies in the history of the world, analyzed on the dimension of status or social capital.

It’s in part a measurement issue. Numbers lend an air of legitimacy and credibility. We have longstanding ways to denominate and measure financial capital and its flows. Entire websites, sections of newspapers, and a ton of institutions report with precision on the prices and movements of money.

We have no such methods for measuring the values and movement of social capital, at least not with anywhere near the accuracy or precision. The body of research feels both broad and yet meager. If we had better measures besides user counts, this piece and many others would be full of charts and graphs that added a sense of intellectual heft to the analysis. There would be some annual presentation called the State of Social akin to Meeker's Internet Trends Report, or perhaps it would be a fifty page sub-section of her annual report.

Despite this, most of the social media networks we study generate much more social capital than actual financial capital, especially in their early stages; almost all such companies have internalized one of the popular truisms of Silicon Valley, that in the early days, companies should postpone revenue generation in favor of rapid network scaling. Social capital has much to say about why social networks lose heat, stall out, and sometimes disappear altogether. And, while we may not be able to quantify social capital, as highly attuned social creatures, we can feel it.

Social capital is, in many ways, a leading indicator of financial capital, and so its nature bears greater scrutiny. Not only is this good investment or business practice; analyzing social capital dynamics can also help explain all sorts of online behavior that would otherwise seem irrational.

In the past few years, much progress has been made analyzing Software as a Service (SaaS) businesses. Not as much has been made on social networks. Analysis of social networks still strikes me as being like economic growth theory long before Paul Romer's paper on endogenous technological change. However, we can start to demystify social networks if we also think of them as SaaS businesses, but instead of software, they provide status. This post is a deep dive into what I refer to as Status as a Service (StaaS) businesses.

Think of this essay as a series of strongly held hypotheses; without access to the types of data I’m not even sure exist, it’s difficult to be definitive. As ever, my wise readers will add or push back as they always do.

Traditional Network Effects Model of Social Networks

One of the fundamental lessons of successful social networks is that they must first appeal to people when they have few users. Typically this is done through some form of single-user utility.

This is the classic cold start problem of social. The answer to the traditional chicken-and-egg question is actually answerable: what comes first is a single chicken, and then another chicken, and then another chicken, and so on. The harder version of the question is why the first chicken came and stayed when no other chickens were around, and why the others followed.

The second fundamental lesson is that social networks must have strong network effects so that as more and more users come aboard, the network enters a positive flywheel of growth, a compounding value from positive network effects that leads to hockey stick growth that puts dollar signs in the eyes of investors and employees alike. "Come for the tool, stay for the network" wrote Chris Dixon, in perhaps the most memorable maxim for how this works.

Even before social networks, we had Metcalfe's Law on telecommunications networks:

The value of a telecommunications network is proportional to the square of the number of connected users of the system (n^2)

This ported over to social networks cleanly. It is intuitive, and it includes that tantalizing math formula that explains why growth curves for social networks bend up sharply at the ankle of the classic growth S-curve.
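
To make the "ankle" concrete, here is the law as a worked equation (just a restatement of the proportionality above; the constant c is a placeholder for whatever a single connection is worth):

```latex
V(n) = c\,n^{2}
\quad\Longrightarrow\quad
V(n+1) - V(n) = c\,(2n + 1)
```

Each incremental user adds value roughly proportional to the number of users already on the network, which is the mathematical reason the curve steepens as it grows.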

But dig deeper and many many questions remain. Why do some large social networks suddenly fade away, or lose out to new tiny networks? Why do some new social networks with great single-player tools fail to transform into networks, while others with seemingly frivolous purposes make the leap? Why do some networks sometimes lose value when they add more users? What determines why different networks stall out at different user base sizes? Why do some networks cross international borders easily while others stay locked within specific countries? Why, if Metcalfe's Law holds, do many of Facebook's clones of other social network features fail, while some succeed, like Instagram Stories?

What ties many of these explanations together is social capital theory, and how we analyze social networks should include a study of a social network's accumulation of social capital assets and the nature and structure of its status games. In other words, how do such companies capitalize, either consciously or not, on the fact that people are status-seeking monkeys, always trying to seek more of it in the most efficient way possible?

To paraphrase Nicki Minaj, “If I'm fake I ain't notice cause my followers ain't.”

[Editor’s note: sometimes the followers actually are fake.]

Utility vs. Social Capital Framework

Classic network effects theory still holds; I’m not discarding it. Instead, let's append some social capital theory. Together, those form the two axes on which I like to analyze social network health.

Actually, I tend to use three axes to dissect social networks.

The three axes on which I evaluate social network strength

For this post, though, I'm only going to look at two of them, utility and social capital, as the entertainment axis adds a whole lot of complexity which I'll perhaps explain another time.

The basic two-axis framework guiding much of the social network analysis in this piece

Utility doesn't require much explanation, though we often use the term very loosely and categorize too many things as utility when they aren't that useful (we generally confuse circuses for bread and not the reverse; Fox News, for example, is more entertainment than utility, as is common of many news outlets). A social network like Facebook allows me to reach lots of people I would otherwise have a harder time tracking down, and that is useful. A messaging app like WhatsApp allows me to communicate with people all over the world without paying texting or incremental data fees, which is useful. Quora and Reddit and Discord and most every social network offer some forms of utility.

The other axis is, for lack of a more precise term, the social capital axis, or the status axis. Can I use the social network to accumulate social capital? What forms? How is it measured? And how do I earn that status?

There are several different paths to success for social networks, but those which compete on the social capital axis are often more mysterious than pure utilities. Competition on raw utility tends to be Darwinian, ruthless, and highly legible. This is the world, for example, of communication services like messaging and video conferencing. Investing in this space also tends to be a bit more straightforward: how useful is your app or service, can you get distribution, etc. When investors send me decks on things in this category, I am happy to offer an opinion, but I enjoy puzzling over the world of artificial prestige even more.

The creation of a successful status game is so mysterious that it often smacks of alchemy. For that reason, entrepreneurs who succeed in this space are thought of as a sort of shaman, perhaps because most investors are middle-aged white men who are already so high status they haven't the first idea why people would seek virtual status (more on that later).

With the rise of Instagram, with its focus on photos and filters, and Snapchat, with its ephemeral messaging, and Vine, with its 6-second video limit, for a while there was a thought that new social networks would be built on some new modality of communications. That's a piece of it, but it's not the complete picture, and not for the reasons many people think, which is why we have seen a whole bunch of strange failed experiments in just about every odd combination of features and filters and artificial constraints in how we communicate with each other through our phones. Remember Facebook's Snapchat competitor Slingshot, in which you had to unlock any messages you received by responding with a message? It felt like product design by mad libs.

When modeling how successful social networks create a status game worth playing, a useful metaphor is one of the trendiest technologies: cryptocurrency.

Social Networks as ICO's

How is a new social network analogous to an ICO?

  1. Each new social network issues a new form of social capital, a token.

  2. You must show proof of work to earn the token.

  3. Over time it becomes harder and harder to mine new tokens on each social network, creating built-in scarcity.

  4. Many people, especially older folks, scoff at both social networks and cryptocurrencies.

["Why does anyone care what you ate for lunch?" is the canonical retort about any social network, though it’s fading with time. Both social networks and ICO's tend to drive skeptics crazy because they seem to manufacture value out of nothing. The shifting nature of scarcity will always leave a wake of skepticism and disbelief.]

Years ago, I stayed at the house of a friend whose high school daughter was home upstairs with some classmates. As we adults drank wine in the kitchen downstairs while waiting for dinner to finish in the oven, we heard lots of music and stomping and giggling coming from upstairs.

When we finally called them down for dinner, I asked them what all the ruckus had been. My friend's daughter proudly held up her phone to show me a recording they'd posted to an app called Musical.ly. It was a lip synch and dance routine replete with their own choreography. They'd rehearsed the piece more times than they could count. It showed. Their faces were shiny with sweat, and they were still breathing hard from the exertion. Proof of work indeed.

I spent the rest of the dinner scrolling through the app, fascinated, interviewing the girls about what they liked about the app, why they were on it, what share of their free time it had captured. I can't tell if parents are offended or glad when I spend much of the time visiting them interviewing their sons and daughters instead, but in the absence of good enough metrics with which to analyze this space, I subscribe to the Jane Goodall theory of how to study your subject. Besides, status games of adults are already well covered by the existing media, from literature to film. Children's status games, once familiar to us, begin to fade from our memory as time passes, and their modern forms have been drastically altered by social media.

Other examples abound. Perhaps you've read a long and thoughtful response by a random person on Quora or Reddit, or watched YouTube vloggers publishing night after night, or heard about popular Vine stars living in houses together, helping each other shoot and edit 6-second videos. While you can outsource Bitcoin mining to a computer, people still mine for social capital on social networks largely through their own blood, sweat, and tears.

[Aside: if you yourself are not an aspiring social network star, living with one is...not recommended.]

Perhaps, if you've spent time around today's youth, you've watched with a mixture of horror and fascination as a teen snaps dozens of selfies before publishing the most flattering one to Instagram, only to pull it down if it doesn't accumulate enough likes within the first hour. It’s another example of proof of work, or at least vigorous market research.

Almost every social network of note had an early signature proof of work hurdle. For Facebook it was posting some witty text-based status update. For Instagram, it was posting an interesting square photo. For Vine, an entertaining 6-second video. For Twitter, it was writing an amusing bit of text of 140 characters or fewer. Pinterest? Pinning a compelling photo. You can likely derive the proof of work for other networks like Quora and Reddit and Twitch and so on. Successful social networks don't pose trick questions at the start; it’s usually clear what they want from you.

[An aside about exogenous social capital: you might complain that your tweets are more interesting and grammatical than those of, say, Donald Trump (you're probably right!). Or that your photos are better composed and more interesting at a deep level of photographic craft than those of Kim Kardashian. The difference is, they bring a massive supply of exogenous pre-existing social capital from another status game, the fame game, to every table, and some forms of social capital transfer quite well across platforms. Generalized fame is one of them. More specific forms of fame or talent might not retain their value as easily: you might follow Paul Krugman on Twitter, for example, but not have any interest in his Instagram account. I don't know if he has one, but I probably wouldn't follow it if he did, sorry Paul, it’s nothing personal.]

If you've ever joined one of these social networks early enough, you know that, on a relative basis, getting ahead of others in terms of social capital (followers, likes, etc.) is easier in the early days. Some people who were featured on recommended follower lists in the early days of Twitter have follower counts in the 7-figures, just as early masters of Musical.ly and Vine accumulated massive and compounding follower counts. The more people who follow you, the more followers you gain because of leaderboards and recommended follower algorithms and other such common discovery mechanisms.

It's true that as more people join a network, more social capital is up for grabs in the aggregate. However, in general, if you come to a social network later, unless you bring incredible exogenous social capital (Taylor Swift can join any social network on the planet and collect a massive following immediately), the competition for attention is going to be more intense than it was in the beginning. Everyone has more of an understanding of how the game works so the competition is stiffer.

Why Proof of Work Matters

Why does proof of work matter for a social network? If people want to maximize social capital, why not make that as easy as possible?

As with cryptocurrency, if it were so easy, it wouldn't be worth anything. Value is tied to scarcity, and scarcity on social networks derives from proof of work. Status isn't worth much if there's no skill and effort required to mine it. It's not that a social network that makes it easy for lots of users to perform well can't be a useful one, but competition for relative status still motivates humans. Recall our first tenet: humans are status-seeking monkeys. Status is a relative ladder. By definition, if everyone can achieve a certain type of status, it’s no status at all, it’s a participation trophy.

Musical.ly created a hurdle for gaining followers and status that wasn't easily cleared by many people. However, for some, especially teens, and especially girls, it was a status game they were particularly well suited to win. And so they flocked there, because, according to my second tenet, people look for the most efficient ways to accumulate the most social capital.

Recall Twitter in the early days, when it was a harmless but somewhat inert status update service. I went back to look at my first few tweets on the service from some 12 years ago and my first two, spaced about a year apart, were both about doing my taxes. Looking back at them, I bore even myself. Early Twitter consisted mostly of harmless but dull life status updates, a lot of “is this thing on?” tapping on the virtual microphone. I guess I am in the camp of not caring about what you had for lunch after all. Get off my lawn, err, phone screen!

What changed Twitter, for me, was the launch of Favstar and Favrd (both now defunct, ruthlessly murdered by Twitter), these global leaderboards that suddenly turned the service into a competition to compose the most globally popular tweets. Recall, the Twitter graph was not as dense then as it is now, nor did distribution accelerants like one-click retweeting and Moments exist yet.

What Favstar and Favrd did was surface really great tweets and rank them on a scoreboard, and that, to me, launched the performative revolution in Twitter. It added the feedback the loop had been missing, birthing a new type of comedian, the master of the punchline in 140 characters or fewer (the internet has killed the joke, humor is all punchline now that the setup of the joke is assumed to be common knowledge thanks to Google).

The launch of these global tweet scoreboards reminds me of the moment in the now classic film** Battle Royale when Beat Takeshi Kitano informs a bunch of troublemaking school kids that they’ve been deported to an island and are to fight to the death, last student standing wins, and that those who try to sneak out of designated battle zones will be killed by explosive collars. I'm not saying that Twitter is a life-or-death struggle, but you need only time travel back to pre-product-market-fit Twitter to see the vast difference in tone.

**Now classic because Battle Royale has subsequently been ripped off, err, paid tribute to by The Hunger Games, Fortnite, Maze Runner, and just about every YA franchise out there because who understands barbarous status games better than teenagers?

Favstar.fm screenshot. Just seeing some of those old but familiar avatars makes me sentimental, perhaps like how early Burning Man devotees think back on its early years, before the moneyed class came in and ruined that utopia of drugs, nudity, and art.

Chasing down old Favrd screenshots, I still laugh at the tweets surfaced.

One more Favrd screenshot just for old time’s sake

It's critical that not everyone can quip with such skill. This gave Twitter its own proof of work, and over time the overall quality of tweets improved as that feedback loop spun and tightened. The strategies that gained the most likes were fed in increasing volume into people's timelines as everyone learned from and competed with each other.

Read Twitter today and hardly any of the tweets are the mundane life updates of its awkward pre-puberty years. We are now in late-stage performative Twitter, where nearly every tweet is hungry as hell for favorites and retweets, and everyone is a trained pundit or comedian. It's hot takes and cool proverbs all the way down. The harmless status update Twitter was a less thirsty scene but also not much of a business. Still, sometimes I miss the halcyon days when not every tweet was a thirst trap. I hate the new Kanye, the bad mood Kanye, the always rude Kanye, spaz in the news Kanye, I miss the sweet Kanye, chop up the beats Kanye.

Thirst for status is potential energy. It is the lifeblood of a Status as a Service business. To succeed at carving out unique space in the market, social networks offer their own unique form of status token, earned through some distinctive proof of work.

Conversely, let's look at something like Prisma, a photo filter app which tried to pivot to become a social network. Prisma surged in popularity upon launch by making it trivial to turn one of your photos into a fine art painting with one of its many neural-network-powered filters.

It worked well. Too well.

Since almost any photo could, with one click, be turned into a gorgeous painting, no single photo really stood out. The star was the filter, not the user, and so it didn't really make sense to follow any one person over any other person. Without that element of skill, no framework for a status game or skill-based network existed. It was a utility that failed at becoming a Status as a Service business.

In contrast, while Instagram filters, in its earliest days, improved upon the somewhat limited quality of smartphone photos at the time, the quality of those photos still depended for the most part on the photographer. The composition, the selection of subject matter, these still derived from the photographer’s craft, and no filter could elevate a poor photo into a masterpiece.

So, to answer an earlier question about how a new social network takes hold, let’s add this: a new Status as a Service business must devise some proof of work that depends on some actual skill to differentiate among users. If it does, then it creates, like an ICO, some new form of social capital currency of value to those users.

This is not the only way a social network can achieve success. As noted before, you can build a network based around utility or entertainment. However, the addition of status helps us to explain why some networks which seemingly offer little in the way of meaningful utility (is a service that forces you to make only a six second video useful?) still achieve traction.

Facebook's Original Proof of Work

You might wonder, how did Facebook differentiate itself from MySpace? It started out as mostly a bunch of text status updates, nothing necessarily that innovative.

In fact, Facebook launched with one of the most famous proof of work hurdles in the world: you had to be a student at Harvard. By requiring a harvard.edu email address, Facebook drafted off of one of the most elite cultural filters in the world. It's hard to think of many more powerful slingshots of elitism.

By rolling out, first to Ivy League schools, then to colleges in general, Facebook scaled while maintaining a narrow age dispersion and exclusivity based around educational credentials.

Layer that on top of the broader social status game of stalking attractive members of the other sex that animates much of college life, and Facebook was a service that tapped into reserves of some of the most heated social capital competitions in the world.

Social Capital ROI

If a person posts something interesting to a platform, how quickly do they gain likes and comments and reactions and followers? The second tenet is that people seek out the most efficient path to maximize their social capital. To do so, they must have a sense for how different strategies vary in effectiveness. Most humans seem to excel at this.

Young people, with their much higher usage rate on social media, are the most sensitive and attuned demographic to the payback period and ROI on their social media labor. So, for example, young people tend not to like Twitter but do enjoy Instagram.

It's not that Twitter doesn't dole out the occasional viral supernova; every so often someone composes a tweet that goes over 1K and then 10K likes or retweets (Twitter should allow people to buy a framed print of said tweet with a silver or gold 1K club or 10K club designation to supplement its monetization). But it’s not common, and most tweets are barely seen by anyone at all. Pair that with young people's bias towards and skill advantage in visual mediums over textual ones, and it's not surprising Instagram is their social battleground of preference (video games might be the most lucrative battleground for the young if you broaden your definition of social networks, and that's entirely reasonable, though that arena skews male).

Instagram, despite not having any official reshare option, allows near unlimited hashtag spamming, and that allows for more deterministic, self-generated distribution. Twitter also isn't as great for spreading visual memes because of its stubborn attachment to cropping photos to maintain a certain level of tweet density per phone screen.

The gradient of your network's social capital ROI can often govern your market share among different demographics. Young girls flocked to Musical.ly in its early days because they were uniquely good at the lip synch dance routine videos that were its bread and butter. In this age of neverending notifications, heavy social media users are hyper aware of differing status ROI among the apps they use.

I can still remember posting the same photos to Flickr and Instagram for a while and seeing how quickly the latter passed the former in feedback. If I were an investor or even an employee, I might have something like a representative basket of content that I'd post from various test accounts on different social media networks just to track social capital interest rates and liquidity among the various services.
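
As a minimal sketch of what that kind of benchmark might look like, assuming you've already gathered the engagement counts by hand or with whatever tooling you have (the networks, weights, and numbers below are all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class PostResult:
    network: str            # e.g. "flickr", "instagram" -- hypothetical labels
    hours_live: float       # how long the post has been up
    likes: int
    comments: int
    followers_at_post: int  # audience size when the post went up

def social_capital_rate(r: PostResult) -> float:
    """Engagement per follower per hour: a crude proxy for the 'interest rate'
    a network pays on the same piece of work."""
    engagement = r.likes + 2 * r.comments   # weight comments higher (arbitrary choice)
    return engagement / max(r.followers_at_post, 1) / r.hours_live

# The same photo posted to two services from comparable test accounts (made-up numbers).
basket = [
    PostResult("flickr", hours_live=24, likes=4, comments=1, followers_at_post=300),
    PostResult("instagram", hours_live=24, likes=85, comments=9, followers_at_post=300),
]

for result in sorted(basket, key=social_capital_rate, reverse=True):
    print(f"{result.network:>10}: {social_capital_rate(result):.5f} engagement/follower/hour")
```

The particular weighting doesn't matter; what matters is that the same unit of work earns very different rates of return on different networks, and that the spread is measurable.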

Some features can increase the reach of content on any network. A reshare option like the retweet button is a massive accelerant of virality on apps where the social graph determines what makes it into the feed. In an effort to increase engagement, Twitter has, over the years, become more and more aggressive to increase the liquidity of tweets. It now displays tweets that were liked by people you follow, even if they didn't retweet them, and it has populated its search tab with Moments, which, like Instagram's Discover Tab, guesses at other content you might like and provides an endless scroll filled with it.

TikTok is an interesting new player in social media because its default feed, For You, relies on a machine learning algorithm to determine what each user sees; the feed of content from creators you follow, in contrast, is hidden one pane over. If you are new to TikTok and have just uploaded a great video, the selection algorithm promises to distribute your post much more quickly than if you were sharing it on a network that relies on the size of your following, which most people have to build up over a long period of time. Conversely, if you come up with one great video but the rest of your work is mediocre, you can't count on continued distribution on TikTok since your followers live mostly in a feed driven by the TikTok algorithm, not their follow graph.

The result is a feedback loop that is much more tightly wound than that of other social networks, both in the positive and negative direction. Theoretically, if the algorithm is accurate, the content in your feed should correlate most closely with the quality of the work and its alignment with your personal interests rather than simply drawing from the accounts you follow. At a time when Bytedance is spending tens (hundreds?) of millions of marketing dollars in a bid to acquire users in international markets, the rapid ROI on new creators' work is a helpful quality in ensuring they stick around.

This development is interesting for another reason: graph-based social capital allocation mechanisms can suffer from runaway winner-take-all effects. In essence, some networks reward those who gain a lot of followers early on with so much added exposure that they continue to gain more followers than other users, regardless of whether they've earned it through the quality of their posts. One hypothesis on why social networks tend to lose heat at scale is that this type of old money can't be cleared out, and new money loses the incentive to play the game.
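
A toy simulation makes the contrast visible. This isn't how any real feed works, just a sketch under the simplest assumptions I could pick: half the creators join early with a follower head start, everyone has a fixed level of skill, and the only variable is whether a post's reach comes from the creator's existing followers or from that post's quality alone.

```python
import random

random.seed(0)

def simulate(algorithmic: bool, creators: int = 200, rounds: int = 500) -> float:
    """Toy model of distribution on a social network.

    Graph-based (algorithmic=False): a post's reach is proportional to the
    creator's current follower count, so early gains compound.
    Algorithmic (algorithmic=True): a post's reach depends only on that post's
    quality, so a head start slowly erodes.
    Returns the share of all followers held by early joiners at the end.
    """
    skill = [random.random() for _ in range(creators)]
    # The first half of creators joined early and start with 100x the followers.
    followers = [1000.0 if i < creators // 2 else 10.0 for i in range(creators)]
    for _ in range(rounds):
        for i in range(creators):
            post_quality = skill[i] * random.random()
            reach = 500 * post_quality if algorithmic else followers[i] * post_quality
            followers[i] += 0.02 * reach   # some fraction of reach converts to follows
    return sum(followers[: creators // 2]) / sum(followers)

print("follower share held by early joiners after 500 rounds")
print(f"  graph-based feed:  {simulate(algorithmic=False):.0%}")
print(f"  algorithmic feed:  {simulate(algorithmic=True):.0%}")
```

In runs of this toy, the graph-based variant leaves the early joiners holding nearly all of the followers regardless of what anyone posts later, while the algorithmic variant lets late arrivals claw back a meaningful share. The old money never gets cleared out in the first case.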

One of the striking things about Silicon Valley as a region versus East Coast power corridors like Manhattan is its dearth of old money. There are exceptions, but most of the fortunes in the Bay Area are not just new money but freshly minted new money from this current generation of tech. You have some old VC or semiconductor industry fortunes, but most of those people are still alive.

It's in NYC that you run into multi-generational old money hanging around on the Upper East or West sides of Manhattan, or encounter old wealth being showered around town by young socialites whose source of wealth is simply a fortuitous last name. Trickle down economics works, but often just down the veins of family trees.

It's not that the existence of old money or old social capital dooms a social network to inevitable stagnation, but a social network should continue to prioritize distribution for the best content, whatever the definition of quality, regardless of the vintage of user producing it. Otherwise a form of social capital inequality sets in, and in the virtual world, where exit costs are much lower than in the real world, new users can easily leave for a new network where their work is more properly rewarded and where status mobility is higher.

It may be that Silicon Valley never comes to be dominated by old money, and I'd consider that a net positive for the region. I'd rather the most productive new work be rewarded consistently by the marketplace than a bunch of stagnant quasi-monopolies hang on to wealth as they reach bloated scales that aren't conducive to innovation. The same applies to social networks and multi-player video games. As a newbie, how quickly, if you put in the work, are you "in the game"? Proof of work should define its own meritocracy.

The same way many social networks track keystone metrics like time to X followers, they should track the ROI on posts for new users. It's likely a leading metric that governs retention or churn. It’s useful, as an investor or even as a curious onlooker, to test a social network by posting varied content from test accounts to gauge the efficiency and fairness of the distribution algorithm.
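
As a sketch of what that keystone metric might look like, here is one way to compute it: average engagement per post, counted only for posts made in a user's first week. The event records, the seven-day window, and the 24-hour engagement count are all inventions for illustration, not any network's actual schema.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (user_id, signup_time, post_time, engagements_in_first_24h)
posts = [
    ("u1", datetime(2019, 1, 1), datetime(2019, 1, 2), 14),
    ("u1", datetime(2019, 1, 1), datetime(2019, 1, 5), 3),
    ("u2", datetime(2019, 1, 3), datetime(2019, 1, 3), 0),
    ("u2", datetime(2019, 1, 3), datetime(2019, 1, 10), 1),
]

def new_user_post_roi(posts, new_user_window=timedelta(days=7)):
    """Average engagements per post, counting only posts made within each
    user's first week on the service -- a candidate leading indicator for
    whether a new user sticks around."""
    per_user = defaultdict(list)
    for user_id, signed_up, posted_at, engagements in posts:
        if posted_at - signed_up <= new_user_window:
            per_user[user_id].append(engagements)
    return {user: sum(e) / len(e) for user, e in per_user.items()}

print(new_user_post_roi(posts))   # {'u1': 8.5, 'u2': 0.5}
```

Aggregated by signup cohort, the distribution of this number, along with how long new users wait for their first like, is the sort of measure you'd expect to move ahead of the retention curves themselves.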

Whatever the mechanisms, social networks must devote a lot of resources to market making between content and the right audience for that content so that users feel sufficient return on their work. Distribution is king, even when, or especially when, it allocates social capital.

Why Copying Proof of Work Is a Lousy Strategy for Status-Driven Networks

We often see a new social network copy a successful incumbent but with a minor twist thrown in. In the wake of Facebook’s recent issues, we may see some privacy-first social networks, but we have an endless supply of actual knockoffs to study. App.net and then Mastodon were two prominent Twitter clones that promised some differentiation but which built themselves on the same general open messaging framework.

Most of these near clones have failed and will fail. The reason that matching the basic proof of work hurdle of a Status as a Service incumbent fails is that it generally duplicates the status game that already exists. By definition, if the proof of work is the same, you're not really creating a new status ladder game, and so there isn't a really compelling reason to switch when the new network has no one in it yet.

This isn't to say you can't copy an existing proof of work and succeed. After all, Facebook replaced social networks like MySpace and Friendster that came before it, and in the real world, new money sometimes becomes the new old money. You can build a better status game or create a more valuable form of status. Usually when such displacement occurs, though, it does so along the other dimension of pure utility.

For example, we have multiple messaging apps that became viable companies just by capturing a particular geographic market through localized network effects. We don't have one messaging app to rule them all in the world, but instead a bunch that have won in particular geographies. After all, the best messaging app in most countries or continents is the one most other people are already using there.

But in the same market? Copying a proof of work there is a tough road. The first mover advantage is also such that the leader with the dominant graph and the social capital of most value can look at any new features that fast followers launch and pull a reverse copy, grafting them into their more extensive and dominant incumbent graph.

In China, Tencent is desperate to cool off Bytedance's momentum in the short video space; Douyin is enemy number one. Tencent launched a clone but added a feature which allowed viewers to record a side-by-side video reaction in response to any video. It took about half a second for Bytedance to incorporate that into Douyin, and now it's a popular feature in TikTok the world over. If you can't change the proof of work competition as a challenger, copy and throttle is an effective strategy for the incumbent.

Not to mention that a wholesale ripoff of another app tends to be frowned upon as poor form. Even in China, with its reputation as the land of loose IP protection, users will tend to post dismissive reviews of blatant copycat apps in app stores. Chinese users may not be as aware of American apps that are knocked off in China, but within China, users don't just jump ship to out-and-out copycat apps. There has to be an incentive to overcome the switching costs, and that applies in China as it does elsewhere.

A few specifics of note here. I once wrote about social networks that the network's the thing; that is, the composition of the graph once a social network reaches scale is its most unique quality. I would update that today to say that it’s the unique combination of a feature and a specific graph that is any network’s most critical competitive advantage. Copying some network's feature often isn’t sufficient if you can’t also copy its graph, but if you can apply the feature to some unique graph that you earned some other way, it can be a defensible advantage.

Nothing illustrates this better than Facebook's attempts to win back the young from Snapchat by copying some of the network's ephemeral messaging features, or Facebook's attempt to copy TikTok with Lasso, or, well, Facebook's attempt to duplicate just about every social app with any traction anywhere. The problem with copying Snapchat is that, well, the reason young people left Facebook for Snapchat was in large part because their parents had invaded Facebook. You don't leave a party with your classmates to go back to one your parents are throwing just because your dad brings in a keg and offers to play beer pong.

The pairing of Facebook's gigantic graph with just about any proof of work from another app changes the very nature of that status game, sometimes in undesirable ways. Do you really want your coworkers and business colleagues and family and friends watching you lip synch to "Hot in Herre" by Nelly on Lasso? Facebook was rumored to be contemplating a special memes tab to try to woo back the young, which, again, completely misunderstands how the young play the meme status game. At last check that plan had been shelved.

Of course, the canonical Facebook feature grab that pundits often cite as having worked is Instagram's copy of Snapchat's Stories format. As I've written before, I think the Stories format is a genuine innovation on the social modesty problem of social networks. That is, all but the most egregious showoffs feel squeamish about publishing too much to their followers. Stories, by putting the onus on the viewer to pull that content, allows everyone to publish away guilt-free, without regard for the craft that regular posts demand in the ever escalating game that is life publishing. In a world where algorithmic feeds break up your sequence of posts, Stories also allow gifted creators to create sequential narratives.

Thus Stories is inherently about lowering the publishing hurdle for users and about a new method of storytelling, and any multi-sided network seeing declining growth will try grafting it onto its own network at some point just to see if it solves supply-side social modesty.

Ironically, as services add more and more filters and capabilities into their story functionality, we see the proof of work game in Stories escalating. Many of the Instagram Stories today are more elaborate and time-consuming to publish than regular posts; the variety of filters and stickers and GIFs and other tools in the Stories composer dwarfs the limited filters available for regular Instagram posts. What began as a lighter weight posting format is now a more sophisticated and complex one.

You can take the monkey out of the status-seeking game, but you can't take the status-seeking out of the monkey.

The Greatest Social Capital Creation Event in Tech History

In the annals of tech, and perhaps the world, the event that created the greatest social capital boom in history was the launch of Facebook's News Feed.

Before News Feed, if you were on, say, MySpace, or even on Facebook before News Feed launched, you had to browse around to find all the activity in your network. Only a demographic of a particular age will recall having to click from one profile to another on MySpace while stalking one’s friends. It almost seems comical in hindsight that we'd impose such a heavy UI burden on social media users. Can you imagine if, to see all the new photos posted in your Instagram network, you had to click through each profile one by one to see if they’d posted any new photos? I feel like my parents talking about how they had to walk miles to grade school through winter snow wearing moccasins of tree bark when I complain about the undue burden of social media browsing before the News Feed, but it truly was a monumental pain in the ass.

By merging all updates from all the accounts you followed into a single continuous surface and having that serve as the default screen, Facebook News Feed simultaneously increased the efficiency of distribution of new posts and pitted all such posts against each other in what was effectively a single giant attention arena, complete with live updating scoreboards on each post. It was as if the panopticon inverted itself overnight, as if a giant spotlight turned on and suddenly all of us performing on Facebook for approval realized we were all in the same auditorium, on one large, connected infinite stage, singing karaoke to the same audience at the same time.

It's difficult to overstate what a momentous sea change it was for hundreds of millions, and eventually billions, of humans who had grown up competing for status in small tribes, to suddenly be dropped into a talent show competing against EVERY PERSON THEY HAD EVER MET.

Predictably, everything exploded. The number of posts increased. The engagement with said posts increased. This is the scene in a movie in which, having launched something, a bunch of people stand in a large open war room waiting, and suddenly a geek staring at a computer goes wide-eyed, exclaiming, "Oh my god." And then the senior ranking officer in the room (probably played by a scowling Ed Harris or Kyle Chandler) walks over to look at the screen, where some visible counter is incrementing so rapidly that the absolute number of digits is incrementing in real time as you look at it, because films have to make a plot development like this brain dead obvious to the audience. And then the room erupts in cheers while different people hug and clap each other on the back, and one random extra sprints across the screen in the background, shaking a bottle of champagne that explodes and ejaculates a stream of frothy bubbly through the air like some capitalist money shot that inspires, later, a 2,000 word essay from Žižek.

Of course, users complained about News Feed at first, but their behavior belied their words, something that would come to haunt Facebook later when it took it as proof that users would always just cry wolf and that similar changes in the future would be the right move regardless of public objections.

Back in those more halcyon times, though, News Feed unleashed a gold rush for social capital accumulation. Wow, that post over there has ten times the likes that my latest does! Okay, what can I learn from it to use in my next post? Which of my content is driving the most likes? We talk about the miracles of machine learning in the modern age, but as social creatures, humans are no less remarkable in their ability to decipher and internalize what plays well to the peanut gallery.

Stories of teens A/B testing Instagram posts, yanking those which don't earn enough likes in the first hour, are almost beyond satire; a show like Black Mirror often just resorts to episodes that show things that have already happened in reality. The key component of the 10,000 hour rule of expertise is the idea of deliberate practice, the type that provides immediate feedback. Social media may not be literally real-time in its feedback, but it's close enough, and the scope of reach is orders of magnitude beyond that of any social performance arena in history. We have a generation now that has been trained through hundreds of thousands, perhaps millions of social media reps on what engages people on which platforms. In our own way, we are all Buzzfeed. We are all Kardashians.

The tighter the feedback loop, the quicker the adaptation. Compare early Twitter to modern Twitter; it's like going from listening to your coworkers at a karaoke bar to watching Beyonce play Coachella. I wrote once that any Twitter account that gained enough followers would end up sounding like a fortune cookie, but I underestimated how quickly everyone would arrive at that end state.

As people start following more and more accounts on a social network, they reach a point where the number of candidate stories exceeds their capacity to see them all. Even before that point, the signal-to-noise ratio may decline to the point that it affects engagement. Almost any network that hits this inflection point turns to the same solution: an algorithmic feed.

Remember, status derives value from some type of scarcity. What is the one fundamental scarcity in the age of abundance? User attention. The launch of an algorithmic feed raises the stakes of the social media game. Even if someone follows you, they might no longer see every one of your posts. As DiCaprio said in Django Unchained, “You had my curiosity, but now, under the algorithmic feed, you have to earn my attention.”

As humans, we intuitively understand that some galling percentage of our happiness with our own status is relative. What matters is less our absolute status than how we are doing compared to those around us. By taking the scope of our status competitions virtual, we scaled them up in a way that we weren't entirely prepared for. Is it any surprise that seeing other people signaling so hard about how wonderful their lives are decreases our happiness?

As evidence of how anomalous a change this has been for humanity, witness how many celebrities continue to be caught with a history of offensive social media posts that should obviously have been taken down long ago given shifting sensibilities. Kevin Hart, baseball players like Josh Hader, Trea Turner, and Sean Newcomb, and a litany of other public figures and their management teams didn't think to go back and scrub some of their earlier social media posts despite nothing but downside optionality.

Could social networks have chosen to keep likes and other such metrics about posts private, visible only to the recipient? Could we have kept this social capital arms race from escalating? Some tech CEO's now look back and, like Alan Greenspan, bemoan the irrational exuberance that led us to where we are now, but let's be honest, the incentives to lower interest rates on social capital in all these networks, given their goals and those of their investors, were just too great. If one company hadn’t flooded the market with status, others would have filled the void many times over.

A social network like Path attempted to limit your social graph size to the Dunbar number, capping your social capital accumulation potential and capping the distribution of your posts. The exchange, they hoped, was some greater transparency, more genuine self-expression. The anti-Facebook. Unfortunately, as social capital theory might predict, Path did indeed succeed in becoming the anti-Facebook: a network without enough users. Some businesses work best at scale, and if you believe that people want to accumulate social capital as efficiently as possible, putting a bound on how much they can earn is a challenging business model, as dark as that may be.

Why Social Capital Accumulation Skews Young

I'd love to see a graph of social capital assets under management by user demographic. I'd wager that we'd see that young people, especially those from their teens, when kids seem to be given their first cell phones, through early 20's, are those who dominate the game. My nephew can post a photo of his elbow on Instagram and accumulate a couple hundred likes; I could share a photo of myself in a conga line with Barack Obama and Beyonce while Jennifer Lawrence sits on my shoulders pouring Cristal over my head and still only muster a fraction of the likes my nephew does posting a photo of his elbow. It's a young person's game, and the Livejournal/Blogger/Flickr/Friendster/MySpace era in which I came of age feels like the Precambrian era of social in comparison.

While we're all status-seeking monkeys, young people tend to be the tip of the spear when it comes to catapulting new Status as a Service businesses, and perhaps they always will be. A brief aside here on why this tends to hold.

One reason is that older people tend to have built up more stores of social capital. A job title, a spouse, maybe children, often a house or some piece of real estate, maybe a car, furniture that doesn't require you to assemble it on your own, a curriculum vitae, one or more college degrees, and so on.

[This differs by culture, of course. In the U.S., where I grew up, one’s job is the single most important status carrier which is why so many conversations there begin with “What do you do?”]

Young people are generally social capital poor unless they've lucked into a fat inheritance. They have no job title, they may not have finished college, they own few assets like homes and cars, and often if they've finished college they're saddled with substantial school debt. For them, the fastest and most efficient path to gaining social capital, while they wait to level up enough to win at more grown-up games like office politics, is to ply their trade on social media (or video games, but that’s a topic for another day).

Secondly, because of their previously accumulated social capital, adults tend to have more efficient means of accumulating even more status than playing around online. Maintenance of existing social capital stores is often a more efficient use of time than fighting to earn more on a new social network given the ease of just earning interest on your sizeable status reserves. That's just math, especially once you factor in loss aversion.

Young people look at so many of the status games of older folks—what brand of car is parked in your garage, what neighborhood can you afford to live in, how many levels below CEO are you in your org—and then look at apps like Vine and Musical.ly, and they choose the only real viable and thus optimal path before them. Remember the second tenet: people maximize their social capital the most efficient way possible. Both the young and old pursue optimal strategies.

That so much social capital for the young comes in the form of followers, likes, and comments from peers and strangers shouldn't lessen its value. Think back to your teen years and try to recall any real social capital that you could accumulate on such a scale. In your youth, the approval of peers and others in your demographic tends to matter more than just about anything, and social media has extended the reach of the youth status game in just about every direction possible.

Furthermore, old people tend to be hesitant about mastering new skills in general, including new status games, especially if they involve bewildering new technology. There are many reasons, including having to worry about raising children and other such adult responsibilities and just plain old decay in neural malleability. Perhaps old dogs don't learn new tricks because they are closer to death, and the period to earn a positive return on that investment is shorter. At some point, it's not worth learning any new tricks at all, and we all turn into the brusque old lady in every TV show, e.g. Maggie Smith in Downton Abbey, dropping withering quips about the follies of humanity all about us. I look forward to this period of my life when, through the unavoidable spectre of mortality, I will naturally settle into my DGAF phase of courageous truth-telling.

Lastly, young people have a surplus of something which most adults always complain they have too little of: time. The hurdle rate on the time of the young is low, and so they can afford to spend some of that surplus exploring new social networks, mining them to see if the social capital returns are attractive, whereas most adults can afford to wait until a network has runaway product-market fit to jump in. The young respond to all the status games of the world with a consistent refrain: "If you are looking for ransom I can tell you I don't have money, but what I do have are a very particular set of skills. Among those are the dexterity and coordination to lip synch to songs while dancing Blocboy JB's Shoot in my bedroom, and the time to do it over and over again until I nail it" (I wrote this long before recent events in which Liam Neeson lit much of his social capital on fire, vacating the “wronged and vengeful father with incredible combat and firearms skills” role to the next aging male star).

These modern forms of social capital are like new money. Not surprisingly, then, older folks, who are worse at accumulating these new badges than the young, often scoff at those kids wasting time on those apps, just as old money from the Upper West and Upper East Sides of New York look down their noses at those hoodie-wearing new money billionaire philistines of Silicon Valley.

The exception might be those who grew up in this first golden age of social media. For some of this generation’s younger NBA players, who were on Instagram from the time they got their first phone, posting may be second nature, a force of habit they bring with them into the league. Witness how many young NBA stars track their own appearances on House of Highlights the way stars of old looked for themselves on SportsCenter.

If this generational divide on social media between the old and the young is simply a one-time anomaly given the recent birth of social networks, and if future generations will be virtual status-seeking experts from womb to tomb, then capturing users in their formative social media years becomes even more critical for social networks.

“I contain multitudes” (said the youngblood)

Incidentally, teens and twenty-somethings, more so than the middle-aged and elderly, tend to juggle more identities. In middle and high school, kids have to maintain an identity among classmates at school, then another identity at home with family. Twenty-somethings craft one identity among coworkers during the day, then another among their friends outside of work. Often those spheres have differing status games, and there is some penalty to merging those identities. Anyone who has ever sent a text meant for their schoolmates to their parents, or emailed a boss or coworker something meant for their happy hour crew knows the treacherous nature of context collapse.

Add to that this younger generation's preference for and facility with visual communication and it's clear why the preferred social network of the young is Instagram and the preferred messenger Snapchat, both preferable to Facebook. Instagram because of the ease of creating multiple accounts to match one's portfolio of identities, Snapchat for its best-in-class ease of visual messaging privately to particular recipients. The expiration of content, whether explicitly executed on Instagram (you can easily kill off a meme account after you've outgrown it, for example), or automatically handled on a service like Snapchat, is a must-have feature for those for whom multiple identity management is a fact of life.

Facebook, with its explicit attachment to the real world graph and its enforcement of a single public identity, is just a poor structural fit for the more complex social capital requirements of the young.

Common Social Network Arcs

It's useful to look at some of the common paths that social networks traverse over time using our two-axis model. Not all of them took the same path to prominence. Doing so also helps illuminate the most productive strategies for each to pursue future growth.

First utility, then social capital

Come for the tool, stay for the network

This is the well-known “come for the tool, stay for the network” path. Instagram is a good example here given its growth from filter-driven utility to social photo sharing behemoth. Today, I can't remember the last time I used an Instagram filter.

In the end, I think most social networks, if they've made this journey, need to make a return to utility to be truly durable. Commerce is just one area where Instagram can add more utility for its users.

First social capital, then utility

Lots of the internet’s great resources were built off people seeking a hit of fame and recognition

Come for the fame, stay for the tool?

Foursquare was this for me. In the beginning, I checked in to try to win mayorships at random places. These days, Foursquare is trying to become more of a utility, with information on places around you, rather than just a quirky distributed social capital game. Heavier users may have thoughts on how successful that has been, but in just compiling a database of locations that other apps can build off of, they have built up a store of utility.

IMDb, Wikipedia, Reddit, and Quora are more prominent examples here. Users come for the status, and help to build a tool for the commons.

Utility, but no social capital

Plenty of huge social apps are almost entirely utilitarian, but it’s a brutally competitive quadrant

Some companies manage to create utility for a network but never succeed at building any real social capital of note (or don’t even bother to try).

Most messaging apps fall into this category. They help me to reach people I already know, but they don't introduce me to too many new people, and they aren't really status games with likes and follows. Skype, Zoom, FaceTime, Google Hangouts, Viber, and Marco Polo are examples of video chat apps that fit this category as well. While some messaging apps are trying to add features like Stories that start to veer into the more performative realm of traditional social media, I’m skeptical they’ll ever see traction doing so when compared to purer Status as a Service apps like Instagram.

This bottom right quadrant is home to some businesses with over a billion users, but in minimizing social capital and competing purely on utility-derived network effects, this tends to be a brutally competitive battleground where even the slimmest moat is fought for with blood and sweat, especially in the digital world where useful features are trivial to copy.

Social capital, but little utility

When a social network loses heat before it has built utility, the fall can come as quickly as the rise

One could argue Foursquare actually lands here, but the most interesting company to debate in this quadrant is clearly Facebook. I'm not arguing that Facebook doesn't have utility, because clearly it does in some obvious ways. In some markets, it is the internet. Messenger is clearly a useful messaging utility for over a billion people.

However, the U.S. is a critical market for Facebook, especially when it comes to monetization, and so it's worth wondering how things might differ for Facebook today if it had succeeded in pushing further out on the utility axis. Many people I know have just dropped Facebook this past year with little impact on their day-to-day lives. Among the largest and most obvious utility categories, like commerce or payments, Facebook isn't a top-tier player in any except advertising.

This comparison is especially stark if we compare it to the social network to which it's most often contrasted.

Both social capital and utility simultaneously

The holy grail for social networks is to generate so much social capital and utility that it ends up in that desirable upper right quadrant of the 2x2 matrix. Most social networks will offer some mix of both, but none more so than WeChat.

While I hear of people abandoning Facebook and never looking back, I can't think of anyone in China who has just gone cold turkey on WeChat. It's a testament to how much of an embedded utility WeChat has become that to delete it would be a massive inconvenience for most citizens.

Just look at the list of services in the WeChat Pay or Alipay menu for the typical Chinese user and consider that Facebook isn't a payment option for any of them.

Of course, the competitive context matters. Facebook faced much stiffer competition in these categories than WeChat did; for Facebook to build a better mousetrap in any of these, the requirements were much higher than for WeChat.

Take payments, for example. The Chinese largely skipped credit cards, for a whole host of reasons. In part it was due to a cultural aversion to debt; in part it was because Visa, Mastercard, and American Express weren't allowed into China, where they would certainly have marketed their cards as aggressively as they always do. That meant Alipay and WeChat Pay launched competing primarily with cash and all its familiar inconveniences. Compare that to, say, Apple Pay trying to displace the habit of pulling out a credit card in the U.S., especially given how many people are addicted to credit card points and miles (airline frequent flier programs being another testament to the power of status to influence people's decision-making).

Making a real dent in new categories like commerce and payments will require a long-term mindset and a ton of resources on the part of Facebook and its subsidiaries like WhatsApp and Instagram. Past efforts to, for example, improve Facebook search, position Facebook as a payment option, and introduce virtual assistants on Messenger seem to have been abandoned. Will new efforts like Facebook's cryptocurrency project or Instagram's push into commerce be given a sufficiently long leash?

Social Network Asymptote 1: Proof of Work

How do you tell when a Status as a Service business will stop growing? What causes networks to suddenly hit that dreaded upper shoulder in the S-curve if, according to Metcalfe's Law, the value of a network grows in proportion to the square of its users? What are the missing variables that explain why networks don’t keep growing until they’ve captured everyone?
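
For readers who like to see the tension written out, here are the two standard textbook forms at play, with generic symbols of my choosing rather than anything specific to a given network:

```latex
% Metcalfe's Law: network value scales with the square of connected users
V(n) \propto n^2

% Adoption in practice traces a logistic S-curve with carrying capacity L
% (the asymptote), growth rate k, and inflection point t_0
n(t) = \frac{L}{1 + e^{-k(t - t_0)}}
```

In these terms, the question above is really: what sets the carrying capacity L?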

The reasons are numerous, but let's focus on social capital theory. To return to our cryptocurrency analogy, the choice of proof of work is by definition an asymptote, because the skills it selects for are not evenly distributed.
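
For readers who haven't seen the cryptocurrency version of the term, here is a minimal sketch of hash-based proof of work in Python, purely to ground the analogy; the difficulty level and function name are my own illustrative choices, not drawn from any particular coin:

```python
import hashlib
from itertools import count

def mine(block_data: str, difficulty: int = 4) -> tuple:
    """Find a nonce such that sha256(block_data + nonce) starts with
    `difficulty` zero hex digits. The work is costly to produce but
    trivial for anyone else to verify, which is the property the
    social-network analogy borrows."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

nonce, digest = mine("example block")
print(nonce, digest)
```

Raising the difficulty shrinks the pool of people willing and able to do the work; the claim here is that a network's creative barrier to entry acts on its pool of contributors the same way.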

To take a specific example, since it's the app du jour, let's look at the app formerly known as Musical.ly, TikTok.

You've probably watched a TikTok video, but have you tried to make one? My guess is that many of you have not and never will (but if you have, please send me a link). This is no judgment, I haven’t either.

You may possess, in your estimation, too much self-dignity to wallow in cringe. Your arthritic joints may not be capable of executing Orange Justice. Whatever the reason, TikTok's creator community is ultimately capped by the nature of its proof of work, no matter how ingenious its creative tools. The same is true of Twitter: the number of people who enjoy crafting witty 140- and now 280-character info nuggets is finite. Every network has some ceiling on its ultimate number of contributors, and it is often a direct function of its proof of work.

Of course, the value and total user size of a network is not just a direct function of its contributor count. Whether you believe in the 1/9/90 rule of social networks or not, it’s directionally true that any network has value to people besides its creators. In fact, for almost every network, the number of lurkers far exceeds the number of active participants. Life may not be a spectator sport, but a lot of social media is.

This isn’t to say that proof of work is bad. In fact, coming up with a constraint that unlocks the creativity of so many people is exactly how Status as a Service businesses achieve product-market fit. Constraints force the type of compression that often begets artistic elegance, and forcing creatives to grapple with a constraint can foster the type of focused exertion that totally unconstrained exploration fails to inspire.

Still, a ceiling is a ceiling. If you want to know the terminal value of a network, the type of proof of work is a key variable to consider. If you want to know why Musical.ly stopped growing and sold to Bytedance, why Douyin will hit a ceiling of users in China (if it hasn’t already), or what the cap of active users is for any social network, first ask yourself how many people have the skill and interest to compete in that arena.
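
As a purely illustrative back-of-envelope version of that question (every number below is invented, not an estimate for any real network), the arithmetic might look like this:

```python
def user_ceiling(addressable_population: int,
                 share_with_skill: float,
                 share_with_interest: float,
                 creator_fraction: float = 0.01) -> int:
    """Rough ceiling on total users implied by a ceiling on contributors.

    Assumes a 1/9/90-style split in which creators are roughly 1% of the
    network. All inputs are hypothetical.
    """
    creator_ceiling = addressable_population * share_with_skill * share_with_interest
    return int(creator_ceiling / creator_fraction)

# 1B people reachable, 5% with the skill, 10% with the interest
print(user_ceiling(1_000_000_000, 0.05, 0.10))  # prints 500000000
```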

Social Network Asymptote 2: Social Capital Inflation and Devaluation

More terrifying to investors and employees than an asymptote is collapse. Recall the cautionary myth of the fall of Myspace, named after the little-known Greek god of vanity Myspakos (Editor's note: I made that up, it's actually Narcissus). Why do some social networks, given Metcalfe's Law and its related network effects theories, not only stop growing but, even worse, contract and wither away?

To understand the inherent fragility in Status as a Service businesses, we need to understand the volatility of status.

Social Capital Interest Rate Hikes

One of the common traps is the winner's curse for social media. If a social network achieves enough success, it grows to a size that requires the imposition of an algorithmic feed in order to maintain high signal-to-noise for most of its users. It's akin to the Fed trying to manage inflation by raising interest rates.

The problem, of course, is that this now diminishes the distribution of any single post from any single user. One of the most controversial of such decisions was Facebook's change to dampen how much content from Pages would be distributed into the News Feed.
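
A toy illustration of the mechanism, and only that (this is not Facebook's actual ranking system, and every field and coefficient below is invented): a single demotion multiplier in an algorithmic feed shrinks the expected reach of every post from a given class of author.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_type: str  # "friend" or "page"
    affinity: float   # how much this viewer interacts with the author
    recency: float    # decays as the post ages

# Hypothetical multipliers; dialing down the "page" coefficient is the
# feed equivalent of the interest rate hike described above.
DEMOTION = {"friend": 1.0, "page": 0.3}

def score(post: Post) -> float:
    return post.affinity * post.recency * DEMOTION[post.author_type]

posts = [
    Post("friend", affinity=0.4, recency=0.9),
    Post("page", affinity=0.8, recency=0.9),
]
for post in sorted(posts, key=score, reverse=True):
    print(post.author_type, round(score(post), 3))  # friend 0.36, page 0.216
```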

Many institutions, especially news outlets, had turned to Facebook to access some sweet sweet eyeball inventory in News Feeds. They devised all sorts of giveaways and promotions to entice people to follow their Facebook Pages. After gaining followers, a media company had a free license to publish and publish often into their News Feeds, an attractive proposition considering users were opening Facebook multiple times per day. For media companies, who were already struggling to grapple with all the chaos the internet had unleashed on their business models, this felt like upgrading from waving stories at passersby on the street to stapling stories to the inside of eyelids the world over, several times a day. Deterministic, guaranteed eyeballs.

Then, one day, Facebook snapped its fingers like Thanos and much of that dependable reach evaporated into ash. No longer would every one of your Page followers see every one of your posts. Facebook did what central banks do to combat inflation and raised interest rates on borrowing attention from the News Feed.

Was such a move inevitable? Not necessarily, but it was always likely. That’s because there is one scarce resource which is a natural limit on every social network and media company today, and that is user attention. That a social network shares some of that attention with its partners will always be secondary to accumulating and retaining that attention in the first place. Facebook, for example, must always guard against the tragedy of the commons when it comes to News Feed. Saving media institutions is a secondary consideration, if that.

Social Capital Deflation: Scarcity Precarity or the Groucho Marx Conundrum

Another existential risk that is somewhat unique to social networks is this: network effects are powerful, but ones which are social in nature have the unfortunate quality of being just as ferocious in reverse.

In High Growth Handbook by Elad Gil, Marc Andreessen notes:

I think network effects are great, but in a sense they’re a little overrated. The problem with network effects is they unwind just as fast. And so they’re great while they last, but when they reverse, they reverse viciously. Go ask the MySpace guys how their network effect is going. Network effects can create a very strong position, for obvious reasons. But in another sense, it’s a very weak position to be in. Because if it cracks, you just unravel. I always worry when a company thinks the answer is just network effects. How durable are they?

Why do social network effects reverse? Utility, the other axis by which I judge social networks, tends to be uncapped in value. It's rare to describe a product or service as having become too useful. That is, it's hard to over-serve on utility. The more people that accept a form of payment, the more useful it is, like Visa or Mastercard or Alipay. People don’t stop using a service because it’s too useful.

Social network effects are different. If you've lived in New York City, you've likely seen, over and over, nightclubs that are hot for months suddenly go out of business just a short while later. Many types of social capital have qualities which render them fragile. Status relies on coordinated consensus to define the scarcity that determines its value. Consensus can shift in an instant. Recall the friend in Swingers who, at every crowded LA party, quips, "This place is dead anyway." Or recall the wise words of noted sociologist Groucho Marx: "I don't care to belong to any club that will have me as a member."

The Groucho Marx effect doesn't kick in immediately. In the beginning, a status hierarchy requires lower status people to join so that the higher status people have a sense of just how far above the masses they reside. It's silly to order bottle service at Hakkasan in Las Vegas if no one is sitting on the opposite side of the velvet ropes; a leaderboard with just a single high score is meaningless.

However, there is some tipping point of popularity beyond which a restaurant, club, or social network can lose its cool. When Malcolm Gladwell inserted the term "tipping point" into popular vernacular, he didn't specify which way things were tipping. We tend to glamorize the tipping into rapid diffusion, the toe of the S-curve, but in status games like fashion the arc of popularity traces not an S-curve but a bell curve. At the top of that bell curve, you reach the less glamorous tipping point, the one before the plummet.

When the definition of status is distributed, often one minority has disproportionate sway. If that group, the cool kids, pulls the ripcord, everyone tends to follow them to the exits. In fact, it’s usually the most high status or desirable people who leave first, the evaporative cooling effect of social networks. At that point, that product or service better have moved as far out as possible on the utility axis or the velocity of churn can cause a nose bleed.

[Mimetic desire is a cruel mistress. Girard would've had a field day with the Fyre Festival. Congratulations Billy McFarland, you are the ritual sacrifice with which we cleanse ourselves of the sin of coveting thy influencer’s bounty.]

Fashion is one of the most interesting industries for having understood this recurring boom and bust pattern in network effects and taken ownership of its own status devaluation cycles. Some strange cabal of magazine editors and fashion designers decides each season to arbitrarily declare new styles the fashion of the moment, retiring previous recommendations before they grow stale. There is usually no real change in utility at all; functionally, the shirt you buy this season doesn't do anything the shirt you bought last season can't do equally well. The industry as a whole is simply pulling the frontier of scarcity forward like a wave we're all trying to surf.

This season, the color of the moment might be saffron. Why? Because someone cooler than me said so. Tech tends to prioritize growth at all costs given the non-rival, zero marginal cost qualities of digital information. In a world of abundance, that makes sense. However, technology still has much to learn from industries like fashion about how to proactively manage scarcity, which is important when goods are rivalrous. And since many types of status are relative, status is, by definition, rivalrous. There is some equivalent of crop rotation theory that applies to social networks, but it's not part of the standard tech playbook yet.

A variant of this type of status devaluation cascade can be triggered when a particular group joins up. This is because the stability of a status lattice depends just as much on the composition of the network as on its total size. A canonical example in tech was the youth migration out of Facebook when their parents signed on in force. Because of the incredible efficiency of News Feed distribution, Facebook became a de facto surveillance apparatus for the young: Mommy and Daddy are watching, as well as future universities and employers and dates who will time travel back and scour your profile someday. As Facebook became less attractive as a platform for the young, many of them flocked to Snapchat as their new messaging solution, its ephemeral nature offering built-in security and its UX opacity acting as a gate against clueless seniors.

I've written before about Snapchat's famously opaque Easter Egg UI as a sort of tamper-proof lid for parents, but if we combine social network utility theory with my post on selfies as a second language, it's also clear that Snapchat is a suboptimal messaging platform for older people whose preferred medium of communication remains text. Snapchat opens to a camera. If you want to text someone, it's extra work to swipe to the left pane to reach the text messaging screen.

I would be shocked if Facebook did not, at one point, contemplate a version of its app that opened to the camera first, instead of the News Feed, given how many odd clones of other apps it has considered in the past. If so, it's good they never shipped it, because for young people, publishing to a graph that still contained their parents would've been prohibitive, while for old folks who aren't as biased towards visual mediums, such a UI would've been suboptimal. It would've been a disastrous lose-lose for Facebook.

Patrick Collison linked to an interesting paper (PDF) on network effects traps in the physical world. They exist in the virtual world as well, and Status as a Service businesses are particularly fraught with them. Another instance is path-dependent user composition. A fervent early adopter group can define who a new social network seems to be for, merely by flooding the service with content they love. Before concerted efforts to personalize the front page more quickly, Pinterest seemed like a service targeted mostly towards women, even though its basic toolset is useful to many men as well. Because a new user's front page usually drew upon pins from their friends already on the service, the earliest cohorts, which leaned female, dominated most new users' feeds. My earliest Pinterest homepage was an endless collage of makeup, women's clothing, and home decor because those happened to be some of the things my friends were pinning for a variety of projects.

Groucho Marx was ahead of his time as a social capital philosopher, but we can build upon his work. To his famous aphorism we should add some variants. When it comes to evaporative cooling, two come to mind: “I don’t want to belong to any club that will have those people as a member” and “I don’t want to belong to any club that those people don’t want to be a member of.”

Mitigating Social Capital Devaluation Risk, and the Snapchat Strategy

In a leaked memo late last year, Evan Spiegel wrote about how one of the core values of Snapchat is to make it the fastest way to communicate.

The most durable way for us to grow is by relentlessly focusing on being the fastest way to communicate.

Recently I had the opportunity to use Snapchat v5.0 on an iPhone 4. It had much of Bobby's original code and many of my original graphics. It was way faster than the current version of Snapchat running on my iPhone X.

In our excitement to innovate and bring many new products into the world, we have lost the core of what made Snapchat the fastest way to communicate.

In 2019, we will refocus our company on making Snapchat the fastest way to communicate so that we can unlock the core value of our service for the billions of people who have not yet learned how to use Snapchat. If we aren't able to unlock the core value of Snapchat, we won't ever be able to unlock the full power of our camera.

This will require us to change the way that we work and put our core product value of being the fastest way to communicate at the forefront of everything we do at Snap. It might require us to change our products for different markets where some of our value-add features detract from our core product value.

This clarifies Snapchat's strategy on the three axes of my social media framework: Snapchat intends to push further out on the utility axis at the expense of the social capital axis, which, as we've noted before, is volatile ground on which to build a long-term business.

Many will say, especially Snapchat itself, that it has been the anti-Facebook all along. Because it has no likes, it liberates people from destructive status games. To believe that is to underestimate the ingenuity of humanity in its ability to weaponize any network for status games.

Anyone who has studied kids using Snapchat knows that it's just as integral a part of high school status and FOMO wars as Facebook, and arguably more so now that those kids largely don't use Facebook. The only other social media app that is as sharp a stick is Instagram, which has, it's true, more overt social capital accumulation mechanisms. Still, the idea that kids use Snapchat as some pure messaging utility is laughable and makes me wonder if people have forgotten what teenage school life was like. Whether you watch people attend a party you weren't invited to on Instagram or on someone's Snap, you still feel terrible.

Remember Snapchat's original Best Friends list? I'm going to guess many of my readers don't, because, as noted earlier, old people probably didn't play that status game, if they'd even figured out how to use Snapchat by that point. This was just about as pure a status game feature as could be engineered for teens. Not only did it show the top three people you Snapped with most frequently, you could look at who the top three best friends were for any of your contacts. Essentially, it made the hierarchy of everyone's “friendships” public, making the popularity scoreboard explicit.

I’m glad this didn’t exist when I was in high school, I really didn’t need metrics on how much of a loser I was

You don’t want to know what the proof of work is to achieve Super BFF-dom

As with aggregate follower counts and likes, the Best Friends list was a mechanism for people to accumulate a very specific form of social capital. From a platform perspective, however, there's a big problem with this feature: each user could only have one best friend. It put an artificial ceiling on the amount of social capital one could compete for and accumulate.

In a clever move to uncap social capital accumulation and turn a zero-sum game into a positive-sum one, broadening the number of users working hard and engaging, Snapchat deprecated the very popular Best Friends list and replaced it with streaks.

If you’ve never seen those numbers and emojis on the right of your Snapchat contacts list, no one loves you. Just kidding, it just means you’re old.

If you and a friend Snap back and forth on consecutive days, you build up a streak, which is tracked in your friends list. Young people quickly threw their hearts and souls into building and maintaining streaks with their friends. This was literally proof of work as proof of friendship, quantified and tracked.
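
The bookkeeping behind a streak is simple enough to sketch. The rules below are my own simplification, not Snapchat's published logic: the streak extends on any day both sides snap each other, and it dies if a full day passes without a mutual exchange.

```python
from datetime import date
from typing import Optional

class Streak:
    """Day-based streak between two users, "a" and "b". Illustrative only."""

    def __init__(self) -> None:
        self.count = 0
        self.current_day: Optional[date] = None      # day we're collecting snaps for
        self.snapped_today: set = set()               # who has snapped on current_day
        self.last_mutual_day: Optional[date] = None   # last day both sides snapped

    def record_snap(self, sender: str, today: date) -> int:
        if today != self.current_day:
            # Rolling over to a new day: if more than one day has passed since
            # the last mutual exchange, the streak is broken.
            if self.last_mutual_day and (today - self.last_mutual_day).days > 1:
                self.count = 0
            self.current_day = today
            self.snapped_today = set()
        self.snapped_today.add(sender)
        if self.snapped_today == {"a", "b"} and self.last_mutual_day != today:
            self.count += 1
            self.last_mutual_day = today
        return self.count

s = Streak()
s.record_snap("a", date(2019, 2, 1))
print(s.record_snap("b", date(2019, 2, 1)))  # 1: both sides snapped today
print(s.record_snap("a", date(2019, 2, 2)))  # still 1 until "b" replies
```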

Streaks, of course, have the wonderful quality of being unbounded. You can maintain as many streaks as you like. If you don't think social capital has value, you've never seen, as I have, a young person sobbing over having to go on vacation without their phone, or to somewhere without cell or wifi access, only to see all their streaks broken. Some kids have resorted, when forced to go abroad on a vacation, to leaving their phone with a friend who helps to keep all the streaks alive, like some sort of social capital babysitter or surrogate.

What's hilarious is how efficiently young people maintain streaks. It's a daily ritual that often consists of just quickly running down your friend list and snapping something random, anything, just to increment the streak count. My nephew often didn't even bother framing the shot; most of his streak-maintenance snaps were blurry pics of the side of his elbow, half his shoulder, things like that.

Of course, as evidence of the fragility of social capital structures, streaks have started to lose heat. Many younger users of Snapchat no longer bother with them. Running social capital games will always be volatile, prone to sudden and massive deflationary events, but while they work, they're a hell of a drug. They can also be useful; for someone Snapping frequently, as most teens do, having a best friends list sorted to the top of your distribution list is a huge time-saver. Social capital and utility often can't be separated cleanly.

Still, given the precarious nature of status, and given the existence of Instagram which has always been a more unabashed social capital accumulation service, it’s not a bad strategy for Snapchat to push out towards increased utility in messaging instead. The challenge, as anyone competing in the messaging space knows, is that creating any durable utility advantage is brutally hard. In the game theory of tech competition, it's best to assume that any feature that can be copied will. And messaging may never be, from a profit perspective, the most lucrative of businesses.

As a footnote, Snapchat is also playing on the entertainment axis with its Discover pane. Almost all social networks of some scale will play with some mix of social capital, utility, and entertainment, but each chooses how much to emphasize each dimension.

Lengthening the Half-life of Status Games

The danger of having a proof of work burden that doesn't change is that eventually, everyone who wants to mine for that social currency will have done so, and most of it will be depleted. At that point, the amount of status-driven potential energy left in the social network flattens. If, at that inflection, the service hasn't made headway in adding a lot of utility, the network can go stale.

One way to combat this, which the largest social networks tend to do better than others, is add new forms of proof of work which effectively create a new reserve of potential social capital for users to chase. Instagram began with square photos and filters; it's since removed the aspect ratio constraint, added video, lengthened video limits, and added formats like Boomerang and Stories. Its parent company, Facebook, arguably has broadened the most of any social network in the world, going from a text-based status update tool for a bunch of Harvard students to a social network with so many formats and options that I can’t keep track of them all. These new hurdles are like downloadable content in video games, new levels to spice up a familiar game.

Doing so is a delicate balance, because it’s quite possible that Facebook is so many things to so many people that it isn't really anything to anyone anymore. It is hard to be a club that admits everyone but still wants to offer a coherent status ladder. You can argue Facebook doesn't want to be in the status game, but if so, it had better add a lot more utility.

Video games illuminate the proof of work cycle better than almost any other category; they are the drosophila of this type of analysis, given their rapid life cycles and overt skill-versus-reward tradeoffs. Why is it, for example, that big hit games tend to have a life cycle of about 18 months?

A new game offers a whole new set of levels and challenges, and players jump into the status competition with gusto. But, eventually, skill differentiation tends to sort the player base cleanly. Players rise to the level of their mastery and plateau. Simultaneously, players become overly familiar with the game's challenges; the dopamine hit of accomplishment dissipates.

A franchise like, say, Call of Duty, learns to manage this cycle by investing hundreds of millions of dollars to issue a new version of the game regularly. Each game offers familiarity but a new set of levels and challenges and environments. It's the circle of life.

Some games can lengthen the cycle. For example, casino games in Vegas pay real money to set an attractive floor on the ROI of playing. Some MMORPGs offer other benefits to players, like a sense of community, which last longer than the pure skill challenge of playing the game. Looking at some of the longer-lasting video game franchises, like World of Warcraft, League of Legends, and Fortnite, reveals a lot about how a parallel industry has succeeded in lengthening the productive middle age of its top properties.

I suspect the frontier of social network strategy will draw more and more upon deep study of these adjacent and much older social capital games. Fashion, video games, religion, and society itself are some of the original Status as a Service businesses.

Why Some Companies Will Always Struggle with Social

Some people find status games distasteful. Despite this, everyone I know is engaged in multiple status games. Some people sneer at hashtag spamming on Instagram, but then retweet praise on Twitter. Others roll their eyes at photo albums of expensive meals on Facebook, but then submit research papers to prestigious journals in the hopes of being published. Parents show off photos of their children's performances at recitals, people preen in the mirror while assessing their outfits, employees flex on their peers in meetings, entrepreneurs complain about 30 under 30 lists while wishing to be on them, reporters check the Techmeme leaderboards; life is nothing if not a nested series of status contests.

Have I met a few people in my life who are seemingly above all status games? Yes, but they are so few as to be something akin to miracles, and damn them for making the rest of us feel lousy over our vanity.

The number of people who claim to be above status games exceeds the number who actually are. I believe their professed distaste to be genuine, but even if it isn't, the danger of their indignation is that they become blind to how their product functions, in some ways, as a Status as a Service business.

Many of our tech giants, in fact, are probably always going to be weak at social absent executive turnover or smart acquisitions. Take Apple, which has tried building out social features before. They built one in music, but it died off quickly. They've tried to add some social features to the photo album on iOS, though every time I've tried them out I end up more bewildered than anything else.

iMessage, Apple fans might proclaim! Hundreds of millions of users, a ton of usage among teens; isn't that proof that Apple can do social? Well, in a sense, but mostly on the utility side. Apple's social efforts tend to be social capital barren.

Since Apple positions itself as the leading advocate for user privacy, it will always be constrained on building out social features since many of them trade off against privacy. Not all of them do, and it’s possible a social network based entirely on privacy can be successful, but 1) it would be challenging and 2) it's not clear many people mind trading off some privacy for showing off their best lives online.

This is, of course, exactly why many people love and choose Apple, and they have more cash than they can spend. No one need feel sorry for Apple, and as is often the case, a company’s strengths and weaknesses stem from the same quality in their nature. I’d rather Apple continue to focus on building the best computers in the world. Still, it’s a false tradeoff to regard Apple’s emphasis on privacy as an excuse for awkward interactions like photo sharing on iOS.

The same inherent social myopia applies to Google which famously took a crack at building a social network of its own with Google+. Like Apple, the team in Mountain View has always seemed more suited to building out networks of utility rather than social capital. Google is often spoken of as a company where software engineers have the most power. Engineers, in my experience, are driven by logic, and status-centered products are distasteful or mysterious to them, often both. Google will probably always be weak at social, but as with Apple, they compensate with unique strengths.

Oddly enough, despite controlling one of the two dominant mobile platforms, Google has yet to launch a successful messaging app. That's about as utility-driven a social application as there is, akin to email, where Google does have sizeable market share with Gmail. It's a shame, as Google could probably use social as an added layer of utility in many of its products, especially Google Maps.

Amazon and Netflix both launched social efforts, though they've largely been forgotten. It's likely the attempts were premature, pushed out into the world before either company had sufficient scale to enable positive flywheel effects. It's hard enough launching a new social network, but it's even harder to launch social features built around behaviors like shopping or renting DVDs through the mail, which occur infrequently. Neither company's social efforts were the most elegantly designed, either (Facebook is underrated for its ability to launch a social product that scaled to billions of users; its design team has a mastery of maintaining ease of use for users of all cultures and ages).

Given the industrialization of fake reviews, and given how many people have Prime accounts, Amazon could build a social service simply to facilitate product recommendations and reviews from people you know and trust; I increasingly turn a skeptical eye to both extremely positive and negative reviews on Amazon, even if they are listed as coming from verified purchasers. The key value of a feature like this would be utility, but the status boost from being a product expert would be the energy turning the flywheel. The thing is, Amazon actually has a track record of harnessing social dynamics in service of its retail business with features like reviewer rankings and global sales rank (both are discussed a bit further down).

As for Netflix, I actually think social isn’t as useful as many would think in generating video recommendations (that’s a discussion for another day, but suffice it to say there is some narcissism of small differences when it comes to film taste). However, as an amplifier of Netflix as the modern water cooler, as a way to encourage herd behavior, social activity can serve as an added layer of buzz that for now is largely opaque to users inside Netflix apps. It's a strategy that is only viable if you can achieve the size of subscriber base that a Netflix has, and thus it is a form of secondary scale advantage that they could leverage more.

However, there's another reason that senior execs at most companies, even social networks, are ill-suited to designing and leveraging social features. It’s a variant of winner's curse.

Let Them Eat Cake

You'll hear it again and again: the easiest way to empathize with your users is to be the canonical user yourself. I tend to subscribe to this idea, which is unfortunate because it means I have hundreds of apps installed on my phone at any point in time, just trying to keep up with the product zeitgeist.

With social networks, one of the problems with seeing your own service through your users’ eyes is that every person has a different experience given who they follow and what the service's algorithm feeds them. When you have hundreds of millions or even billions of users, across different cultures, how do you accurately monitor what's going on? Your metrics may tell you that engagement is high and growing, but what is the composition of that activity, and who is exposed to what parts?

Until we have metrics that distinguish between healthy and unhealthy activity, social network execs largely have to steer by anecdote, by licking a finger and sticking it in the air to ascertain the direction of the wind. Some may find it hard to believe that execs plead ignorance when alerted to the scope of problems on their services, but I don't. When it comes to running a community, the thickest veil of ignorance is the tidy metrics dashboard that munges hundreds, thousands, or maybe even millions of cohorts into just a handful.

To really get a sense of the health of a social network, one must understand the topology of the network, and the volume and nature of connections and interactions among hundreds of millions or even billions of users. It's impossible to process them all, but it's just as difficult today to summarize them without losing all sorts of critical detail.

But perhaps even more confounding is that executives at successful social networks are some of the highest status people in the world. Forget first-world problems; they have 0.1% or 0.001% problems. On a day-to-day basis, they hardly face a single issue that their core users grapple with constantly. Engagement goals may drive them towards building services that are optimized as social capital games, but they themselves are hardly in need of more status, except of a type they won't find on their own networks.

[The one exception may be Jack Dorsey, as any tweet he posts now attracts an endless stream of angry replies. It’s hard to argue he doesn’t understand firsthand the downside risk of a public messaging protocol. Maybe, for victims of harassment on Twitter, we need a Jack that is less thick-skinned and stoic, not more.]

The Social Capital - Financial Capital Exchange

[If you fully believe in the existence and value of social capital, you can skip this section, though it may be of interest in understanding some ways to estimate its value.]

That some of the largest, most valuable companies in history have been built so quickly in part on creating status games should be enough to convince you of the existence and value of social capital. Since we live in the age of social media, we live in perhaps the peak of social capital assets in the history of civilization. However, as noted earlier, one of the challenges of studying it is that we don't have agreed-upon definitions of how to measure it and thus to track its flows.

I haven't found a clean definition of social capital, but think of it as capital that derives from networks of people. If you want to explore the concept further, this page has a long list of definitions from the literature. The fact is, I have deep faith that when it comes to social capital, all my readers, as Supreme Court Justice Potter Stewart once said about pornography, will know it when they see it.

But more than that, the dark matter that is social capital can be detected through those exchanges in which it converts into more familiar stores of value.

If you've ever borrowed a cup of milk from your neighbor, or relied on them to watch your children for an afternoon, you know the value of social capital. If you lived in an early stage of human history, when people wandered in small nomadic tribes and regularly clubbed people of other tribes to death with sticks and stones, you also know the value of social capital through the protective cocoon of its presence and the sudden violence in its absence.

Perhaps the easiest way to spot social capital is to look at places where people trade it for financial capital. With the maturing of social networks, we've seen the infrastructure to facilitate these exchanges come a long way. These trades allow us to assign a tangible value to social capital, the way one might understand the value of an intangible asset like a leveled-up World of Warcraft character when it is sold on the open market.

Perhaps the most oft-cited example of a social-to-financial-capital exchange is the type pulled off by influencers on Instagram and YouTube. I've met models who, in another life, might be mugging outside an Abercrombie and Fitch or working the front door at some high end restaurant in Los Angeles, but who instead now pull down over seven figures a year for posting photos of themselves luxuriating in specific resorts, wearing and using products from specific sponsors. When Jake or Logan Paul post a video of themselves preening in front of their new Lamborghini in the driveway of the mansion they bought with money from their YouTube channels, we know some exchange of social capital for financial capital has occurred upstream. Reshape distribution and you reshape the world.

Similarly, we see flows in the other direction. People buying hundreds of thousands of followers on Twitter is one of the cleanest examples of trading financial capital for social capital. Later, that social capital can be converted back into financial capital in any number of ways, including charging sponsors for posts. Depending on the relative value in each direction, there can be arbitrage.
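
A toy worked example of what arbitrage means here, with entirely invented prices: if a follower can be bought for less than the sponsorship revenue that follower eventually supports, the round trip is profitable.

```python
# Hypothetical prices, for illustration only.
cost_per_bought_follower = 0.02     # dollars paid per follower to a follower farm
sponsor_rate_per_follower = 0.005   # dollars a sponsor pays per follower, per post
sponsored_posts = 10                # posts sold before the account loses credibility

revenue_per_follower = sponsor_rate_per_follower * sponsored_posts     # 0.05
profit_per_follower = revenue_per_follower - cost_per_bought_follower  # 0.03
print(f"profit per purchased follower: ${profit_per_follower:.2f}")
```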

[Klout, a much-mocked company online, attempted to more precisely track social capital valuations of people online, but, just as the truly wealthy mock the nouveau riche as gauche, many found the explicit measurement attempts unseemly. Most of these same people, however, compete hard for social capital online, so ¯\_(ツ)_/¯. The designation of which status games are acceptable is itself a status game.]

Asia, where monetization models differ for a variety of cultural and contextual reasons, provides an even cleaner valuation of social capital. There, many social networks allow you to directly turn your social capital into financial capital, without leaving the network. For example, on live-streaming sites like YY, you can earn digital gifts from your viewers which cost actual money, the value of which you split with the platform. In the early days, a lot of YY consisted of cute girls singing pop songs. These days, as seen in the fascinating documentary People’s Republic of Desire, it has evolved into much more.

Agencies have sprung up in China to develop and manage influencers, almost like farm systems in baseball with player development and coaches. The speed at which social capital can be converted into your own branded product lines is accelerating by leaps and bounds, and nowhere more so than in China.

Meanwhile, on Twitter, if one of your tweets somehow goes massively viral, you still have to attach a follow-up tweet with a link to your GoFundMe page, a vulgar monetization hack in comparison. It’s China, not the U.S., that is the bleeding edge of influencer industrialization.

I'm skeptical that all of Asia's monetization schemes will export to American culture, but for this post, the important thing is that social capital has real financial value, and networks differ along the spectrum of how easily that exchange can be made.

Social Capital Accumulation and Storage

As with cryptocurrency, it's no use accumulating social capital if you can't take ownership of it and store it safely. Almost all successful social networks are adept at providing both accumulation and storage mechanisms.

It may sound obvious now, but consider the many apps and services that failed to provide something like this and saw all their value leak to other social networks. Hipstamatic came before Instagram and was the first photo filter app of note that I used on mobile. But, unlike Instagram, it charged for its filters and had no profile pages, social network, or feed. I used Hipstamatic filters to modify my iPhone photos and then posted them to other social networks like Facebook. Hipstamatic provided utility but captured none of the social capital that came from the use of its filters.

Contrast this with a company like Musical.ly, which I mentioned above. They came up with a unique proof of work burden, but unlike Hipstamatic, they wanted to capture the value of the social capital their users would mine by creating their musical skits. They didn't want those skits to just be uploaded to Instagram or Facebook or other networks.

Therefore, they created a feed within the app to give their best users distribution for their work. By doing so, Musical.ly owned the social capital it helped generate. If your service is free, the best alternative to capturing the value you create is to own the marketplace where that value is realized and exchanged.

Musical.ly founder Alex Zhu likens starting a new social network to founding a new country and trying to attract citizens from established countries. It's a fun analogy, though I prefer the cryptocurrency metaphor because most users are citizens of multiple social networks in the tech world, managing their social capital assets across all of those networks as a sort of diversified portfolio of status.

For the individual user, we've standardized on a few basic social capital accumulation mechanisms. There is the profile, to which your metrics attach, most notably your follower count and list. Followers or friends are the atomic unit of many social networks, and the advantage of followers as a measure is that the count generally only grows over time. It also makes for an easy global ranking metric.

Local scoring of social capital at the atomic level usually exists in the form of likes of some sort, one of the universal primitives of just about every social network. These are more ephemeral, given the nature of feeds, which tend to prioritize distribution of more recent activity, but most social networks have some version of them since followers tend to accumulate more slowly. Likes correlate more strongly with your activity volume and serve as a source of continual short-term social capital injections, even if each like is, in the long run, less valuable than a follower or a friend.

Some networks allow for accelerated distribution of posts through resharing, like retweeting (with many unintended consequences, but that's a discussion for another day). Some also allow comments, and there are other network-specific variants, but most of these are some form of social capital that can attach to posts.
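
To make the distinction concrete, here is a minimal sketch of these primitives as data structures; the field names are mine, not any network's actual schema: a durable, cumulative follower count attached to the profile, and more ephemeral likes, reshares, and comments attached to individual posts.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    likes: int = 0                 # short-term, per-post social capital
    reshares: int = 0              # accelerated distribution (retweets, etc.)
    comments: list = field(default_factory=list)

@dataclass
class Profile:
    handle: str
    followers: int = 0             # cumulative, global ranking metric
    posts: list = field(default_factory=list)

    def total_likes(self) -> int:
        return sum(post.likes for post in self.posts)
```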

Again, this isn't earth-shattering to most users of social networks. However, where it’s instructive is in examining those social networks which make such social capital accumulation difficult.

A good example is the anonymous social network, like Whisper or Secret. The premise of such social networks was that anonymity would enable users to share information and opinions they would otherwise be hesitant to be associated with. But, as is often the case, that strength turned out to be a weakness, because users couldn't really claim any of the social capital they'd created there. Many of the things written on these networks were so toxic that to claim ownership of them would be social capital negative in the aggregate.

A network like Reddit solved this through its implementation of karma, but it's fair to say that it's also been a long struggle for Reddit to suppress the dark asymmetric incentives unlocked by detaching social capital from real-life identity and reputation.

[Balaji Srinivasan once mentioned that someday cryptocurrencies might allow someone to extract the value from an anonymous social network without revealing their identity publicly, but for now, at least, a lot of the status on social networks isn't monetary in nature. A lot of it's just for the lulz.]

For any single user, the stickiness of a social network often correlates strongly with the volume of social capital they've amassed on that network. People sometimes will wholesale abandon social networks, but it's rare unless the status earned there has undergone severe deflation.

Social capital tends to be non-fungible, which also makes it easier to abandon ship. If your Twitter followers aren't worth anything on another network, it's less painful to just walk away from the account when it isn't worth the trouble anymore. It's strange to think that social networks like Twitter and Facebook once allowed users to wholesale export their graphs to other networks, since doing so let competing networks jumpstart their own social capital assets, but that only goes to show how even the largest social networks at the time underestimated the value of those assets. Facebook also, at one point, seemed to overestimate the value of the inbound social capital it would capture by allowing third-party services and apps to build on top of its graph.

The restrictions on porting graphs are a positive from the perspective of the incumbent social networks, but from a user point of view, they're frustrating. Given the difficulty of grappling with social networks under the consumer welfare standard for antitrust, one option for curbing the power of massive network effects businesses is to require that users be allowed to take their graphs with them to other networks (as many have suggested). This would blunt the power of social networks along the social capital axis and force them to compete more on the utility and entertainment axes.

Social Capital Arbitrage

Because social networks often attract different audiences, and because the configuration of their graphs often differs even when users overlap, opportunities exist to arbitrage social capital across apps.

A prominent user of this tactic was @thefatjewish, the popular Instagram account (his real name is Josh Ostrovsky). He accumulated millions of followers on Instagram in large part by taking other people's jokes from Twitter and other social networks and posting them as his own on Instagram. Not only did he rack up followers and likes by the millions, he even got signed by CAA!

When he got called on it, he claimed it wasn't what he was about. He said, "Again, Instagram is just part of a larger thing I do. I have an army of interns working out of the back of a nail salon in Queens. We have so much stuff going on: I'm writing a book, I've got rosé. I need them to bathe me. I've got so many other things that I need them to do. It just didn't seem like something that was extremely dire." Which is really a long, bizarre way of saying, you caught me. Let he who does not have an army of interns bathing them throw the first stone.

Since then, similar joke aggregator accounts on Instagram have continued to proliferate, but some of them now follow the post-@thefatjewish-scandal social norm of including proper attribution for each joke in the photo (for example, including the Twitter username and profile pic within the photo of the "borrowed" tweet). Many do not, though, and even for those that do, the most prominent can trigger a backlash. The hashtag #fuckfuckjerry is an emergent protest against the popular Instagram account @fuckjerry which, like @thefatjewish, curated the best jokes from others and daytraded that into a small media company, one that figured in the Fyre Festival debacle.

As long as we have multiple social networks that don't quite work the same way, there will continue to be social media arbitragers copying work from one network to another, accumulating social capital by closing the distribution gap. Before the internet, men resorted to quoting movies or Mitch Hedberg jokes in conversation to steal a bit of personality and wit from a more gifted comedian. This is the modern form of that, supercharged with internet-scale reach.

At some level, a huge swath of social media posts are just attempts to build status off of someone else's work. The two tenets at the start of this article predict that this type of arbitrage will always be with us. Consider someone linking to an article from Twitter or Facebook, or posting a screenshot of a paragraph from someone else's book. The valence of the reaction from the original creators seems to vary according to how the spoils of resharing are divvied up. The backlash to Instagram accounts like @thefatjewish and @fuckjerry may stem from the fact that they don't really share value with those whose jokes they redistribute, whereas posting an excerpt from a book on Twitter, for example, generates welcome publicity for the author.

Social Capital Games as Temporary Energy Sources

Structured properly, social capital games can serve as an invaluable incentive. For example, curation of good content remains a never-ending problem in this age of infinite content, so rewarding people for surfacing interesting things remains one of the oldest and most reliable exchanges on the internet.

A canonical example is Reddit, where users bring interesting links, among other content, in exchange for a currency literally named karma. Accumulate enough karma and you'll unlock other benefits, like the ability to create your own subreddit, or to join certain private subreddits.
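
As a sketch of how that kind of gating works in general (the thresholds and ability names below are made up; Reddit's actual rules differ), reputation simply acts as a key that unlocks capabilities:

```python
# Hypothetical karma thresholds, for illustration only.
UNLOCK_THRESHOLDS = {
    "post_links": 10,
    "create_subreddit": 500,
    "join_private_subreddit": 2000,
}

def unlocked_abilities(karma: int) -> list:
    """Return the abilities a user with this much karma has earned."""
    return [ability for ability, needed in UNLOCK_THRESHOLDS.items() if karma >= needed]

print(unlocked_abilities(750))  # ['post_links', 'create_subreddit']
```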

Twitter is another social network where people tend to bring interesting content in the hopes of amassing more followers and likes. If you follow enough of the right accounts, Twitter becomes an interestingness pellet dispenser.

Some companies which aren't typically thought of as social networks still turn to social capital games to solve a particular problem. On one Christmas vacation, I stumbled downstairs for a midnight snack and found my friend, a father of three, still up, typing on his laptop. What, I asked, was he doing still up when he had to get up in a few hours to take care of his kids? He was, he admitted sheepishly, banging out a litany of reviews to try to maintain his Yelp Elite status. To this day, some of my friends still speak wistfully about some of the Yelp Elite parties they attended back in the day.

Think of how many reviews Yelp accumulated in the early days just by throwing a few parties. It was, no doubt, well worth it, and at the point when it isn't (what's the marginal value of writing the, at last count, 9655th review of Ippudo in New York City?), it's something easily dialed back or deprecated.

Amazon isn't typically thought of as a company that understands social, but in its earliest days, before even Yelp, it employed a similar tactic to boost its volume of user reviews. Amazon Top Reviewers was a globally ranked list of every reviewer on all of Amazon. You could boost your standing by accumulating helpful votes from shoppers on your reviews. I'll always remember Harriet Klausner, who dominated that list for years, reviewing seemingly every book in print. Amazon still maintains a top customer reviewer list, but it has been devalued over time as review volume is no longer a real problem for Amazon.

Another example of a status game that Amazon employed to great effect, and which doesn't exist anymore, was Global Sales Rank. For a period, every product on Amazon was ranked against every other product in a dynamic sales leaderboard, and the figure was displayed prominently near the top of each product detail page. Book authors pointed customers to Amazon in the hope of goosing their sales rank, much the way authors today commit to buying some volume of their own book at launch in the hopes of landing on the NYTimes bestseller list its first week.

IMDb and Wikipedia are two organizations which built up entire valuable databases almost entirely by building mechanisms to harness the equal mix of status-seeking and altruism in domain experts. As with Reddit, accumulating a certain amount of reputation on these services unlocked additional abilities, and both built massive databases of information with very low production and editorial costs.

You can think of social capital accumulation incentives like these as ways to transform the potential energy of status into whatever form of kinetic energy your venture needs.

Why Most Celebrity Apps Fail

For a while, a trend among celebrities was to launch their own app. The Kardashian app is perhaps the most prominent example, but there are others.

From a social capital perspective, these create little value because they simply draw down upon the celebrity's own status. Almost every person who joins just wants content from the eponymous celebrity. The volume of interaction between the users of the app themselves, the fans, is minimal to non-existent. Essentially these apps are self-owned distribution channels for the stars, and as such, they tend to be vanity projects rather than durable assets.

One can imagine such apps trying to foster more interaction among the users themselves, but that is a really complex undertaking, and most celebrity ventures have neither the skills to take it on nor the will or capital necessary to see it through.

Another way to think of all these celebrity ventures is to measure the social capital and utility of the product or service if you remove all the social capital from the celebrity in question. A lot minus a lot equals zero.

Conclusion: Everybody Wants to Rule the World

In the immortal words of Obi-Wan Kenobi, "Status is what gives a Jedi his power. It's an energy field created by all living things. It surrounds us and penetrates us. It binds the galaxy together."

That many of the largest tech companies are, in part, Status as a Service businesses is not often discussed. Most people don't like to admit to being motivated by status, and few CEOs are going to admit that the job to be done for their company is stroking people's egos.

From a user perspective, people are starting to talk more and more about the soul-withering effects of playing an always-on status game through the social apps on their always-connected phones. You could easily replace Status as a Service with FOMO as a Service. It's one reason you can meet so many outrageously wealthy people in Manhattan or Silicon Valley who are still miserable.

This piece is not my contribution to the well-trod genre of Medium thinkpieces counseling stoicism and Buddhism or transcendental meditation or deleting apps off of your phone to find inner peace. There is wisdom in all of those, but if I have anything to offer on that front, it’s this: if you want control of your own happiness, don’t tie it to someone else’s scoreboard.

Recall the wisdom of Neil McCauley in the great film Heat.

To get off the hedonic treadmill, heed the words of Robert DeNiro’s Neil McCauley in that classic film about status, Heat, “Don't let yourself get attached to any social capital you are not willing to walk out on in 30 seconds flat if you feel the heat around the corner.”

At the end of Heat, he fails to follow his own advice, and look what happened to him.

Yet, I come not to bury Caesar, but also not to praise him. Rather, as Emily Wilson says at the start of her brilliant new translation of The Odyssey, “tell me about a complicated man.” So much of the entire internet was built on a foundation of social capital, of intangible incentives like reputation. Before the tech giants of today, I combed through newsgroups, blogs, massive FAQs, and countless other resources built by people who felt, in part, a jolt of dopamine from the recognition that comes from contributing to the world at large. At Amazon, someone coined a term for this type of motivational currency: egoboo (short for, you guessed it, egoboost). Something like Wikipedia, built in large part on egoboo, is a damned miracle. I don’t want to lose that. I don’t think we have to lose that.

Of course, like the Force, status is equally potent as fuel for the darkest, cruelest parts of human nature. If you look at the respective mission statements of Twitter and Facebook—"to give everyone the power to create and share ideas and information instantly without barriers" and “to give people the power to share and make the world more open and connected”—what is striking is the assumption that these are fundamentally positive outcomes. There’s no questioning of what the downsides of connecting everyone and enabling instant sharing of information among anyone might be.

Of course, both companies, and many others, have now had to grapple with the often unbounded downside risk of just wiring together billions of people with few guardrails. Reading the Senate Intelligence Committee reports on Russian infiltration of social networks in the 2016 election, what emerges is unsettling: in so many ways the Russians had a more accurate understanding of the users of these services than the product teams running them. In any case, much of the cost has been borne not by the companies themselves but by society. Companies benefit from the limitless upside of their models, so it's not unreasonable to expect them to bear the costs, just as we expect corporations to bear the cost of polluting rivers with their factories. If they did, as Hunter Walk has noted, profit margins would be lower, but society and discourse might be healthier.

Contrary to some popular Twitter counsel, the problem is not that the leaders of these companies don’t have humanities degrees. But the solution also doesn’t lie in ignoring that humans are wired to pursue social capital. In fact, overlooking this fundamental aspect of human nature arguably landed us here, at the end of this first age of social network goliaths, wondering where it all went haywire. If we think of these networks as marketplaces trading only in information, and not in status, then we're only seeing part of the machine. The menacing phone call has been coming from inside the house all along. Ben Thompson refers to this naivete from tech executives as the pollyannish assumption.

Having worked on multiple products in my career, I'm sympathetic to the fact that no product survives engagement with humans intact. But this first era of Status as a Service businesses is closing, and pleading ignorance won't work moving forward. To do so is to come off like Captain Louis Renault in Casablanca.

[GIF: Captain Renault from Casablanca, shocked to find that gambling is going on.]

Invisible asymptotes

"It is said that if you know your enemies and know yourself, you will not be imperiled in a hundred battles; if you do not know your enemies but do know yourself, you will win one and lose one; if you do not know your enemies nor yourself, you will be imperiled in every single battle." - Sun Tzu

My first job at Amazon was as the first analyst in strategic planning, the forward-looking counterpart to accounting, which records what already happened. We maintained several time horizons for our forward forecasts, from granular monthly forecasts to quarterly and annual ones to even five- and ten-year projections for the purposes of fund-raising and, well, strategic planning.

One of the most difficult things to forecast was our adoption rate. We were a public company, though, and while Jeff would say, publicly, that "in the short run, the stock market is a voting machine, in the long run, it's a scale," that doesn't provide any air cover for strategic planning. It's your job to know what's going to happen in the future as well as possible, and every CFO of a public company will tell you that they take the forward guidance portion of their job seriously. Because of information asymmetry, analysts who cover your company depend quite a bit on guidance on quarterly earnings calls to shape their forecasts and coverage for their clients. It's not just that giving the wrong guidance might lead to a correction in your stock price but that it might indicate that you really have no idea where your business is headed, a far more damaging long-run reveal.

It didn't take long for me to see that our forecasts out a few months, quarters, and even a year were really accurate (and precise!). What was more of a puzzle, though, was the long-term outlook. Every successful business goes through the famous S-curve, and most companies, and their investors, spend a lot of time looking for that inflection point towards hockey-stick growth. But just as important, and perhaps less well studied, is that unhappy point later in the S-curve, when you hit a shoulder and experience a flattening of growth.
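To make the shape concrete, here is a minimal sketch of a logistic S-curve in Python. Every parameter is invented for illustration; the only point is that the curve has a ceiling term baked into it, and the year-over-year growth rate starts sagging well before the raw numbers make the ceiling obvious.

    import math

    def logistic(t, ceiling, midpoint, rate):
        """Classic S-curve: looks exponential early on, then flattens as it
        approaches the ceiling, i.e. the asymptote this section is about."""
        return ceiling / (1 + math.exp(-rate * (t - midpoint)))

    # Invented parameters: a 10M-customer ceiling, inflection around year 4.
    CEILING, MIDPOINT, RATE = 10_000_000, 4, 1.2

    prev = logistic(-1, CEILING, MIDPOINT, RATE)
    for year in range(9):
        customers = logistic(year, CEILING, MIDPOINT, RATE)
        print(f"year {year}: {customers:12,.0f} customers, {customers / prev - 1:+7.1%} y/y")
        prev = customers

Run it and the growth rate is already decelerating by the midpoint, years before the curve visibly flattens, which is part of why the ceiling stays invisible unless you go hunting for it.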

One of the huge advantages for us at Amazon was that we always had a fairly good proxy for our total addressable market (TAM). It was easy to pull the statistics for the size of the global book market. Just as a rule of thumb, one could say that if we took 10% of the global book market it would mean our annual revenues would be X. One could be really optimistic and say that we might even expand the TAM, but finance tends to be the conservative group in the company by nature (only the paranoid survive and all that).

When I joined Amazon I was thrown almost immediately into working with a bunch of MBA's on business plans for music, video, packaged software, magazines, and international. I came to think of our long-term TAM as a straightforward layer cake of different retail markets.
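As a hedged sketch of that layer-cake arithmetic (the market sizes and share assumptions below are invented for illustration, not anything from actual Amazon planning), the calculation is nothing fancier than summing category size times attainable share:

    # Toy TAM layer cake: each retail category contributes
    # (market size) x (assumed attainable share). Every number is invented.
    categories = {
        "books":             (25e9, 0.10),
        "music":             (15e9, 0.08),
        "video":             (12e9, 0.05),
        "packaged software": (20e9, 0.03),
    }

    for name, (size, share) in categories.items():
        print(f"{name:>18}: ${size * share / 1e9:.1f}B attainable")
    total = sum(size * share for size, share in categories.values())
    print(f"{'total':>18}: ${total / 1e9:.1f}B")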

Still, the gradient of adoption was somewhat of a mystery. I could, in my model, understand that one side of it was just exposure. That is, we could not obtain customers until they'd heard of us, and I could segment all of those paths of exposure into fairly reliable buckets: referrals from affiliate sites (we called them Associates), referrals from portals (AOL, Excite, Yahoo, etc.), and word-of-mouth (this was pre-social networking but post-email so the velocity of word-of-mouth was slower than it is today). Awareness is also readily trackable through any number of well-tested market research methodologies.

But for every customer who heard of Amazon, how could I forecast whether they'd make a purchase or not? Why would some people use the service while others decided to pass?

For so many startups and even larger tech incumbents, the point at which they hit the shoulder in the S-curve is a mystery, and I suspect the failure to see it coming sets in much earlier. The good thing is that identifying the enemy sooner allows you to address it. We focus so much on product-market fit, but once companies have achieved some semblance of it, most should spend much more time on the problem of product-market unfit.

For me, in strategic planning, the task in building my forecast was to flush out what I call the invisible asymptote: a ceiling that our growth curve would bump its head against if we continued down our current path. It's an important concept to understand for many people in a company, whether a CEO, a product person, or, as I was back then, a planner in finance.

Amazon's invisible asymptote

Fortunately for Amazon, and perhaps critically for much of its growth over the years, the single most important asymptote was one we identified very early on. Where our growth would flatten if we did not change our path was, in large part, due to this single factor.

We had two ways we were able to flush out this enemy. For people who did shop with us, we had, for some time, a pop-up survey that would appear right after you'd placed your order, at the end of the shopping cart process. It was a single question, asking why you didn't purchase more often from Amazon. For people who'd never shopped with Amazon, we had a third party firm conduct a market research survey where we'd ask those people why they did not shop from Amazon.

Both converged, without any ambiguity, on one factor. You don't even need to rewind to that time to remember what that factor is because I suspect it's the same asymptote governing e-commerce and many other related businesses today.

Shipping fees.

People hate paying for shipping. They despise it. It may sound banal, even self-evident, but understanding that was, I'm convinced, so critical to much of how we unlocked growth at Amazon over the years.

People don't just hate paying for shipping, they hate it to a literally irrational degree. We know this because our first attempt to address it was to show, in the shopping cart and checkout process, that even after paying shipping, customers were saving money over driving to their local bookstore to buy a book because, at the time, most Amazon customers did not have to pay sales tax. That wasn't even factoring in the cost of getting to the store, the depreciation costs on the car, and the value of their time.

People didn't care about this rational math. People, in general, are terrible at valuing their time, perhaps because for most people monetary compensation for one's time is so detached from the event of spending one's time. Most time we spend isn't like deliberate practice, with immediate feedback.

Wealthy people tend to receive a much more direct and immediate payoff for their time, which is why they tend to be better about valuing it. This is why the first thing that most ultra-wealthy people I know do upon becoming ultra-wealthy is to hire a driver and start to fly private. For most normal people, the opportunity cost of their time is far more difficult to ascertain moment to moment.

You can't imagine what a relief it is to have a single overarching obstacle to focus on as a product person. It's the same for anyone trying to solve a problem. Half the comfort of diets that promise huge weight loss in exchange for cutting out sugar or carbs or whatever is feeling like there's a really simple solution or answer to a hitherto intractable, multi-dimensional problem.

Solving people's distaste for paying shipping fees became a multi-year effort at Amazon. Our next crack at this was Super Saver Shipping: if you placed an order of $25 or more of qualified items, which included mostly products in stock at Amazon, you'd receive free standard shipping.

The problem with this program, of course, was that it caused customers to reduce their order frequency, waiting until their orders qualified for the free shipping. In select cases, forcing customers to minimize consumption of your product or service is the right long-term strategy, but this wasn't one of those.

That brings us to Amazon Prime. This is a good time to point out that shipping physical goods isn't free. Again, self-evident, but it meant that modeling Amazon Prime could lead to widely diverging financial outcomes depending on what you thought it would do to the demand curve and average order composition.

To his credit, Jeff decided to forgo testing and just go for it. It's not so uncommon in technology to focus on growth to the exclusion of all other things and then solve for monetization in the long run, but it's easier to do so for a social network than for a retail business with real unit economics. "The more you sell, the more you lose" is not and has never been a sustainable business model (people mistake this for Amazon's business model all the time, and still do, which ¯\_(ツ)_/¯).

The rest, of course, is history. Or at least recent history. It turns out that you can have people pre-pay for shipping through a program like Prime and they're incredibly happy to make the trade. And yes, on some orders, and for some customers, the financial trade may be a lossy one for the business, but on net, the dramatic shift in the demand curve is stunning and game-changing.
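To make that modeling sensitivity concrete, here is a toy sketch of per-member Prime economics. Every number is a placeholder I've invented; the point is only that the sign of the whole program flips depending on your assumptions about order frequency and per-order shipping cost, which is exactly why the model alone couldn't settle the decision.

    def prime_member_pnl(orders_per_year, margin_per_order, shipping_cost_per_order, membership_fee=79):
        """Annual contribution per member: the fee, plus product margin on every
        order, minus the shipping cost the retailer now absorbs on every order."""
        return membership_fee + orders_per_year * (margin_per_order - shipping_cost_per_order)

    # Two hypothetical worlds that differ only in per-order shipping cost.
    for ship_cost in (4.50, 8.00):                # invented
        print(f"shipping cost ${ship_cost:.2f}/order:")
        for orders in (8, 20, 40):                # the demand-shift assumption
            pnl = prime_member_pnl(orders, margin_per_order=6.00, shipping_cost_per_order=ship_cost)
            print(f"  {orders:>2} orders/yr -> ${pnl:>7.2f} per member")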

And, as Jeff always noted, you can make micro-adjustments in the long run to tweak the profit leaks. For some really large, heavy items, you can tack on shipping surcharges or just remove them from qualifying for Prime. These days, some items on Amazon are marked as "Add-on items" and can only be ordered alongside enough other items that they ship together rather than in isolation.

[Jeff counseled the same "fix it later" strategy in the early days when we didn't have good returns tracking. For a window of time in the early days of Amazon, if you shipped us a box of books for returns, we couldn't easily tell if you'd purchased them at Amazon, and so we'd credit you for them, no questions asked. One woman took advantage of this loophole and shipped us boxes and boxes of books. Given our limited software resources, Jeff said to just ignore the lady and build a way to solve for that later. It was really painful, though, so eventually customer service representatives all shared, amongst themselves, the woman's name so they could look out for it in return requests even before such systems were built. Like a mugshot pinned to every monitor saying "Beware this customer." A tip of the hat to you, ma'am, wherever you are, for your enterprising spirit in exploiting that loophole!]

Prime is a type of scale moat for Amazon because it isn't easy for other retailers to match from a sheer economic and logistical standpoint. As noted before, shipping isn't actually free when you have to deliver physical goods. The really challenging unit economics of delivery businesses like Postmates, when paired with people's aversion to paying for shipping, make for tough sledding, at least until the cost of delivering such goods can be lowered drastically, perhaps by self-driving cars or drones or some such technology shift.

Furthermore, very few customers shop enough with retailers other than Amazon to make a pre-pay program like Prime worthwhile to them. Even if they did, Amazon's economies of scale in shipping and deep knowledge of how to distribute its inventory optimally very likely mean its unit economics on delivery are superior.

The net of it is that long before Amazon hit what would've been an invisible asymptote on its e-commerce growth, it had already erased it.

Know thine enemy.

Invisible asymptotes are...invisible

An obvious problem for many companies, however, is that they are creating new types of businesses and services that don't lend themselves to easily identifying such invisible asymptotes. Many are not like Amazon, with readily tracked metrics like the size of the global book market against which to peg their TAM.

Take social networks, for example. What's the shoulder of the curve for something like Facebook? Twitter? Instagram? Snapchat?

Some of the limits to their growth are easier to spot than others. For messaging and some more general social networking apps, for example, in many cases network effects are geographical. Since these apps build on top of real-world social graphs, and many of those are geographically clustered, there are winner-take-all dynamics such that in many countries one messaging app dominates, like Kakao in Korea or Line in Taiwan. There can be geopolitical considerations, too, that help ensure that WeChat will dominate in China to the exclusion of all competitors, for example.

For others, though, it takes a bit more product insight, and some might say intuition, to see the ceiling before you bump into it. For both employees and investors, understanding product-market unfit follows very closely on identifying product-market fit as an existential challenge.

Without direct access to internal metrics and research, it's difficult to use much other than public information and my own product intuition to analyze potential asymptotes for many companies, but let's take a quick survey of several more prominent companies and consider some of their critical asymptotes (these companies are large enough that they likely have many, but I'll focus on the macro). You can apply this to startups, too, but there are some differences between achieving initial product-market fit and avoiding the shoulder in the S-curve after already having found it.

Twitter

Let's start with Twitter, for many in tech the most frustrating product from the perspective of the gap between the actual and the potential. Its user growth has been flat for quite some time, and so it can be said to have already run full speed into an invisible asymptote. In quarterly earnings calls, it's apparent that management often has no idea if or when or how that might shift because the guidance is often a collective shrug.

One popular early school of thought on Twitter, a common pattern with most social networks, is that if more users experienced what the power users or early adopters experience, they'd turn into active users themselves. Many a story of social networks that have continued to grow points to certain keystone metrics as pivotal to unlocking product-market fit. For example, once you've friended 30 people on Facebook, you're hooked. For Twitter, an equivalent may be following enough people to generate an interesting feed.
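Here is a minimal, hypothetical sketch of how a team might hunt for such a keystone metric, bucketing new users by how many accounts they followed in their first week and comparing retention across buckets; the column names and data are invented.

    import pandas as pd

    # Invented cohort data: one row per new user.
    users = pd.DataFrame({
        "follows_week1": [2, 5, 12, 28, 35, 60, 3, 31, 8, 45],
        "active_day30":  [0, 0,  1,  1,  1,  1, 0,  1, 0,  1],
    })

    # Bucket by week-one follow count, then compare day-30 retention per bucket.
    buckets = pd.cut(users["follows_week1"], bins=[0, 10, 30, 10_000],
                     labels=["<10 follows", "10-30 follows", "30+ follows"])
    print(users.groupby(buckets, observed=True)["active_day30"].mean())

The usual caveat is that this is correlation: the users who follow thirty accounts in week one may simply be the ones who already loved the product, which is part of why the "get everyone over the threshold" theory so often disappoints.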

Pattern-matching moves more quickly through Silicon Valley than through almost any other place I've lived, so stories like that are passed around among employees, in Board meetings, and in other places where the rich and famous tech elite hobnob, and so it's not surprising that this theory is raised for every social network that hits the shoulder in its S-curve.

There's more than a whiff of Geoffrey Moore's Crossing the Chasm in this idea, some sense that moving from early adopters to the mainstream involves convincing more users to use the same product/service as early adopters do.

In the case of Twitter, I think the theory is wrong. Given the current configuration of the product, I don't think any more meaningful user growth is possible, and tweaking the product as it is now won't unlock any more growth. The longer they don't acknowledge this, the longer they'll be stuck in a Red Queen loop of their own making.

Sometimes, the product-market fit with early adopters is only that. The product won't go mainstream because other people don't want or need that product. In these cases, the key to unlocking growth is usually customer segmentation, creating different products for different users.

Mistaking one type of business for the other can be a deadly mistake because the strategies for addressing them are so different. A common symptom of this mistake is not seeing the shoulder in the S-curve coming at all, not understanding the nature of your product-market unfit.

I believe the core experience of Twitter has reached most everyone in the world who likes it. Let's examine the core attributes of Twitter the product (which I treat as distinct from Twitter the service, the public messaging protocol).

It is heavily text-based, built around snippets of text, first 140 and now 280 characters long, from people you've followed, presented in a vertically scrolling feed in some algorithmic order which, for the purposes of this exercise, I'll just consider roughly chronological.

For fans, most of whom are infovores, the nature of product-market fit is, as with many of our tech products today, one of addiction. Because the chunks of text are short, if one tweet is of no interest, you can quickly scan and scroll to another with little effort. Discovering tweets of interest in what appears to be a largely random order rewards the user with dopamine hits on that time-tested Skinner box variable-reward schedule. Instead of rats hitting levers for pellets of food, power Twitter users push and pull at their phone screens for the next tasty pellet of text.

For infovores, text, in contrast to photos or videos or music, is the medium of choice from a velocity standpoint. There is deep satisfaction in quickly decoding the textual information, and the scan rate is self-governed by the reader, unlike other mediums, which unfold at their own pace (this is especially the case with video, which infovores hate for its low scannability).

Over time, this loop tightens and accelerates through the interaction of all the users on Twitter. Likes and retweets and other forms of feedback guide people composing tweets to create more of the type that receive positive feedback. The ideal tweet (by which I mean one that will receive maximum positive feedback) combines some number of the following attributes:

  • Is pithy. Sounds like a fortune cookie. The character limit encourages this type of compression.

  • Is slightly surprising. This can be a contrarian idea or just a cliche encoded in a semi-novel way.

  • Rewards some set of readers' priors, injecting a pleasing dose of confirmation bias directly into the bloodstream.

  • Blasts someone that some set of people dislike intensely. This is closely related to the previous point.

  • Is composed by someone famous, preferably someone a lot of people like but don't consider to be a full-time Tweeter, like Chrissy Teigen or Kanye West.

  • Is on a topic that most people think they understand or on which they have an opinion.

Of course, the set of ideal qualities varies by subgroup on Twitter. Black Twitter differs from rationalist Twitter which differs from NBA Twitter. The meta point is that the flywheel spins more and more quickly over time within each group.

The problem is that, for those who don't use Twitter, almost all of the attributes the early adopter cohort finds ideal are the very ones other people find bewildering and unattractive. Many people find the text-heavy nature of Twitter to be a turn-off. The majority of people, actually.

The naturally random sort order of ideas that comes from the structure of Twitter, one which pings the pleasure centers of the current heavy user cohort when they find an interesting tweet, is utterly perplexing to those who don't get the service. Why should they hunt and peck for something of interest? Why are conversations so difficult to follow (actually, this is a challenge even for those who enjoy Twitter)? Why do people have to work so hard to parse the context of tweets?

Falling into the trap of thinking other users will be like you is especially pernicious because the people building the product are usually among that early adopter cohort. The easiest north star for a product person is their own intuition. But if they're working on a product that requires customer segmentation, being in the early adopter cohort means their instincts will keep guiding them towards the wrong north star, and the company will just keep bumping into the invisible asymptote without any idea why.

This points to an important qualifier to the "crossing the chasm" idea of technology diffusion. If the chasm is large enough, the same product can't cross it. Instead, on the other side of the gaping chasm is just a different country altogether, with different constituents who have different needs.

I use Twitter a lot (I recently received a notification I'd passed my 11-year anniversary of joining the service) but almost everyone in my family, from my parents to my siblings to my girlfriend to my nieces and nephews, has tried and given up on Twitter. It doesn't fulfill any deep-seated need for any of them.

It's not surprising to me that Twitter is populated heavily by journalists and a certain cohort of techies and intellectuals who all, to me, are part of a broader species of infovore. For them, opening Twitter must feel as if they've donned Cerebro and gained contact with thousands of brains all over the world, as if the fabric of their own brain had been flattened, stretched out wide, and laid on top of those of millions of others.

Quiet, I am reading the tweets.

Mastering Twitter requires doing something this group of people already do all the time in their lives and jobs; Twitter just accelerates it, like a bicycle for intellectual conversation and preening. Twitter, at its best, can provide a feeling of near real-time communal knowledge sharing that satisfies some of the same needs as something like SoulCycle or Peloton. A feeling of communion that also feels like it's productive.

If my instincts are right, then all the iterating around the margins on Twitter won't do much of anything to change the growth curve of the service. It might improve the experience for the current cohort of users and increase usage (for example, curbing abuse and trolls is an immediate and obvious win for those who experience all sorts of terrible harassment on the service), but it doesn't change the fact that this core Twitter product isn't for all the people who left the club long ago, soon after they walked in and realized it was just a bunch of nerds who'd ordered La Croix bottle service and were sitting around talking about Bitcoin and stoicism and transcendental meditation.

The good news is that the Twitter service, that public messaging protocol with a one-way follow model, could be the basis for lots of products that might appeal to other people in the world. Knowing the enemy can prevent wasting time chasing the wrong strategy.

Unfortunately, one of the main paths towards coming up with new products built on top of that protocol was the third-party developer program, and, well, Twitter has treated its third-party developers like unwanted stepchildren for a long time. For whatever reason (it's difficult to speculate without having been there), Twitter's internal rate of product development has been glacial. A vibrant third-party developer program could have helped by massively increasing the vectors of development on Twitter's very elegant public messaging protocol and datasets.

[Note, however, that I'm sympathetic to tech companies that restrict building clones of their service using their API's. No company owes it to others to allow people to build direct competitors to their own product. Most people don't remember, but Amazon's first web services offering was for affiliates to build sites to sell things. Some sites started building massive Amazon clones and so Amazon's web services evolved into other forms, eventually settling on what most people know it as today.]

In addition, I've long wondered if the shutting out of third-party developers on Twitter was an attempt to aggregate and own all of its own ad inventory. Both of these problems could've been solved by tweaking the Twitter third-party developer program. Developers could be offered two paths.

One option is that for every X tweets a developer pulled, they'd have to carry and display a Twitter-inserted ad unit. This would make it possible for Twitter to support third-party clients like Tweetbot that compete somewhat with Twitter's own clients. Maybe one of these developers would come up with improvements on Twitter's own client apps, and in doing so they'd increase Twitter's ad inventory.

The second option would be to pay some fixed fee for every X tweets pulled. That would force the developer to come up with some monetization scheme on their own to cover their usage, but at least the option would exist. I don't doubt that some enterprising developers might come up with some way to monetize a particular use case, for example for business research.
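For concreteness, here is a minimal sketch of how those two paths might be expressed, with the unit size, ad rate, and fee all invented; this is a thought experiment, not anything Twitter actually offered.

    TWEETS_PER_UNIT = 10_000          # invented: one "unit" of API usage

    def ad_supported_obligation(tweets_pulled, ads_per_unit=5):
        """Path one: the client owes Twitter a number of inserted ad impressions."""
        return (tweets_pulled // TWEETS_PER_UNIT) * ads_per_unit

    def paid_api_fee(tweets_pulled, fee_per_unit=2.50):
        """Path two: the developer pays a flat fee per unit of tweets pulled."""
        return (tweets_pulled // TWEETS_PER_UNIT) * fee_per_unit

    pulled = 1_250_000
    print(f"ads owed: {ad_supported_obligation(pulled):,} impressions")
    print(f"fee owed: ${paid_api_fee(pulled):,.2f}")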

Twitter the product/app has hit its invisible asymptote. Twitter the protocol still has untapped potential.

Snapchat

Snapchat is another example of a company that's hit a shoulder in its growth curve. Unlike Twitter, though, I suspect its invisible asymptote is less an issue of its feature set and more one of a generational divide.

That's not to say that making the interface less inscrutable earlier on wouldn't have helped a bit, but I suspect only at the margins. In fact, the opaque nature of the interface probably served Snapchat incredibly well when the product came along, regardless of whether or not it was intended that way. Snapchat came along at a moment when kids' parents were joining Facebook, and when Facebook had been around long enough for the paper trail of its early, younger users to come back and bite some of them.

Along comes a service that not only wipes out content by default after a short period of time but is inscrutable to the very parents who might crash the party. In fact, there's an entire class of products for which I believe an Easter Egg-like interface is actually preferable to an elegant, self-describing interface, long seen as the apex of UI design (more on that another day).

I've written before about selfies as a second language. At the root of that phenomenon is the idea that a generation of kids raised with smartphones with a camera front and back have found the most efficient way to communicate is with the camera, not the keyboard. That's not the case for an older cohort of users who almost never send selfies as a first resort. The very default of Snapchat to the camera screen is such a bold choice that it will probably never be the messaging app of choice for old folks, no matter how Snapchat moves around and re-arranges its other panes.

More than that, I suspect every generation needs spaces of its own, places to try on and leave behind identities at low cost and on short, finite time horizons. That applies to social virtual spaces as much as it does to physical spaces.

Look at how old people use Snapchat and you'll see lots of use of Stories. Watch a young person use Snapchat and it's predominantly one-to-one messaging using the camera (yes, I know some of the messages I receive on Snap are the same ones that person is sending to everyone one-to-one, but the hidden nature of that behavior allows me to indulge an egocentric rather than Copernican model of the universe). Now, it's possible for one app to serve multiple audiences that way, but it will have to compromise some or all of its user experience to do so.

At a deeper level, I think a person's need for ephemeral content varies across one's lifetime. It's of much higher value when one is young, especially in formative years. As one ages, and time's counter starts to run low, one turns nostalgic, and the value of permanent content, especially from long bygone days, increases; such content serves as a beautifully aged relic of another era. One also tends to be more adept at managing one's public image the more time passes, lessening the need for ephemerality.

All this is to say that I don't think making the interface of Snapchat easier to use is going to move it off of the shoulder on its S-curve. That's addressing a different underlying cause than the one that lies behind its invisible asymptote.

The good news for Snapchat is that I don't think Facebook is going to be able to attract the youngsters. I don't care if Facebook copies Snapchat's exact app one for one; it's not going to happen. The bad news for Snapchat is that it probably isn't going to attract the oldies either. The most interesting question is whether Snapchat's cohort stays with it for life, and the next interesting question is who attracts the next generation of kids to get their first smartphones. Will they, like every generation of youth before them, demand a social network of their own? Sometimes I think they will just to claim a namespace that isn't already spoken for. Who wants to be joesmith43213 when you can be joesmith on some new sexy social network?

As a competitor, however, Instagram is more worrisome than Facebook. It came along after Facebook, as Snapchat did, and so it had the opportunity to be a social network that a younger generation could roam as pioneers, mining so much social capital yet to be claimed. It is also largely an audio-visual network, which appeals to a more visually literate generation.

When Messenger incorporated Stories into its app, it felt like a middle-aged couple dressing in cowboy chic and attending Coachella. When Instagram cribbed Stories, though, it addressed a real supply-side content creation issue for the same young'uns who used Snapchat. That is, people were being too precious about what they shared on Instagram, decreasing usage frequency. By adding Stories, Instagram created a mechanism that wouldn't force content into the feed and whose ephemerality encouraged more liberal capture and sharing without the associated guilt.

This is a common pattern among social networks and products in general: to broaden their appeal, they tend to broaden their use cases. It's rare to see a product adhere strictly to its early specificity and still avoid hitting a shoulder in its adoption S-curve. Look at Facebook today compared to Facebook in its early days. Look at Amazon's product selection now compared to when it first launched.

It takes internal fortitude for a product team to make such concessions (I would say courage but we need to sprinkle that term around less liberally in tech). The stronger the initial product-market fit, the more vociferously your early adopters will protest when you make any changes. As with a band accused of selling out, there is an inevitable sense that a certain sharpness of flavor, of choice, has seeped out as more and more people join up and as a service loosens up and accommodates more use cases.

I remember seeing so many normally level-headed people on Twitter threaten to abandon the service when the company announced it was increasing the character limit from 140 to 280. The irony, of course, was that the character-limit increase likely improved the service for its current users while doing nothing to attract people who didn't use the service, even though the move was addressed mostly to the heathen.

Back to Snapchat. I wrote a long time ago that the power of social networks lies in their graph. That means many things, and in Snapchat's case it holds a particularly fiendish double bind. That Snapchat is the social network claimed by the young is both a blessing and a curse. Were a bunch of old folks to suddenly flock to Snapchat, it might induce a case of Groucho Marx's, "I don't care to belong to a club that accepts people like me as members."

Facebook

On the dimension of utility, Facebook's network effects continue to be pure and unbounded. The more people there are on Facebook, the more useful it is for anything that calls for a global directory. Even though many folks don't use Facebook a lot, it's rare I can't find them on Messenger if I don't have their email address or phone number. The complexity of analyzing Facebook is that it serves different needs in different countries and markets, social networks having strong path dependence in their usage patterns. In many countries, Facebook is the internet; it's odd as an American to travel to countries where businesses' only presence online is a Facebook page, so accustomed am I to searching for American businesses on the web or Yelp first.

When it comes to the "social" aspect of social networking, the picture is less clear-cut. Here I'll focus on the U.S. market since it's the one I'm most familiar with. Because Facebook is the largest social network in history, it may be encountering scaling challenges few other entities have ever seen.

The power of a social network lies in its graph, and that is a conundrum in many ways. One is that a massive graph is a blessing until it's a curse. For social creatures like humans who've long lived in smaller networks and tribes, a graph that conflates everyone you know is intimidating to broadcast to, except for those who have no compunction about performing no matter the audience size: famous people, marketers, and those monstrous people who share everything about their lives. You know who you are.

This is one of the diseconomies of scale for social networks that Facebook is first to run into because of its unprecedented size. Imagine you're in a room with all your family, friends, coworkers, casual acquaintances, and a lot of people you met just once but felt guilty about rejecting a friend request from. It's hundreds, maybe even thousands of people. What would you say to them? We know people maintain multiple identities for different audiences in their lives. Very few of us have to cultivate an identity for that entire blob of everyone we know. It's a situation one might encounter in the real world only a few times in life, perhaps at one's wedding, and later one's funeral. Online, though? It happens to be the default mode on Facebook's News Feed.

It's no coincidence that public figures, those who have the most practice at having to deal with this problem, are so guarded. As your audience grows larger, the chance that you'll offend someone deeply with something you say approaches 1.

When I scan my Facebook feed, I see fewer and fewer people I know sharing anything at all. Map one's sharing frequency against the size of one's friend list on Facebook and I highly suspect it looks like this:

[Chart: hypothesized post frequency declining as Facebook friend count grows.]

Again, not everyone is like this; there are some psychopaths who are comfortable sharing their thoughts no matter the size of the audience, but these people are often annoying, the type who dive right into politics at Thanksgiving before you've even spooned gravy over your turkey. This leads to a form of adverse selection where a few over-sharers take over your News Feed.

[Not everything one shares gets distributed to one's entire friend graph given the algorithmic feed. But you as the one sharing something have no idea who will see it so you have to assume that any and every person in your graph will see it. The chilling effect is the same.]

Another form of diseconomy of scale is behind the flight to Snapchat among the young, as outlined earlier. A sure way to empty a club or a dance floor is to have the parents show up; few things are more traumatic than seeing your Dad pretend-grind on your Mom when "Yeah!" by Usher comes on. Having your parents in your graph on Facebook means you have to assume they're listening, and there isn't some way to turn on the radio very loudly or run the water as in a spy movie when you're trying to pass secrets to someone in a room that's bugged. The best you can do is communicate in code to which your parents don't own the decryption key; usually this takes the form of memes. Or you take the communication over to Snapchat.

Another diseconomy of scale is the increasing returns to trolling. Facebook is more immune to this, thanks to its bi-directional friending model, than, say, Twitter, with its one-way follow model and public messaging framework. On Facebook, those wishing to sow dissension need to be a bit more devious, and as revelations from the last election showed, there are means to a person's heart, to reach them directly or indirectly, through confirmation bias and flattery. The Iago playbook from Othello. On Twitter, there's no need for such scheming; you can just nuke people from your keyboard without their consent.

All of this is to say I suspect many of Facebook's more fruitful vectors for rekindling its value for socializing lie in breaking up the surface area of its service. News Feed is so monolithic a surface as to be subject to all the diseconomies of scale of social networking, even as that very monolith makes it such an attractive advertising landscape.

The most obvious path to this is Groups, which can subdivide large graphs into ones more unified in purpose or ideology. Google+ was onto something with Circles, but since it hadn't actually achieved any scale, it was solving a problem it didn't yet have.

Instagram

Where is Instagram's invisible asymptote? This is one of the trickier ones to contemplate as it continues to grow without any obvious end in sight.

One of the advantages to Instagram is that it came about when Facebook was broadening its acceptable media types from text to photos and video. Instagram began with just square photos with a simple caption, no links allowed, no resharing.

This had a couple of advantages. One is that it's harder to troll or be insufferable in photos than it is in text. Photos tend to soften the edge of boasts and provocations. More people are skilled at being hurtful in text than in photos. Instagram has also tended to be more aggressive than other networks, especially Twitter, at policing the emotional tenor of its network, most recently turning its attention to trolls in its comment sections.

Of course photos are not immune to this phenomenon. The "look at my perfect life" boasting of Instagram is many people's chief complaint about the app and likely the primary driver of people feeling lousy after looking through their feed there. Still, outright antagonism is harder on Instagram, given it isn't an open public graph like Twitter. The one direct vector is comments, and Instagram is working on that issue.

In being a pure audio-visual network at a time when Facebook and most other networks were mixed-media, Instagram siphoned off many people for whom the best part of Facebook was just the photos and videos; again, we often, as with Twitter, over-estimate the product-market fit and TAM of text. If Facebook just showed photos and videos for a week, I suspect its usage would grow, but since it owns Instagram...

As with other social networks that grow, Instagram broadened its formats early on to head off several format-based asymptotes. Non-square photos and videos with gradually lengthening time limits have broadened the use cases and, more importantly, removed some level of production friction.

The move to copy Snapchat's Stories format was the next giant asymptote avoided. The precious nature of sharing on Instagram was a drag on posting frequency. Stories solves the supply-side issue for content several ways. One is that since it requires you to explicitly tap into viewing it from the home feed screen, it shifts the onus for viewing the content entirely to the audience. This frees the content creator from much of the guilt of polluting someone else's feed. The expiring nature of the content further removes another of a publisher's inhibitions about littering the digital landscape. It unlocked so much content that I now regularly fail to make it through more than a tiny fraction of Stories on Instagram. Even friends who don't publish a lot now often put their content in Stories rather than posting to the main feed.

The very format of Stories, with its full-screen vertical orientation, cues the user that this format is meant for the native way we hold our devices as smartphone photographers, rather than accommodating the more natural landscape way that audiences view the world, with eyes side-by-side in one's head. Stories includes accoutrements like gaudy stickers and text overlays and face filters that aren't in the toolset for Instagram's main feed photo/video composer, perhaps to preserve some aesthetic separation between the main feed and Stories.

There is a purity about Instagram which makes even its ads perfectly native: everything on the service is an audio-visual advertisement. I see people complain about the ad load in Instagram, but if you really look at your feed, it's always had an ad load of 100%.

I just opened my feed and looked through the first twenty posts, and I'd classify them all as ads: for how great my meal was, for beautiful travel destinations, for the exquisite craft of various photographers and cinematographers, for an actor's upcoming film, for Rihanna's new lingerie line or makeup drop, for an elaborate dish a friend cooked, for a current concert tour, for how funny someone is, for someone's gorgeous new headshot, and for a few sporting events and teams. And yes, a few of them were official Instagram ads.

I don't mean this in a negative way. One might lob this accusation at all social networks, but the visual nature of Instagram absorbs the signaling function of social media in the most elegant and unified way. For example, messaging apps consist of a lot of communication that isn't advertising. But that's exactly why a messaging app like Messenger isn't as lucrative an ad platform as Instagram is and will be. If ads weren't marked explicitly, and if they weren't so obviously from accounts I don't follow, it's not clear to me that they'd be so jarringly different in nature from all the other content in the feed.

The irony is that, as Facebook broadened its use cases and supported media types to continue to expand, the purity of Instagram may have made it a more scalable network in some ways.

Of course, every product or service has some natural ceiling. To take one example, messaging with other folks is still somewhat clunky on Instagram; it feels tacked on. Considering how much young people use Snapchat as a messaging app of choice, there's likely attractive headroom for Instagram here.

Rumors that Instagram is contemplating a separate messaging app make sense. It would be ironic if Instagram separated out the more broadcast nature of its core app from the messaging use case in two different apps before Snapchat did. As noted earlier, it feels as if Snapchat is constantly fighting to balance the messaging parts of its app with the more broadcast elements like Stories and Discover, and separate apps might be one way to solve that more effectively.

As with all social networks that are mobile-phone dominant, there are limits to what can be optimized for in a single app, when all you have to work with is a single rectangular phone screen. The mobile phone revolution forced a focus in design which created billions of dollars in value, but Instagram, like all phone apps, will run into the asymptote that is the limit of how much you can jam into one app.

Instagram has already had some experience in dealing with this conundrum, creating separate apps like Boomerang or Hyperlapse that keep a lid on the complexity of the Instagram app itself and bring additional composition techniques to the top level of one's phone. I often hear people counsel against launching separate apps because of the difficulty of getting adoption of even a single app, but that doesn't mean that separate apps aren't sometimes the most elegant way to deal with the spatial design constraints of mobile.

On Instagram, content is still largely short in nature, so longer narratives aren't common or well-supported. The very structure, oriented around a main feed algorithmically compiling a variety of content from all the accounts you follow, isn't optimized towards a deep dive into a particular subject matter or narrative like, say, a television or a streaming video app. The closest to long-form on Instagram is Live, but most of what I see of that is only loosely structured, resembling an extended selfie more than a considered narrative. Rather than pursue long-form narrative, it may be that a more on-brand way to tackle the challenge of lengthening usage of the app is better stringing together of existing content, similar to how Snapchat can aggregate content from one location into a feed of sorts. That can be useful for things like concerts and sporting events and breaking news events like natural disasters, protests, and marches.

In addition, perhaps there is a general limit to how far a single feed of random content arranged algorithmically can go before we suffer pure consumption exhaustion. Perhaps seeing curated snapshots from everyone will finally push us all to the breaking point of jealousy and FOMO and, across a large enough number of users, an asymptote will emerge.

However, I suspect we've moved into an age where the upper bound on vanity fatigue has shifted much higher, in a way that an older generation might find unseemly. Just as we've moved into a post-scarcity age of information, I believe we've moved into a post-scarcity age of identity as well. And in this world, it's more acceptable to be yourself and to leverage social media for maximal distribution of yourself, in a manner that tracks the fundamental shift in the topology of culture from a series of massive centralized hubs and spokes to a more uniform mesh.

A last possible asymptote relates to my general sense that massive networks like Facebook and Instagram will, at some point, require more structured interactions and content units (for example, a list is a structured content unit, as is a check-in) to continue scaling. Doing so always imposes some additional friction on the content creator, but the benefit is breaking one monolithic feed into more distinct units, allowing users to shift gears mentally by seeing and anticipating the structure, much like how a magazine is organized.

To fill gaps in a person's free time, an endless feed is like an endless jar of liquid, able to be poured into any crevice in one's schedule and flow of attention. To demand a person's time, on the other hand, is a higher order task, and more structured content seems to do better on that front. People set aside dedicated time to play games like Fortnite or to watch Netflix, but less so to browse feeds. The latter happens on the fly. But ambition in software-driven Silicon Valley is endless, and so at some point every tech company tries to obtain the full complement of Infinity Stones, whether by building them or buying them, like Facebook did with Instagram and WhatsApp.

Amazon's next invisible asymptote?

I started with Amazon, but it is worth revisiting as it is hardly done with its own ambitions. After having made such massive progress on the shipping fee asymptote, what other barriers to growth might remain?

On that same topic of shipping, the next natural barrier is shipping speed. Yes, it's great that I don't have to pay for shipping, but in time customer expectations inflate. Per Jeff's latest annual letter to shareholders:

One thing I love about customers is that they are divinely discontent. Their expectations are never static – they go up. It’s human nature. We didn’t ascend from our hunter-gatherer days by being satisfied. People have a voracious appetite for a better way, and yesterday’s ‘wow’ quickly becomes today’s ‘ordinary’. I see that cycle of improvement happening at a faster rate than ever before. It may be because customers have such easy access to more information than ever before – in only a few seconds and with a couple taps on their phones, customers can read reviews, compare prices from multiple retailers, see whether something’s in stock, find out how fast it will ship or be available for pick-up, and more. These examples are from retail, but I sense that the same customer empowerment phenomenon is happening broadly across everything we do at Amazon and most other industries as well. You cannot rest on your laurels in this world. Customers won’t have it.

Why only two-day shipping for free? What if I want my package tomorrow, or today, or right now?

Amazon has already been working on this problem for over a decade, building out a higher-density network of smaller distribution centers in place of its previous strategy of fewer, gargantuan distribution hubs. Drone delivery may have sounded like a joke when first announced on an episode of 60 Minutes, but it addresses the same problem, as does a strategy like Amazon lockers in local retail stores.

Another asymptote may be that while Amazon is great at being the site of first resort to fulfill customer demands for products, it is less capable when it comes to generating desire ex nihilo, the kind of persuasion typically associated more with a tech company like Apple or any number of luxury retailers.

At Amazon we referred to the dominant style of shopping on the service as spear-fishing. People come in, type a search for the thing they want, and 1-click it. In contrast, if you've ever gone to a mall with someone who loves shopping for shopping's sake, a clotheshorse for example, you'll see a method of shopping more akin to the gathering half of hunting and gathering. Many outfits are picked off the rack and gazed at, held up against oneself in a mirror, turned around and around in the hand for contemplation. Hands brush across racks of clothing, fingers feeling fabric in search of something unknown even to the shopper.

This is browsing, and Amazon's interface has only solved some aspects of this mode of shopping. If you have some idea what you want, similarities carousels can guide you in some comparison shopping, and customer reviews serve as a voice over your shoulder, but it all still feels somewhat utilitarian.

Amazon's first attempts at physical stores reflect this bias in its retail style. I visited an Amazon physical bookstore in University Village the last time I was in Seattle, and it struck me as the website turned into 3-dimensional space, just with a lot less inventory. Amazon Go sounds more interesting, and I can't wait to try it out, but again, its primary selling point is the self-serve, low-friction aspect of the experience.

When I think of creating desire, I think of my last and only visit to Milan, when a woman at an Italian luxury brand store talked me into buying a sportcoat I had no idea I wanted when I walked into the store. In fact, it wasn't even on display, so minimal was the inventory when I walked in.

She looked at me, asked me some questions, then went to the back and walked back out with a single option. She talked me into trying it on, then flattered me with how it made me look, as well as pointing out some of its most distinctive qualities. Slowly, I began to nod in agreement, and eventually I knew I had to be the man this sportcoat would turn me into when it sat on my shoulders.

This challenge isn't unique to Amazon. Tech companies in general have been mining the scalable ROI of machine learning and algorithms for many years now. More data, better recommendations, better matching of customer to goods, or so the story goes. But what I appreciate about luxury retail, or even Hollywood, is its skill for making you believe that something is the right thing for you, absent previous data. Seduction is a gift, and most people in technology vastly overestimate how much of customer happiness is solvable by data-driven algorithms while underestimating the ROI of seduction.

Netflix spent $1 million on a prize to improve its recommendation algorithms, and yet it's a daily ritual that millions of people stare at their Netflix home screen, scrolling around for a long time, trying to decide what to watch. It's not just Netflix, open any streaming app. The AppleTV, a media viewing device, is most often praised for its screensaver! That's like admitting you couldn't find anything to eat on a restaurant menu but the typeface was pleasing to the eye. It's not that data can't guide a user towards the right general neighborhood, but more than one tech company will find the gradient of return on good old seduction to be much steeper than they might realize.

Still, for Amazon, this may not be as dangerous a weakness as it would be for another retailer. Much of what Amazon sells is commodities, and desire generation can be offloaded to other channels, which then see customers leak to Amazon for fulfillment. Amazon's logistical and customer service supremacy is a devastatingly powerful advantage because it directly precedes and follows the act of payment in the shopping value chain, allowing it to capture almost all the financial return of commodity retail.

And, as Jeff's annual letter to shareholders has emphasized from the very first one, Amazon's mission is to be the world's most customer-centric company. One way to continue to find vectors for growth is to stay attached at the hip to the fickle nature of customer unhappiness, which customers are always quite happy to share under the right circumstances, one happy consequence of this age of outrage. There is such a thing as a price umbrella, but there's also one for customer happiness.

How to identify your invisible asymptotes

One way to identify your invisible asymptotes is to simply ask your customers. As I noted earlier, at Amazon we homed in on how shipping fees were a brake on our business simply by asking customers and non-customers.

Here's where the oft-cited quote from Henry Ford is brought up as an objection: “If I had asked people what they wanted, they would have said faster horses," he is reputed to have said. Like most truisms in business, it is snappy and lossy all at once.

True, it's often difficult for customers to articulate what they want. But what's missed is that they're often much better at pinpointing what they don't want or like. What you should hear when customers say they want a faster horse is not the literal request but the underlying complaint that travel by horse is too slow. The savvy product person can generalize that to the broader need of traveling more quickly, and that problem can be solved in any number of ways that don't involve cloning Secretariat or shooting their current horse up with steroids.

This isn't a foolproof strategy. Sometimes customers lie about what they don't like, and sometimes they can't even articulate their discontent with any clarity, but if you match their feedback with good analysis of customer behavior data and even some well-designed tests, you can usually land on a more accurate picture of the actual problem to solve.

A popular sentiment in Silicon Valley is that B2C businesses are more difficult product challenges than B2B because products and services for the business customer can be specified merely by talking to the customer, while the consumer market is inarticulate about its needs, per the Henry Ford quote. Again, that's only partially true, and many of the consumer companies I've been advising recently haven't pushed hard enough on understanding or empathizing with the objections of their non-adopters.

We speak often of the economics concept of the demand curve, but in product there is another form of demand curve, and that is the contour of the customers' demands of your product or service. How comforting it would be if it were flat, but as Bezos noted in his annual letter to shareholders, the arc of customer demands is long, but it bends ever upwards. It's the job of each company, especially its product team, to continue to be in tune with the topology of this "demand curve."

I see many companies spend time analyzing funnels and seeing who emerges out the bottom. As a company grows, though, and from the start, it's just as important to look at those who never make it through the funnel, or who jump out of it at the very top. The product-market fit gradient likely differs for each of your current and potential customer segments, and understanding how and why is a never-ending job.

When companies run focus groups on their products, they often show me the positive feedback. I'm almost invariably more interested in the folks who've registered negative feedback, though I sense many product teams find watching that material to be stomach-churning. Sometimes the feedback isn't useful in the moment; perhaps you have such strong product-market fit with a different cohort that the objections don't matter yet. Still, it's never not a bit of a prick to the ego.

However, all honest negative feedback forms the basis of some asymptote in some customer segment, even if the constraint isn't constricting yet. Even if companies I meet with don't yet have an idea of how to deal with a problem, I'm always curious to see if they have a good explanation for what that problem is.

One important sidenote on this topic is that I'm often invited to give product feedback, more than I can find time for these days. When I'm doing so in person, some product teams can't help but jump in as soon as I raise any concerns, just to show they've already anticipated my objections.

I advise just listening all the way through the first time, to hear the why of someone's feedback, rather than cutting them off. You'll never be there in person with each customer to talk them out of their reasoning; your product or service has to do that work. The batting average of product people who try to explain to their customers why they're wrong is...not good. It's a sure way to put them off of giving you feedback in the future, too.

Even absent external feedback, it's possible to train yourself to spot the limits to your product. One approach I've taken when talking to companies who are trying to achieve initial or new product-market fit is to ask them why every person in the world doesn't use their product or service. If you ask yourself that, you'll come up with all sorts of clear answers, and if you keep walking that road you'll find the borders of your TAM taking on greater and greater definition.

[It's true that you also need the flip side, an almost irrational positivity, to be able to survive the difficult task of product development, or to be an entrepreneur, but selection bias is such that most such people start with a surplus of optimism.]

Lastly, though I hesitate to share this, it is possible to avoid invisible asymptotes through sheer genius of product intuition. I balk for the same reason I cringe when I meet CEO's in the valley who idolize Steve Jobs. In many ways, a product intuition that is consistently accurate across time is, like Steve Jobs, a unicorn. It's so rare an ability that to lean entirely on it is far more dangerous and high risk than blending it with a whole suite of more accessible strategies.

It's difficult for product people to hear this because there's something romantic and heroic about the Steve Jobs mythology of creation, brilliant ideas springing from the mind of the mad genius and inventor. However, just to read a biography of Jobs is to understand how rare a set of life experiences and choices shaped him into who he was. Despite that, we've spawned a whole bunch of CEO's who wear the same outfit every day and drive their design teams crazy with nitpick design feedback as if the outward trappings of the man were the essence of his skill. We vastly underestimate the path dependence of his particular style of product intuition.

Jobs' gift is so rare that it's likely even Apple hasn't been able to replace it. It's not a coincidence that the Apple products that frustrate me the most right now are all the ones with "Pro" in the name. The MacBook Pro, with its flawed keyboard and bizarre Touch Bar (I'm still using the old 13" MacBook Pro with the old keyboard, hoping against hope that Apple will come to its senses before it becomes obsolete). The Mac Pro, which took on the unfortunately apropos shape of a trash can in its last incarnation and whose replacement still hasn't shipped, years later (I'm still typing this at home on an ancient cheese grater Mac Pro tower and ended up building a PC tower to run VR and to do photo and video editing). Final Cut Pro, which I learned on in film editing school, and which got zapped in favor of Final Cut X just when FCP was starting to steal meaningful share in Hollywood from Avid. The iMac Pro, which isn't easily upgradable but is great if you're a gazillionaire.

Pro customers are typically the ones with the most clearly specified needs and workflows. Thus, pro products are ones for which listening to customers articulate what they want is a reliable path to establishing and maintaining product-market fit. But that's not something Apple seems to enjoy doing, and so the missteps they've made along these lines are exactly the types of mistakes you'd expect of them.

[I was overjoyed to read that Apple's next Mac Pro is being built using extensive feedback from media professionals. It's disappointing that it won't arrive until 2019 now but at least Apple has descended from the ivory tower to talk to the actual future users. It's some of the best news out of Apple I've heard in forever.]

Live by intuition, die by it. It's not surprising that Snapchat, another company that lives by the product intuition of one person, stumbled with a recent redesign. That a company's strengths are its weaknesses is simply the result of tight adherence to methodology. Apple and Snapchat's deus ex machina style of handing down products also rid us of CD-ROM drives and produced the iPhone, AirPods, the camera-first messaging app, and the Story format, among many other breakthroughs which a product person could hang a career on.

Because products and services live in an increasingly dynamic world, especially those targeted at consumers, they aren't governed by the immutable, timeless truths of a field like mathematics. The reason I recommend a healthy mix of intuition informed by data and feedback is that most product people I know have a product view that is slower moving than the world itself. If they've achieved any measure of success, it's often because their view of some consumer need was the right one at the right time. Product-market fit as tautology. Selection bias in looking at these people might confuse some measure of luck with some enduring product intuition.

However, just as a VC might get lucky once with some investment and be seen as a genius for life (the returns to a single buff of a VC brand name are shockingly durable), a given person's product intuition might hit the right moment in history to create a smash hit; it's rare that a single person's frame will move in lockstep with that of the world. How many creatives are relevant for a lifetime?

This is one reason sustained competitive advantage is so difficult. In the long run, endogenous variance in the quality of product leadership in a company always seems to be in the negative direction. But perhaps we are too focused on management quality and not focused enough on exogenous factors. In "Divine Discontent: Disruption’s Antidote," Ben Thompson writes:

Bezos’s letter, though, reveals another advantage of focusing on customers: it makes it impossible to overshoot. When I wrote that piece five years ago, I was thinking of the opportunity provided by a focus on the user experience as if it were an asymptote: one could get ever closer to the ultimate user experience, but never achieve it:

[Stratechery disruption diagram 1]

In fact, though, consumer expectations are not static: they are, as Bezos memorably states, “divinely discontent”. What is amazing today is table stakes tomorrow, and, perhaps surprisingly, that makes for a tremendous business opportunity: if your company is predicated on delivering the best possible experience for consumers, then your company will never achieve its goal.

[Stratechery disruption diagram 2]

In the case of Amazon, that this unattainable and ever-changing objective is embedded in the company’s culture is, in conjunction with the company’s demonstrated ability to spin up new businesses on the profits of established ones, a sort of perpetual motion machine; I’m not sure that Amazon will beat Apple to $1 trillion, but they surely have the best shot at two.

Pattern recognition is the default operation mode of much of Silicon Valley and other fields, but it is almost always, by its very nature, backwards-looking. One can hardly blame most people for resorting to it because it's a way of minimizing blame, and the economic returns of the Valley are so amplified by the structural advantages of winners that even matching market beta makes for a comfortable living.

However, if consumer desires are shifting, it's always just a matter of time before pattern recognition leads to an invisible asymptote. One reason startups are often the tip of the spear for innovation in technology is that they can't rely on market beta to just roll along. Achieving product-market fit for them is an existential challenge, and they have no backup plans. Imagine an investor who has to achieve alpha to even survive.

Companies can stay nimble by turning over their product leaders, but as a product professional, staying relevant to the marketplace is a never-ending job, even if your own life is irreversible and linear. I find the best way to unmoor myself from my most strongly held product beliefs is to increase my inputs. Besides, the older I get, the more I've grown to enjoy that strange dance with the customer. Leading a partner in a dance may give you a feeling of control, but it's a world of difference from dancing by yourself.

One of my favorite Ben Thompson posts is "What Clayton Christensen Got Wrong" in which he built on Christensen's theory of disruption to note that low end disruption can be avoided if you can differentiate on user experience. It is difficult and perhaps even impossible to over-serve on that axis. Tesla came into the electric car market with a car that was way more expensive than internal combustion engine cars (this definitely wasn't low-end disruption), had shorter range, and required really slow charging at a time when very few public chargers existed yet.

However, Tesla got an interesting foothold because on another axis it really delivered. Yes, the range allowed for more commuting without having to charge twice a day, but more importantly, for the wealthy, it was a way to signal one's environmental consciousness in a package that was much, much sexier than the Prius, the previous electric car of choice of celebrities in LA. It will be hard for Tesla to continue to rely on that in the long run as the most critical dimension of user experience will likely evolve, but it's a good reminder that "user experience" is broad enough to encompass many things, some less measurable than others.

You can't overserve on user experience, Thompson argues; as a product person, I'd argue, in parallel, that it is difficult and likely impossible to understand your customer too deeply. Amazon's mission to be the world's most customer-centric company is inherently a long-term strategy because it is one with an infinite time scale and no asymptote to its slope.

In my experience, the most successful people I know are much more conscious of their own personal asymptotes at a much earlier age than others. They ruthlessly and expediently flush them out. One successful person I know determined in grade school that she'd never be a world-class tennis player or pianist. Another mentioned to me how, in their freshman year of college, they realized they'd never be the best mathematician in their own dorm, let alone in the world. Another knew a year into a job that he wouldn't be the best programmer at his company and so he switched over into management; he rose to become CEO.

By discovering their own limitations early, they are also quicker to discover vectors on which they're personally unbounded. Product development will always be a multi-dimensional problem, often frustratingly so, but reducing that dimensionality often costs so little that the exercise should be more widely employed.

This isn't to say a person needs to aspire to be the best at everything they do. I'm at peace with the fact that I'll likely always be a middling cook, that I won't win the Tour de France, and that I'm destined to be behind a camera and not in front of it. When it comes to business, however, and surviving in the ruthless Hobbesian jungle, where much more is winner-take-all than it once was, the idea that you can be whatever you want to be, or build whatever you want to build, is a sure path to a short, unhappy existence.

Remove the legend to become one

When I started my first job at Amazon.com, as the first analyst in the strategic planning department, I inherited the work of producing the Analytics Package. I capitalize the term because it was both a serious tool for making our business legible, and because the job of its production each month ruled my life for over a year.

Back in 1997, analytics wasn't even a real word. I know because I tried to look up the term, hoping to clarify just what I was meant to be doing, and I couldn't find it, not in the dictionary, not on the internet. You can age yourself by the volume of search results the average search engine returned when you first began using the internet in force. I remember when pockets of wisdom were hidden in eclectic newsgroups, when Yahoo organized a directory of the web by hand, and later when many Google searches returned very little, if not nothing. Back then, if Russians wanted to hack an election, they might have planted some stories somewhere in rec.arts.comics and radicalized a few nerds, but that's about it.

Though I couldn't find a definition of the word, it wasn't difficult to guess what it was. Some noun form of analysis. More than that, the Analytics Package itself was self-describing. Literally. It came with a single page cover letter, always with a short preamble describing its purpose, and then jumped into a textual summary of the information within, almost like a research paper abstract, or a Letter to Shareholders. I like to think Jeff Bezos' famous company policy, instituted many years later, banning Powerpoint in favor of written essays, had some origins in the Analytics Package cover letter way back when. The animating idea was the same: if you can't explain something in writing to another human, do you really understand it yourself?

My interview loop at Amazon ended with an hour with the head of recruiting at the time, Ryan Sawyer. After having gone through a gauntlet of interviews that included almost all the senior executives, and including people like Jeff Bezos and Joy Covey, some of the most brilliant people I've ever met in my life, I thought perhaps the requisite HR interview would be a letup. But then Ryan asked me to explain the most complex thing I understood in a way he'd understand. It would be good preparation for my job.

What was within the Analytics Package, that required a written explanation? Graphs. Page after page of graphs, on every aspect of Amazon's business. Revenue. Editorial. Marketing. Operations. Customer Service. Headcount. G&A. Customer sentiment. Market penetration. Lifetime value of a customer. Inventory turns. Usually four graphs to a page, laid out landscape.

The word Package might seem redundant if Analytics is itself a noun. But if you saw one of these, you knew why it was called a Package. When I started at Amazon in 1997, the Analytics Package was maybe thirty to forty pages of graphs. When I moved over to product management, over a year later, it was pushing a hundred pages, and I was working on a supplemental report on customer order trends in addition. Analytics might refer to a deliverable or the practice of analysis, but the Analytics Package was like the phone book, or the Restoration Hardware catalog, in its heft.

This was back in the days before entire companies focused on building internal dashboards and analytical tools, so the Analytics Package was done with what we might today consider as comparable to twigs and dirt in sophistication. I entered the data by hand into Excel tables, generated and laid out the charts in Excel, and printed the paper copies.

One of the worst parts of the whole endeavor was getting the page numbers in the entire package correct. Behind the Analytics Package was a whole folder of linked spreadsheets. Since different charts came from different workbooks, I had to print out an entire Analytics Package, get the ordering correct, then insert page numbers by hand in some obscure print settings menu. Needless to say, ensuring page breaks landed where you wanted them was like defusing a bomb.

Nowadays, companies hang flat screen TVs on the walls, all of them running 24/7 to display a variety of charts. Most everyone ignores them. The spirit is right, to be transparent all the time, but the understanding of human nature is not. We ignore things that are shown to us all the time. However, if once a month, a huge packet of charts dropped on your desk, with a cover letter summarizing the results, and if the CEO and your peers received the same package the same day, and that piece of work included charts on how your part of the business was running, you damn well paid attention, like any person turning to the index of a book on their company to see if they were mentioned. Ritual matters.

The package went to senior managers around the company. At first that was defined by your official level in the hierarchy, though, as most such things go, it became a source of monthly contention as to who to add to the distribution. One might suspect this went to my head, owning the distribution list, but in fact I only cared because I had to print and collate the physical copies every month.

I rarely use copy machines these days, but that year of my life I used them more than I will all the days that came before and all the days still to come, and so I can say with some confidence that they are among the least reliable machines ever made by mankind.

It was a game, one whose only goal was to minimize pain. A hundred copies of a hundred page document. The machine will break down at some point. A sheet will jam somewhere. The ink cartridge will go dry. How many collated copies do you risk printing at once? Too few and you have to go through the setup process again. Too many and you risk a mid-job error, which then might cascade into a series of ever more complex tasks, like trying to collate just the pages still remaining and then merging them with the pages that were already completed. [If you wondered why I had to insert page numbers by hand, it wasn't just for ease of referencing particular graphs in discussion; it was also so I could figure out which pages were missing from which copies when the copy machine crapped out.]

You could try just resuming the task after clearing the paper jam, but in practice it never really worked. I learned that copy machine jams on jobs of this magnitude were, for all practical purposes, failures from which the machine could not recover.

I became a shaman to all the copy machines in our headquarters at the Columbia building. I knew which ones were capable of this heavy duty task, how reliable each one was. Each machine's reliability fluctuated through some elusive alchemy of time and usage and date of the last service visit. Since I generally worked late into every night, I'd save the mass copy tasks for the end of my day, when I had the run of all the building's copy machines.

Sometimes I could sense a paper jam coming just by the sound of the machine's internal rollers and gears. An unhealthy machine would wheeze, like a smoker, and sometimes I'd put my hands on a machine as it performed its service for me, like a healer laying hands on a sick patient. I would call myself a copy machine whisperer, but when I addressed them it was always a slew of expletives, never whispered. Late in my tenure as analyst, I got budget to hire a temp to help with the actual printing of the monthly Analytics Package, and we keep in touch to this day, bonded by having endured that Sisyphean labor.

My other source of grief was another tool of deep fragility: linked spreadsheets in Excel 97. I am, to this day, an advocate for Excel, the best tool in the Microsoft Office suite, and still, if you're doing serious work, the top spreadsheet on the planet. However, I'll never forget the nightmare of linked workbooks in Excel 97, an idea which sounded so promising in theory and worked so inconsistently in practice.

Why not just use one giant workbook? Various departments had to submit data for different graphs, and back then it was a complete mess to have multiple people work in the same Excel spreadsheet simultaneously. Figuring out whose changes stuck, that whole process of diffs, was untenable. So I created Excel workbooks for all the different departments. Some of the data I'd collect myself and enter by hand, while some departments had younger employees with the time and wherewithal to enter and maintain the data for their organization.

Even with that process, much could go wrong. While I tried to create guardrails to preserve formulas linking all the workbooks, everything from locked cells to bold and colorful formatting to indicate editable cells, no spreadsheet survives engagement with a casual user. Someone might insert a column here or a row there, or delete a formula by mistake. One month, a user might rename a sheet, or decide to add a summary column by quarter where none had existed before. Suddenly a slew of #ERROR's show up in cells all over the place, or if you're unlucky, the figures remain, but they're wrong and you don't realize it.

Thus some part of every month was going through each spreadsheet and fixing all the links and pointers, reconnecting charts that were searching for a table that was no longer there, or more insidiously, that were pointing to the wrong area of the right table.

Even after all that was done, though, sometimes the cells would not calculate correctly. This should have been deterministic. That's the whole idea of a spreadsheet, that the only error should be user error. A cell in my master workbook would point at a cell in another workbook. They should match in value. Yet, when I opened both workbooks up, one would display 1,345 while the other would display 1,298. The button to force a recalculation of every cell was F9. I'd press it repeatedly. Sometimes that would do it. Sometimes it wouldn't. Sometimes I'd try Ctrl - Alt - Shift - F9. Sometimes I'd pray.

One of the only times I cried at work was late one night, a short time after my mom had passed away from cancer, my left leg in a cast from an ACL/MCL rupture, when I could not understand why my workbooks weren't checking out, and I lost the will, for a moment, to wrestle it and the universe into submission. This wasn't a circular reference, which I knew could be fixed once I pursued it to the ends of the earth, or at least the bounds of the workbook. No, this inherent fragility in linked workbooks in Excel 97 was a random flaw in a godless program, and I felt I was likely the person in the entire universe most fated to suffer its arbitrary punishment.

I wanted to leave the office, but I was too tired to go far on my crutches. No one was around that section of the office at that hour. I turned off the computer, turned out the lights, put my head down on my desk for a while until the moment passed. Then I booted the PC back up, opened the two workbooks, and looked at the two cells in question. They still differed. I pressed F9. They matched.

Most months, after I had finished collating all the copies of the Analytics Package, clipping each with a small, then later medium, and finally a large binder clip, I'd deliver most copies by hand, dropping them on each recipient's empty desk late at night. It was a welcome break to get up from my desk and stroll through the offices, maybe stop to chat with whoever was burning the midnight oil. I felt like a paper boy on his route, and often we'd be up at the same hour.

For all the painful memories that cling to the Analytics Package, I consider it one of the formative experiences of my career. In producing it, I felt the entire organism of our business laid bare before me, its complexity and inner workings made legible. The same way I imagine programmers visualizing data moving through tables in three dimensional space, I could trace the entire ripple out from a customer's desire to purchase a book, how a dollar of cash flowed through the entire anatomy of our business. I knew the salary of every employee, and could sense the cost of their time from each order as the book worked its way from a distributor to our warehouse, from a shelf to a conveyor belt, into a box, then into a delivery truck. I could predict, like a blackjack player counting cards in the shoe, what % of customers from every hundred orders would reach out to us with an issue, and what % of those would be about what types of issues.

I knew, if we gained a customer one month, how many of their friends and family would become new customers the next month, through word of mouth. I knew if a hundred customers made their first order in January of 1998, what % of them would order again in February, and March, and so on, and what the average basket size of each order would be. As we grew, and as we gained some leverage, I could see the impact on our cash flow from negotiating longer payable days with publishers and distributors, and I'd see our gross margins inch upwards every time we negotiated better discounts off of list prices.

What comfort to live in the realm of frequent transactions and normal distributions, a realm where the law of large numbers was the rule of law. Observing the consistency and predictability of human purchases of books (and later CDs and DVDs) each month was like spotting some crystal structure in Nature under a microscope. I don't envy companies like Snapchat or Twitter or Pinterest, social networks that have gone public or will likely have to someday, trying to manage investor expectations when their businesses are so large and yet still so volatile, their revenue streams even more so. It is fun to grow with the exponential trajectory of a social network, but not fun if you're Twitter trying to explain every quarter why you missed numbers again, and less fun when you have to pretend to know what will happen to revenue one quarter out, let alone two or three.

At Amazon, I could see our revenue next quarter to within a few percentage points of accuracy, and beyond. The only decision was how much to tell Wall Street we anticipated our revenue being. Back then, we always underpromised on revenue; we knew we'd overdeliver, the only question was how much we should do so and still maintain a credible sense of surprise on the next earnings call.

The depth of our knowledge of our own business continues to exceed that of any company I've worked at since. Much of the credit goes to Jeff for demanding that level of detail. No one can set a standard for accountability like the person at the top. Much credit goes to Joy and my manager Keith for making the Analytics Package one of the strategic planning department's central tasks. That Keith pushed me into the arms of Tufte changed everything. And still more credit belongs to all the people who helped gather obscure bits of data from all parts of the business, from my colleagues in accounting to those in every department in the company, many of whom built their own models for their own areas, maintaining and iterating them with a regular cadence because they knew every month I'd come knocking and asking questions.

I'm convinced that because Joy knew every part of our business as well as or better than almost anyone running them, she was one of those rare CFO's who could play offense in addition to defense. Almost every other CFO I've met hews close to the stereotype: always reining in spending, urging more fiscal conservatism, casting a skeptical eye on any bold financial transactions. Joy could do that better than the next CFO, but when appropriate she would urge us to spend more with a zeal that matched Jeff's. She, like many visionary CEO's, knew that sometimes the best defense is offense, especially when it comes to internet markets, with their pockets of winner-take-all contests, first mover advantages, and network effects.

It still surprises me how many companies don't help their employees understand the numeric workings of their business. One goes through orientation and hears about culture, travel policies, where the supply cabinet is, maybe some discussion of mission statements. All valuable, of course. But when was the last time any orientation featured any graphs on the business? Is it that we don't trust the numeracy of our employees? Do we fear that level of radical transparency will overwhelm them? Or perhaps it's a mechanism of control, a sort of "don't worry your little mind about the numbers" and just focus on your piece of the puzzle?

Knowing the numbers isn't enough in and of itself, but as books like Moneyball make clear, doing so can reveal hidden truths, unknown vectors of value (for example, in the case of Billy Beane and the Oakland A's, on base percentage). To this day, people still commonly talk about Amazon not being able to turn a profit for so many years as if it is some Ponzi scheme. Late one night in 1997, a few days after I had started, and about my third or fourth time reading the most recent edition of the Analytics Package cover to back, I knew our hidden truth: all the naysaying about Amazon's profitless business model was a lie. Every dollar of our profit we didn't reinvest into the business, and every dollar we didn't raise from investors to add to that investment, would be just kneecapping ourselves. The only governor of our potential was the breadth of our ambition.

***

What does this have to do with line graphs? A month or two into my job, my manager sent me to a seminar that passed through Seattle. It was a full day course centered around the wisdom in one book, taught by the author. The book was The Visual Display of Quantitative Information, a cult bestseller on Amazon.com, the type of long tail book that, in the age before Amazon, might have remained some niche reference book, and the author was Edward Tufte. It's difficult to conjure, on demand, a full list of the most important books I've read, but this is one.

My manager sent me to the seminar so I could apply the principles of that book to the charts in the Analytics Package. My copy of the book sits on my shelf at home, and it's the book I recommend most to work colleagues.

In contrast to this post, which has buried the lede so far you may never find it, Tufte's book opens with a concise summary of its key principles.

Excellence in statistical graphics consists of complex ideas communicated with clarity, precision, and efficiency. Graphical displays should

  • show the data
  • induce the viewer to think about the substance rather than about methodology, graphic design, the technology of graphic production, or something else
  • avoid distorting what the data have to say
  • present many numbers in a small space
  • make large data sets coherent
  • encourage the eye to compare different pieces of data
  • reveal the data at several levels of detail, from a broad overview to the fine structure
  • serve a reasonably clear purpose: description, exploration, tabulation, or decoration
  • be closely integrated with the statistical and verbal descriptions of a data set.

Graphics reveal data. Indeed graphics can be more precise and revealing than conventional statistical computations.

That's it. The rest of the book is just one beautiful elaboration after another of those first principles. The world in one page.

Of all the graphs, the line graph is the greatest. Of its many forms, the most iconic form, the one I used the most in the Analytics Package, has time as the x-axis and the dimension to be measured as the y-axis. Data trended across time.

One data point is one data point. Two data points, trended across time, tell a story. [I'm joking, please don't tell a story using just two data points] The line graph tells us where we've been, and it points to where things are going. In contemplating why the line points up or down, or why it is flat, one grapples with the fundamental mechanism of what's under study.

It wasn't until I'd produced the Analytics Package graphs for several months that my manager granted me the responsibility of writing the cover letter. It was a momentous day, but the actual task of writing the summary of the state of the business wasn't hard. By looking at each graph and investigating why each had changed in which way from last month to this, I had all the key points worth writing up. Building the graphs was more than half the battle.

So many of the principles in Tufte's book made their way into the Analytics Package. For example, where relevant, each page showed a series of small multiples, with the same scale on X and Y axes, back in the age before small multiples were a thing in spreadsheet programs.

Nowhere was Tufte's influence more felt than in our line graphs. How good can a line graph be? After all, in its components, a line graph is really simple. That's a strength, not a weakness. The advice here is simple, so simple, in fact, one might think all of it is common practice already. It isn't. When I see line graphs shared online, even those from some of the smartest people I follow, almost all of them adhere to very little of what I'm going to counsel.

Perhaps Tufte isn't widely read enough, his ideas not taught in institutions like business schools that require their students to use Excel. That may all be true, but I prefer a simpler explanation: users are lazy, the Excel line graph defaults are poor, and Excel is the most popular charting tool on the planet.

By way of illustration, let's take a data set, build a line graph in Excel, and walk through some of what I had to do when making the Analytics Package each month.

I couldn't find the raw data behind most charts shared online, and I didn't want to use any proprietary data. My friend Dan Wang pointed me at the Google Public Data Explorer, a lot of which seems to be built off the World Bank Data Catalog, from which I pulled some raw data, just to save me the time of making up figures.

I used health expenditure per capita (current US$). I picked eight countries and used the data for the full range of years available, spanning 1995-2014. I chose a mix of countries I've visited or lived in, plus some that people have spoken to me about in reference to their healthcare systems, but the important point here is that limiting the data series on a line graph matters if the graph is going to be readable. How many data series to include depends on what you want to study, how closely the lines cluster, and how large the spread is. Sometimes it's hard to anticipate until you produce the graph, but suffice it to say that generating a graph just to make one is silly if the result is illegible.

Here's what the latest version of Excel on my Mac produced when I pressed the line graph button after highlighting the data (oddly enough, I found a Recommended Charts dropdown button and the three graphs it recommended were three bar graphs of a variety of forms, definitely not the right choice here, one of many places where Excel's default logic is poor). I didn't do anything to this graph, just saved it directly as is, at the exact size and formatting Excel selected.

Not great. Applying Richard Thaler and Cass Sunstein's philosophy from Nudge, if we just improved the defaults in Excel and Powerpoint, graphical excellence the world over would improve by leaps and bounds. If someone out there works on the charting features in Excel and Powerpoint, hear my cries! The power to elevate the common man is in your hands. Please read Tufte.

As an aside, after the Tufte seminar, I walked up to him and asked what software he used for the graphics in his book. His response? Adobe Illustrator. To produce the results he wanted, he, and presumably his assistants, laid out every pixel by hand. Not that helpful for me in producing the Analytics Package monthly on top of my other job duties, but a comment on the charting quality in Excel that rings true even today.
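[If you'd rather script your charts than fight Excel or Illustrator, the same exercise takes a few lines of Python. The sketch below uses matplotlib with invented placeholder figures standing in for the World Bank series, just to show what the unedited default looks like before any Tufte-inspired cleanup.]

```python
# A minimal matplotlib sketch with invented placeholder figures standing in for
# the World Bank health expenditure series; this is the "default" starting point.
import matplotlib.pyplot as plt

years = list(range(1995, 2015))
spend = {
    "United States": [3700 + 230 * i for i in range(len(years))],
    "Japan":         [1500 + 80 * i for i in range(len(years))],
    "Greece":        [900 + 60 * i for i in range(len(years))],
    "China":         [20 + 15 * i for i in range(len(years))],
}

fig, ax = plt.subplots(figsize=(8, 5))
for country, values in spend.items():
    ax.plot(years, values, label=country)

ax.set_title("Health expenditure per capita (current US$)")
ax.legend()  # the default move, and the first thing the rest of this post removes
plt.show()
```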

Let's take my chart above and start editing it for the better, as I did back in my Analytics Package days. Let's start with some obvious problems:

  • The legend is nearly the same height as the graph itself
  • A lot of the lines are really close to each other
  • The figures in the left column could be made more readable with thousands comma separators
  • The chart needs a title

I expanded the graph inside the worksheet to make it easier to see (for some reason it defaulted to the size of about four postage stamps) and fixed the problems above. Here's that modified version.

Excel should add comma separators for thousands by default. The graph is somewhat better, but the labels are still really small, even if you click and expand the photo above to full size. In addition to adjusting the scale of labels and title, however, what else can we do to improve matters?

I began this post just wanting to share the following simple point, the easiest way to upgrade your Excel line graph:

Remove the legend.

That alone will make your line graph so much better that if it's the only thing you remember for the rest of your life, a generation of audiences will thank you.

The problem with a legend is that it asks the user to bounce their eyes back and forth from the graph to the legend, over and over, trying to hold what is usually some color coding system in their short-term memory.

Look at the chart above. Every time I have to see which line is which country, I have to look down to the legend and then back to the graph. If I decide to compare any two data series, I have to look back down and memorize two colors, then look back at the chart. Forget even trying to do it for three countries, or for all of them, which is the whole point of the line graph. In forcing the viewer to interpret your legend, your line graph has already dampened much of its explanatory efficiency.

If you have just two data series, a legend isn't egregious, but it is still inferior to removing the legend. Of course, removing the legend isn't enough.

Remove the legend and label the data series directly on the plot.

Unfortunately, here is where your work gets harder, because, much to my disbelief, there is no option in Excel to label data series in place in a line graph. The only automated option is to use the legend.

If I am wrong, and I would love nothing more than to be wrong, please let me know, but I tried going through every one of Excel's various chart menus, which can be brought up by right clicking on different hot spots on the chart, and couldn't find this option. That Excel forces you to right click on so many obscure hot spots just to bring up various options is bad enough. That you can't find this option at all among the dozens of other useless options is a travesty.

The only fix is to create data series labels by hand. You can find an Insert Text Box option somewhere in the Excel menus and ribbons and bars, so I'll make one for each data series and put them roughly in place so you know which data series is which. Next, the moment of truth.

Select the legend. And delete it.

Undo, then delete it again, just to feel the rush as your chart now expands in size to fill the white space left behind. Feels good.

Next, shrink the actual plot area of the graph by selecting it and opening up some margin on the right side of the graph for placing your labels if there isn't enough. Since people read time series left to right, and since the most recent data is on the far right, you'll want your labels to be there, where the viewers' eyes will flow naturally.

Don't move the labels into exact position just yet. First adjust the sizing of the axis labels and the scale of the graph. Unfortunately, since these text boxes are floating and not attached to the data series, every time the scale of your chart changes, you have to reposition all the data series labels by hand. So do that last.

I had not used this latest version of Excel before; the charting options seem even more complex than they used to be. To change the format of the labels on the x- and y-axes, you right click each axis and select Format Axis. I changed the y-axis text format to currency. But to change the size of the labels on each axis, you have to right click each and then select Font. That those are in separate menus is part of the Excel experience.

In expanding the font size of the x-axis, I decided it was too crowded so I went with every other year. I left aligned the data series labels and tried to position them as precisely as possible by eye. I seem to remember Excel used to allow selecting text boxes and moving them one pixel at a time with the arrow keys, but it didn't work for me, so you may have to find the object alignment dropdown somewhere and select align left for all your labels.
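[This is one place where a scripted charting tool spares you the pain: labels can be anchored to each line's final data point, so they follow the data if the scale changes. A minimal matplotlib sketch, again with invented placeholder figures rather than the real World Bank data:]

```python
# Sketch: no legend; each series labeled at its final data point, in its own color.
import matplotlib.pyplot as plt
from matplotlib.ticker import StrMethodFormatter

years = list(range(1995, 2015))
spend = {  # same invented placeholder figures as the earlier sketch
    "United States": [3700 + 230 * i for i in range(len(years))],
    "Japan":         [1500 + 80 * i for i in range(len(years))],
    "Greece":        [900 + 60 * i for i in range(len(years))],
    "China":         [20 + 15 * i for i in range(len(years))],
}

fig, ax = plt.subplots(figsize=(8, 5))
for country, values in spend.items():
    line, = ax.plot(years, values)
    # The label rides on the last data point, so it stays put if the scale changes.
    ax.annotate(country, xy=(years[-1], values[-1]),
                xytext=(5, 0), textcoords="offset points",
                va="center", color=line.get_color())

ax.set_xlim(years[0], years[-1] + 4)                           # room on the right for labels
ax.set_xticks(years[::2])                                      # every other year
ax.yaxis.set_major_formatter(StrMethodFormatter("${x:,.0f}"))  # currency, comma separators
ax.set_title("Health expenditure per capita (current US$)")
plt.show()
```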

Here's the next iteration of the chart.

You can click on it to see it larger. Already, we're better off than the Excel auto-generated chart by quite a margin. If this were the default, I'd be fairly happy. But there's room for improvement.

The use of color can be helpful, especially with lines that are closely stacked, but what about the color blind? If we were to stick with the coloring scheme, I might change the data series labels to match the color of each line. Again, since the labels are added by hand, you'd have to change each one manually to match the color scheme Excel had selected, and again, it wouldn't fix the issue for color blind viewers. [I didn't have the patience to do this for illustrative purposes, but you can see how matching the coloring of labels to the lines helps if you view this data in Google Data Explorer.]

In The Visual Display of Quantitative Information, Tufte uses very little color. When producing the Analytics Package, I was working with black and white printers and copy machines. Color was a no go, even if it provides an added dimension for your graphics, as for elevation on maps.

While color has the advantage of making it easier to distinguish between two lines which are close to each other, it introduces all sorts of mental associations that are difficult to anticipate and which may just be a distraction. When making a chart of, say, a U.S. Presidential Election, using blue for Democrats and red for Republicans is a good idea since the color scheme is widely agreed upon. When distinguishing between departments in your company, or product lines, arbitrary color choices can be noise, or worse, a source of contention (NSFW language warning).

The safer alternative is to use different line styles, regardless of whether your final deliverable is capable of displaying color. Depending on how many data series you have to chart, that may or may not be an option. I looked at the data series line format options, which are labeled Dash Type in this version of Excel, and found a total of eight options, or just enough for my example chart. It takes some work to assign options for maximum legibility; you should choose which country receives which style based on maximizing the contrast between lines that cluster.

After a random pass at that, the monochrome line graph looked like this.

No issues for color blind users, but we're stretching the limits of line styles past where I'm comfortable. To me, it's somewhat easier with the colored lines above to trace different countries across time versus each other, though this monochrome version isn't terrible. Still, this chart reminds me, in many ways, of the monochromatic look of my old Amazon Analytics Package, though it is missing data labels (wouldn't fit here) and has horizontal gridlines (mine never did).
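[If you're scripting, the monochrome treatment is a small change: keep the direct labels and swap color for line style. Another sketch on the same placeholder figures; the dash patterns are matplotlib's named styles plus custom dash tuples.]

```python
# Sketch: a black-and-white version, distinguishing series by line style, not color.
import matplotlib.pyplot as plt

years = list(range(1995, 2015))
spend = {  # same invented placeholder figures as before
    "United States": [3700 + 230 * i for i in range(len(years))],
    "Japan":         [1500 + 80 * i for i in range(len(years))],
    "Greece":        [900 + 60 * i for i in range(len(years))],
    "China":         [20 + 15 * i for i in range(len(years))],
}
# Named styles first, then custom (offset, (on, off, ...)) dash tuples for more series.
styles = ["solid", "dashed", "dashdot", "dotted", (0, (5, 1)), (0, (3, 1, 1, 1))]

fig, ax = plt.subplots(figsize=(8, 5))
for (country, values), style in zip(spend.items(), styles):
    ax.plot(years, values, color="black", linestyle=style)
    ax.annotate(country, xy=(years[-1], values[-1]),
                xytext=(5, 0), textcoords="offset points", va="center")

ax.set_xlim(years[0], years[-1] + 4)
ax.set_title("Health expenditure per capita (current US$)")
plt.show()
```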

We're running into some of these tradeoffs because of the sheer number of data series in play. Eight is not just enough, it is probably too many. Past some number of data series, it's often easier and cleaner to display these as a series of small multiples. It all depends on the goal and what you're trying to communicate.
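[A sketch of the small multiples layout, one panel per series with shared axes so the scales stay comparable, again on placeholder figures:]

```python
# Sketch: small multiples, one panel per country, with shared x and y axes.
import matplotlib.pyplot as plt

years = list(range(1995, 2015))
spend = {  # same invented placeholder figures as before
    "United States": [3700 + 230 * i for i in range(len(years))],
    "Japan":         [1500 + 80 * i for i in range(len(years))],
    "Greece":        [900 + 60 * i for i in range(len(years))],
    "China":         [20 + 15 * i for i in range(len(years))],
}

fig, axes = plt.subplots(2, 2, figsize=(9, 6), sharex=True, sharey=True)
for ax, (country, values) in zip(axes.flat, spend.items()):
    ax.plot(years, values, color="black")
    ax.set_title(country, fontsize=9)

fig.suptitle("Health expenditure per capita (current US$), placeholder data")
plt.tight_layout()
plt.show()
```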

At some point, no set of principles is one size fits all, and as the communicator you have to make some subjective judgments. For example, at Amazon, I knew that Joy wanted to see the data values marked on the graph, whenever they could be displayed. She was that detail-oriented. Once I included data values, gridlines were repetitive, and y-axis labels could be reduced in number as well.

Tufte advocates reducing non-data-ink, within reason, and gridlines are often just that. In some cases, if data values aren't possible to fit onto a line graph, I sometimes include gridlines to allow for easy calculation of the relative ratio of one value to another (simply count gridlines between the values), but that's an edge case.

For sharp changes, like an anomalous reversal in the slope of a line graph, I often inserted a note directly on the graph, to anticipate and head off any viewer questions. For example, in the graph above, if fewer data series were included, but Greece remained, one might wish to explain the decline in health expenditures starting in 2008 by adding a note in the plot area near that data point, noting the beginning of the Greek financial crisis (I don't know if that's the actual cause, but whatever the reason or theory, I'd place it there).

If we had company targets for a specific metric, I'd note those on the chart(s) in question as a labeled asymptote. You can never remind people of goals often enough.
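[Scripted, the anomaly note and the target asymptote might look like the sketch below. The series, the 2008 note, and the target value are all invented for illustration.]

```python
# Sketch: data values on the points, an anomaly note, and a target asymptote.
import matplotlib.pyplot as plt

years = list(range(1995, 2015))
# An invented series that rises and then reverses in 2008, to have something to annotate.
values = [900 + 120 * i for i in range(14)] + [2460 - 100 * i for i in range(1, 7)]

fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(years, values, color="black", marker="o", markersize=3)

# Data labels on every other point, which lets us drop the gridlines entirely.
for x, y in zip(years[::2], values[::2]):
    ax.annotate(f"{y:,.0f}", xy=(x, y), xytext=(0, 6),
                textcoords="offset points", ha="center", fontsize=7)

# A note for the anomalous reversal, placed right where the question will arise.
ax.annotate("2008: start of financial crisis?", xy=(2008, values[years.index(2008)]),
            xytext=(10, -30), textcoords="offset points",
            arrowprops=dict(arrowstyle="->"), fontsize=8)

# A target drawn as a labeled asymptote (the target value here is invented).
target = 2000
ax.axhline(target, color="gray", linestyle="dashed", linewidth=1)
ax.annotate("target", xy=(years[0], target), xytext=(0, 4),
            textcoords="offset points", fontsize=8)

ax.grid(False)
ax.set_ylim(0, max(values) * 1.15)  # start the y-axis at zero, per the recap below
ax.set_title("A single series with data labels, an anomaly note, and a target line")
plt.show()
```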

Just as an example, here's another version of that chart, with fewer data series, data labels, no gridlines, fewer y-axis labels. Also, since the lines aren't clustered together, we no longer need different line styles adding visual noise.

At that size, the data values aren't really readable, but if I were making a chart for Joy or Jeff, I'd definitely add the labels because I knew they'd want that level of detail. At Amazon, also, I typically limited our charts to rolling four or eight quarters, so we'd never have this many data points as on the graph above. Again, at some point you have to determine your audience and your goals and modify your chart to match.

Like a movie, work on a chart is a continuous process. I could generate a couple more iterations on the chart above for different purposes, but you get the idea. At some point you have to print it. Just as you'd add the end credits to a film, the last touch here would be to put a source for the data below the graph, so people can follow up on the raw data themselves.

Before I set off on this exercise, I didn't know much about health care expenditures per capita around the world, except that the United States is the world leader by a wide margin. The graph reveals that, and by what magnitude. Look at China by comparison. What explains China's low expenditures? I might hypothesize a number of reasons, including obvious ones like the huge population there, but it would take further investigation, and perhaps more charts. One reason the Analytics Package grew in time was that some charts beget further charts.

Why did Greece's expenditures per capita go into decline starting in 2008? Was it the financial crisis? Why has Japan reversed its upward trajectory starting in 2012? Should we include some other countries for comparison, and how might we choose the most illuminating set?

Every month that first year at Amazon, I'd spend most of my waking hours gathering figures and confirming their accuracy, producing these graphs, and then puzzling over the stories behind their contours. The process of making line graphs was a prelude to understanding.

To accelerate that understanding, upgrade your line graphs to be efficient and truthful. Some broadly applicable principles should guide you to the right neighborhood. To recap:

  • Don't include a legend; instead, label data series directly in the plot area. Usually labels to the right of the most recent data point are best. Some people argue that a legend is okay if you have more than one data series. My belief is that they're never needed on any well-constructed line graph.
  • Use thousands comma separators to make large figures easier to read
  • Related to that, never include more precision than is needed in data labels. For example, Excel often chooses two decimal places for currency formats, but most line graphs don't need that, and often you can round to 000's or millions to reduce data label size. If you're measuring figures in the billions and trillions, readers don't need to see all those zeroes; in fact, they make the chart harder to read.
  • Format axis labels to match the format of the figures being measured; if it's US dollars, for example, format the labels as currency.
  • Look at the spacing of axis labels and increase the interval if they are too crowded. As Tufte counsels, always reduce non-data-ink as much as possible without losing communicative power.
  • Start your y-axis at zero (assuming you don't have negative values)
  • Try not to have too many data series; five to eight seems the usual limit, depending on how closely the lines cluster. On rare occasion, it's fine to exceed this; sometimes the sheer volume of data series is the point, to show a bunch of lines clustered. These are edge cases for a reason, however.
  • If you have too many data series, consider using small multiples if the situation warrants, for example if the y-axes can match in scale across all the multiples.
  • Respect color blind users and those who may not be able to see your charts with color, for example on a black and white printout, and have options for distinguishing data series beyond color, like line styles. At Amazon, as I dealt with so many figures, I always formatted negative numbers to be red and enclosed in parentheses for those who wouldn't see the figures in color.
  • Include explanations for anomalous events directly on the graph; you may not always be there in person to explain your chart if it travels to other audiences.
  • Always note, usually below the graph, the source for the data.

Some other suggestions which are sometimes applicable:

  • Display actual data values on the graph if people are just going to ask what the figures are anyway, and if they fit cleanly. If you include data labels, gridlines may not be needed. In fact, they may not be needed even if you don't include data labels.
  • Include targets for figures as asymptotes to help audiences see if you're on track to reach them.

Why is The Visual Display of Quantitative Information such a formative text in my life? If it were merely a groundbreaking book on graphic excellence, it would remain one of my trusted references, sitting next to Garner's Modern American Usage, always within arm's reach. It wouldn't be a book I would push on those who never make graphs and charts.

The reason the book influenced me so deeply is that it is actually a book about the pursuit of truth through knowledge. It is ostensibly about producing better charts; what stays with you are the principles for general clarity of thought. Reading the book, chiseling away at my line graphs late at night, talking to people all over the company to understand what might explain each of them, gave me a path toward explaining the past and predicting the future. Ask anyone about any work of art they love, whether it's a book or a movie or an album, and it's never just about what it's about. I haven't read Zen and the Art of Motorcycle Maintenance; I'm guessing it wasn't written just for motorcycle enthusiasts.

A good line graph is a fusion of right and left brain, of literacy and numeracy. Just numbers alone aren't enough to explain the truth, but accurate numbers, represented truthfully, are a check on our anecdotal excesses, confirmation biases, tribal affiliations.

I'm reminded of Tufte's book whenever I brush against tendrils of many movements experiencing a moment online: rationalism, the Nate Silver/538 school of statistics-backed journalism, infographics, UX/UI/graphic design, pop economics, big history. And, much to my dismay, I'm reminded of the book most every time I see a line graph that could use some visual editing. Most people are lazy, most people use the defaults, and the defaults of the most popular charting application on the planet, Excel, are poor.

[Some out there may ask about Apple's Numbers. I tried it a bit, and while it's aesthetically cleaner than Excel, it's such a weak spreadsheet overall that I couldn't make the switch. I dropped Powerpoint for Keynote, though both have some advantages. Neither, unfortunately, includes a great charting tool, though they are simpler in function than the one in Excel. Google Sheets is, like Numbers, a really weak spreadsheet, and it's just plain ugly. If someone out there knows of a superior charting tool, one that doesn't require making charts in Illustrator like Tufte does, please let me know.] 

I love this exchange early on in Batman Begins between Liam Neeson's Ra's al Ghul (though he was undercover as Henri Ducard at the time) and Christian Bale's Bruce Wayne.

Bruce Wayne: You're vigilantes.
 
Henri Ducard: No, no, no. A vigilante is just a man lost in the scramble for his own gratification. He can be destroyed, or locked up. But if you make yourself more than just a man, if you devote yourself to an ideal, and if they can't stop you, then you become something else entirely.
 
Bruce Wayne: Which is?
 
Henri Ducard: Legend, Mr. Wayne.
 

It is absurdly self-serious in the way that nerds love their icons to be treated by mainstream pop culture, and I love it for its broad applicability. I've been known to drop some version of it in meetings all the time, my own Rickroll, but no one seems to find it amusing.

In this case, the passage needs some tweaking. But please do still read it with Liam Neeson's trademark gravitas.

A line graph is just another ugly chart lost in the scramble for its own gratification in a slide deck no one wants to read. It can be disregarded, forgotten. But if you make your graph more than just the default Excel format, if you devote yourself to Tufte's ideals, then your graph becomes something else entirely.

Which is?

A line graph without a legend. Remove the legend, Mr. Wayne, and become a legend.

My most popular posts

I recently started collecting email addresses using MailChimp for those readers who want to receive email updates when I post here. Given my relatively low frequency of posts these days, especially compared to my heyday when I posted almost daily, and given the death of RSS, such an email list may have more value than it once did. You can sign up for that list from my About page.

I've yet to send an email to the list successfully, but let's hope this post will be the first to go out that route. Given this would be the first post to that list, with perhaps some new readers, I thought it would be worth compiling some of my more popular posts in one place.

Determining what those are proved difficult, however. I never checked my analytics before, since this is just a hobby, and I realized when I went to the popular content panel on Squarespace that their data only goes back a month. I also don't have data from the Blogger or Movable Type eras of my blog stashed anywhere, and I never hooked up Google Analytics here.

A month's worth of data was better than nothing, as some of the more popular posts still get a noticeable flow of traffic each month, at least by my modest standards. I also ran a search on Twitter for my URL and used that as a proxy for social media popularity of my posts (and in the process, found some mentions I'd never seen before since they didn't include my Twitter handle; is there a way on Twitter to get a notification every time your domain is referenced?).

In compiling the list, I went back and reread these posts for the first time in ages and added a few thoughts on each.

  • Compress to Impress — my most recent post is the one that probably attracted most of the recent subscribers to my mailing list. I regret not including one of the most famous cinematic examples of rhetorical compression, from The Social Network, when Justin Timberlake's Sean Parker tells Jesse Eisenberg, "Drop the 'The.' Just Facebook. It's cleaner." Like much of the movie, probably made up (and also, why wasn't the movie titled just Social Network?), but still a good example of how movies almost always compress information into visually compact scenes. The reason people tend to like the book better than the movie adaptation in almost every case is that, like Jeff Bezos and his dislike of Powerpoint, people who see both the original and the compressed information flows feel condescended to and lied to by the latter. On the other hand, I could only make it through one and a half of the Game of Thrones novels, so I much prefer the TV show's compression of that story, even as I watch every episode with super fans who can spend hours explaining what I've missed, so it feels like I have read the books after all.
  • Amazon, Apple, and the beauty of low margins — one of the great things about Apple is that it attracts many strong, independent critics online (one of my favorites being John Siracusa). The others among the FAMGA tech giants (Facebook, Amazon, Microsoft, Google) don't seem to have as many dedicated fans/analysts/critics online. Perhaps it was that void that helped this post on Amazon from 2012 to go broad (again, by my modest standards). Being able to operate with low margins is not, in and of itself, enough to be a moat. Anyone can lower their prices, and more generally, any company should be wary of imitating another company's high-variance strategy, lest they forget all the others who did and went extinct (i.e., a unicorn is a unicorn because it's a unicorn, right?). Being able to operate with low margins with unparalleled operational efficiency, at massive scale globally, while delivering more SKUs in more shipments with more reliability and greater speed than any other retailer is a competitive moat. Not much has changed, by the way. Apple just entered the home voice-controlled speaker market with its announcement of the HomePod and is coming in from above, as expected, at $349, as the room under Amazon's price umbrella isn't attractive.
  • Amazon and the profitless business model fallacy — the second of my posts on Amazon to get a traffic spike. It's amusing to read some of the user comments on this piece and recall a time when every time I said anything positive about Amazon I'd be inundated with comments from Amazon shorts and haters. Which is the point of the post: people outside of Amazon really misunderstood the business model. The skeptics have largely quieted down nowadays, and maybe the shorts lost so much money that they finally went in search of weaker prey, but in some ways I don't blame the naysayers. Much of their misreading of Amazon is the result of GAAP rules, which really don't reveal enough to discern how much of a company's losses are due to investments in future businesses versus just aggressive depreciation of assets. GAAP rules leave a lot of wiggle room to manipulate your numbers to mask underlying profitability, especially when you have a broad portfolio of businesses munged together into single line items on the income statement and balance sheet. This doesn't absolve professional analysts who should know better than to ignore unit economics, however. Deep economic analysis isn't a strength of your typical tech beat reporter, which may explain the rise of tech pundits who can fill that gap. I concluded the post by saying that Amazon's string of quarterly losses at the time should worry its competitors more than it should assure them. That seems to have come to fruition. Amazon went through a long transition from having a few very large fulfillment centers to having many more, smaller ones distributed broadly, generally located near major metropolitan areas, to improve its ability to ship to customers more quickly and cheaply. Now that the shift has been completed for much of the U.S., you're seeing the power of the fully operational Death Star, or many tiny ones, so to speak.
  • Facebook hosting doesn't change things, the world already changed — the title feels clunky, but the analysis still holds up. I got beat up by some journalists over this piece for offering a banal recommendation for their malady (focus on offering differentiated content), but if the problem were so tractable it wouldn't be a problem.
  • The network's the thing — this is from 2015, and two things have come to mind since I wrote it.
    • As back then, Instagram has continued to evolve and grow, and Twitter largely has not. Twitter did stop counting user handles against character limits and tried to alter its conversation UI to be more comprehensible, but the UI's still inscrutable to most. The biggest change, to an algorithmic rather than reverse chronological timeline, was an improvement, but of course Instagram had beaten them to that move as well. The broader point is still that the strength of any network lies most in the composition of its membership, and in that, Twitter and other networks that have seen flattening growth, like Snapchat or Pinterest, can take solace. Twitter is the social network for infovores like journalists, technorati, academics, and intellectual introverts, and that's a unique and influential group. Snapchat has great market share among U.S. millennials and teens, Pinterest among women. It may be hard for them to break out of those audiences, but those are wonderfully differentiated audiences, and it's also not easy for a giant like Facebook to cater to particular audiences when its network is so massive. Scaling requires that a network reduce the surface area it exposes to each individual user, using strategies like algorithmic timelines, graph subdivision (e.g., subreddits), and personalization; otherwise networks run into reverse economies of scale in their user experience.
    • The other point that this post recalls is the danger of relying on any feature as a network moat. People give Instagram, Messenger, FB, and WhatsApp grief for copying Stories from Snapchat, but if a social network has to pin its future on a single feature, in an age when software features are trivial to replicate, that company has a dim future. The differentiator is how a network uses a feature to strengthen the bonds among its members, not the feature itself. Be wary of hanging your hat on the overnight success of a feature, the same way predators should be wary of mutations that offer temporary advantages over their prey. The Red Queen effect is real and relentless.
  • Tower of Babel — from earlier this year, and written at a time when I was quite depressed about a reversal in the quality of discourse online, and how the promise of connecting everyone via the internet had so quickly seemed to lead us all into a local maximum (minimum?) of public interaction. I'm still bullish on the future, but when the utopian dream of global connection runs into the reality of humanity's coalitional instincts and the resentment from global inequality, we've seen which is the more immovable object. Perhaps nothing expresses the state of modern discourse like waking up to see so many of my followers posting snarky responses to one of Trump's tweets. Feels good, accomplishes nothing; let's all settle for the catharsis of value signaling. I've been guilty of this, and we can do better.
  • Thermodynamic theory of evolution — actually, this isn't one of my most popular posts, but I'm obsessed with the second law of thermodynamics and exceptions to it in the universe. Modeling the world as information feels like something from the Matrix but it has reinvigorated my interest in the physical universe.
  • Cuisine and empire — on the elevation of food over music as a scarce cultural signal. I'll always remember this post because Tyler Cowen linked to it from Marginal Revolution. Signaling theory is perhaps one of the three most influential ideas to have changed my thinking in the past decade. I would not underestimate its explanatory power in the rise of Tesla. Elon Musk and team made the first car that allowed wealthy people to signal their environmental values without having to also send a conflicting signal about their taste in cars. It's one example where actually driving one of the uglier, less expensive EVs probably would send the stronger signal, whereas generally the more expensive and useless a signal is, the more effective it is.
  • Your site has a self-describing cadence — I'm fond of this one, though Hunter Walk has done so much more than anyone to point people to this post that I feel like I should grant him a perpetual license to call it his own. It still holds true: almost every service and product I use online trains me how often to return. The only unpleasant part of rereading this is realizing how my low posting frequency has likely trained my readers to never visit my blog anymore.
  • Learning curves sloping up and down — probably ranks highly only because I have such a short window of data from Squarespace to examine, but I do think that companies built for the long run have to maintain a sense of the slope of their organization's learning curve at all times, especially in technology, where the pace of evolution, and thus the frequency of existential decisions, is heightened.
  • The paradox of loss aversion — more tech markets than ever are winner-take-all because the internet is the most powerful and scalable multiplier of network effects in the history of the world. Optimal strategy in winner-take-all contests differs quite a bit from much conventional business strategy, so best recognize when you're playing in one.
  • Federer and the Paradox of Skill — the paradox of skill is a term I first learned from Michael Mauboussin's great book The Success Equation. This post applied it to Roger Federer, and if he seems more at peace recently, now that he's older and more evenly matched in skill to other top players, it may be that he no longer feels subject to the outsized influence of luck as he did when he was a better player. In Silicon Valley, with all its high-achieving, brilliant people, understanding the paradox of skill may be essential to not feeling jealous of every random person around you who fell into a pool of money. The Paradox of Skill is a cousin to the Red Queen effect, which I referenced above and which tech workers of the Bay Area should familiarize themselves with. It explains so much of the tech sector, but also just living in the Bay Area. Every week I get a Curbed newsletter, and it always has a post titled "What $X will get you in San Francisco" with a walkthrough of a recent listing that you could afford on that amount of monthly rent. Over time they've had to elevate the dollar amount just to keep things interesting, or perhaps because what $2900 can rent you in SF was depressing its readers.

Having had this blog going off and on since 2001, I only skimmed through a fraction of the archives, but perhaps at some point I'll cringe and crawl back further to find other pieces that still seem relevant.

Compress to impress

One of the funniest and most implausible things in movies is the grand speech by the general, usually the film's protagonist, in front of thousands of soldiers in the moments just before a critical battle. Examples abound, and the punch lines lodge in the memory, from Henry V ("We band of brothers") to Braveheart ("They will never take away...our freedom!") to Lord of the Rings: Return of the King ("There may come a day...but today is not that day!"). 

[Does this actually happen in real life? Did generals ride back and forth before the start of battles in the Civil War and give motivational speeches? I'm genuinely curious.]

The reason these scenes always strike me as absurd is that the character giving the speech is never using a megaphone or a microphone. The speech is almost always given outdoors, in the open air, so his voice carries for a radius of, what, thirty or forty feet? I imagine a soldier standing in the last row of the army about a mile away from the front lines bugging everyone around him, "What did he say? Can anyone hear?" and being shushed by everyone. Maybe only the first row or two of soldiers needs to hear the motivational speech because they're the first to run into a hail of bullets and arrows?

Even with modern communication infrastructure, however, any modern CEO deals with amplification and distortion issues with any message. Humans learn about this problem very early on by playing telephone or operator, or what I just learned is more canonically known outside the U.S. as Chinese whispers. One person whispers a message in another person's ear, and it's passed on down the line to see if the original phrase can survive intact to the last person in the chain. Generally, errors accumulate along the way and what makes it to the end is some shockingly defective copy of the original.
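
If you want to see how fast fidelity decays, here's a toy simulation in Python. The message, per-character error rate, and chain lengths are all invented for illustration, and real people garble messages in far more creative ways than random character swaps, but the shape of the decay is the point.

```python
import random

def telephone(message, hops, p_error=0.05, seed=42):
    """Pass a message down a chain of people; at each hop, each
    character is corrupted with probability p_error."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    msg = list(message)
    for _ in range(hops):
        msg = [rng.choice(alphabet) if rng.random() < p_error else ch
               for ch in msg]
    return "".join(msg)

original = "it is always day one"
for hops in (1, 5, 20):
    garbled = telephone(original, hops)
    intact = sum(a == b for a, b in zip(original, garbled)) / len(original)
    print(f"{hops:2d} hops: {garbled!r} ({intact:.0%} of characters survive)")
```

The chance the whole message arrives untouched falls off roughly as (1 - p) raised to the power of the number of hops times the number of characters, which is why both shortening the chain and shortening the message help so much. Keep that in mind for what follows.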

Despite learning this lesson early on, most people in leadership positions still underestimate just how pervasive this problem is. This is why any manager or executive is familiar with how much time they spend on communicating the same things to different groups in the organization. It feels like it's all you do sometimes, and yet you still encounter people who feel like they're in the dark.

I hadn't read Jeff Bezos' most recent letter to shareholders until today, but it was just what I'd expect of it given something I observed in my seven years there, which are now more than a decade in the rear view mirror. In fact, one of the reasons I hadn't read it yet was that I suspected it would be very familiar, and it was. The other thing I suspected was that it would be really concise and memorable, and again, it was.

I suspect that very early on in his career as CEO, Jeff noticed the Chinese whispers problem as the company scaled. Anyone who is lucky enough to lead a successful company very quickly senses the impossibility of scaling one's own time to all corners of the organization, but Jeff was laser focused on the more serious problem that scale presented: maintaining consistent strategy in all the important decisions, many of which were made outside his purview each day. At scale, maintaining strategic alignment feels like an organizational design problem, but much of the impact of organizational design comes from how it shapes information flow.

This problem is made more vexing not just by the telephone game issue but by the inability of humans to carry around a whole lot of directives in their heads. Jeff could spend a ton of time in All Hands meetings or with his direct reports and other groups inside Amazon, explaining his thinking in excruciating detail and hoping it sank in, but then he'd never have any time to do anything else.

Thankfully, humans have developed ways to ensure the integrity of messages persists across time when transmitted through the lossy mediums of oral tradition and hierarchical organizations.

One of these is to encode your message in a very distinctive format. There are many rhetorical tricks that have stood the test of time, like alliteration or anadiplosis. Perhaps supreme among these rhetorical forms is verse, especially when it rhymes. Both the rhythm and the rhyme (alliteration intentional) allow humans to compress and recall a message with greater accuracy than prose.

Fe fi fo fum, I smell the blood of an Englishman.
 

It's thought that bards of old could recite epics like Homer's Odyssey entirely from memory because the stories were in verse form (and through the use of memorization tricks like memory palaces and visual encoding). I don't know many people who can recite any novels from memory, but I've occasionally run across someone who can recite a long poem by heart. That's the power of verse.

It might be impossible to recite The Great Gatsby by memory regardless of what heuristics you employed, but it would certainly be easier if it were written by Dr. Seuss.

I do not like them,
Sam-I-am.
I do not like
Green eggs and ham.
 

I never chatted with Bezos about this, so I don't know if it was an explicit strategy on his part, but one of his great strengths as a communicator was the ability to encode the most important strategies for Amazon in very concise and memorable forms.

Take one example: "Day 1." I don't know when he first said this to the company, but it was repeated endlessly all my years at Amazon. It's still Day 1. Jeff has even named one of the Amazon buildings Day 1. In fact, I bet most of my readers know what Day 1 means, and Jeff doesn't even bother explaining what Day 1 is at the start of his letter to shareholders, so familiar is it to all followers of the company. Instead, he jumps straight into talking about how to fend off Day 2, which he doesn't even need to define because we can all probably infer it from the structure of his formulation, but he does so anyway.

Day 2 is stasis. Followed by irrelevance. Followed by excruciating, painful decline. Followed by death. And that is why it is always Day 1.
 

An entire philosophy, packed with ideas, compressed into two words. Day 1.

He then jumps into some of the strategies to fend off Day 2. The first is also familiar to everyone at Amazon, and many outside Amazon: customer obsession. Plenty of companies say they are customer focused, but Jeff articulates why he chose it from among the many possibilities he could have chosen for the company, giving it a level of oppositional definition that would otherwise be lacking.

There are many ways to center a business. You can be competitor focused, you can be product focused, you can be technology focused, you can be business model focused, and there are more. But in my view, obsessive customer focus is by far the most protective of Day 1 vitality.
 
Why? There are many advantages to a customer-centric approach, but here’s the big one: customers are always beautifully, wonderfully dissatisfied, even when they report being happy and business is great. Even when they don’t yet know it, customers want something better, and your desire to delight customers will drive you to invent on their behalf. No customer ever asked Amazon to create the Prime membership program, but it sure turns out they wanted it, and I could give you many such examples.
 

The second strategy to ward off stagnation is a newer codification (at least to me) of a principle he hammered home in other ways when I was there: resist proxies. 

As companies get larger and more complex, there’s a tendency to manage to proxies. This comes in many shapes and sizes, and it’s dangerous, subtle, and very Day 2.
 
A common example is process as proxy. Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing the process right. Gulp. It’s not that rare to hear a junior leader defend a bad outcome with something like, “Well, we followed the process.” A more experienced leader will use it as an opportunity to investigate and improve the process. The process is not the thing. It’s always worth asking, do we own the process or does the process own us? In a Day 2 company, you might find it’s the second.
 

There are many ways one could have named this principle, but this one is just novel and pithy enough to be distinctive, and from now on I'll likely refer to this principle as he formulated it: resist proxies.

The next principle is the one that needs the most work: embrace external trends. It doesn't really roll off the tongue, or lodge in the memory. This is also universal enough an idea that someone has likely already come up with some exceptional aphorism for it; some of you may have one on the tip of your tongue. It may be that it's just too generic to be worth the effort of staking a claim to with a unique turn of phrase.

The last principle I also remember from my Amazon days: high-velocity decision making (inside of it is another popular business aphorism: disagree and commit). This could be named "ready fire aim" or "if you don't commit, you've basically quit" or "if you don't really know, just pick and go" or something like that, but "high-velocity" is distinctive in its own sense. It's an adjective that sounds more at home in physics or in describing some sort of ammunition than it does in a corporate environment, and that helps an otherwise simple principle stand out.

Go back even further, and there are dozens of examples of Bezos codifying key ideas for maximum recall. For example, every year I was at Amazon had a theme (reminiscent of how David Foster Wallace imagined in Infinite Jest that in the future corporate sponsors could buy the rights to name years). These themes were concise and memorable ways to help everyone remember the most important goal of the company that year.

One year, when our primary goal was to grow our revenue and order volume as quickly as possible to achieve the economies of scale that would capitalize on our high fixed cost infrastructure investments and put wind into our flywheel, the theme was "Get Big Fast Baby." You can argue whether the "baby" at the end was necessary, but I think it's more memorable with it than without. Much easier to remember than "Grow revenues 80%" or "achieve economies of scale" or something like that.

Another time, as we looked out and saw the $1B revenue milestone approaching, one of Jeff's chief concerns was whether our company's processes could scale to handle that volume of orders without breaking (I'll write another time about the $1B revenue scaling phenomenon). To head off any such stumbles, we set aside an entire year at the company for GOHIO. It stood for "Getting our house in order".

As the first analyst in the strategic planning group, I produced an order volume projection for $1B in revenue and also generated forecasts for other metrics of relevance for every group in the company. For example, the customer service department would have to handle a higher volume of customer contacts, and the website would have to handle a greater traffic load.

Every group had that year of GOHIO to figure out how to scale to handle that volume without just linearly scaling its headcount and/or spending. If every group were just growing their headcount and costs linearly with order volume, our business model wouldn't work. The exercise was intended to find those processes that would break at such theoretical load and begin the work of finding where the economies of scale lay. An example was building customer self-service mechanisms to offload the most common customer service inquiries like printing return labels.

I could continue on through the years, but what stands out is that I can recite these from memory even now, over a decade later, and so could probably everyone who worked at Amazon those years.

Here's a good test of how strategically aligned a company is. Walk up to anyone in the company in the hallway and ask them if they know what their current top priority or mission is. Can they recite it from memory?

What Jeff understood was the power of rhetoric. Time spent coming up with the right words to package a key concept in a memorable way was time well spent. People fret about what others say about them when they're not in the room, but Jeff was solving the issue of getting people to say what he'd say when he wasn't in the room.

It was so important to him that we even had company-wide contests to come up with the most memorable ways to name our annual themes. One year Jeff announced at an All Hands meeting that someone I knew, Barnaby Dorfman, had won the contest. Jeff said the prize was that he'd buy something off the winner's Amazon wish list, but after pulling Barnaby's wish list up in front of the whole company on the screen, he said he didn't think any of the items was good enough, so he went over to the product page for image stabilized binoculars from Canon, retailing for over $1000, and bought those instead.

I have a list of dozens of Jeff sayings filed away in memory, and I'm not alone. It's one reason he's one of the world's most effective CEO's. What's particularly impressive is that Jeff is so brilliant that it would be easy for him to state his thinking in complex ways that we mere mortals wouldn't grok. But true genius is stating the complex simply.

Ironically, Jeff employs the reverse of this for his own information inflows. It's well known that he banned Powerpoint at Amazon because he was increasingly frustrated by the lossy nature of that medium. As Edward Tufte has long railed against, Powerpoint encourages people to reduce their thinking to a series of bullet points. Whenever someone would stand up in front of Jeff to present, Jeff would have rifled through to the end of the presentation before they'd finished a handful of slides, and he'd jump in and start asking questions about slide 35 while someone was still talking to slide 3.

As a hyper intelligent person, Jeff didn't want lossy compression or lazy thinking, he wanted the raw feed in a structured form, and so we all shifted to writing our arguments out as essays that he'd read silently in meetings. Written language is a lossy format, too, but it has the advantage of being less forgiving of broken logic flows than slide decks.

To summarize, Jeff's outbound feedback was carefully encoded and compressed for maximum fidelity of transmission across hundreds of thousands of employees all over the world, but his inbound data feed was raw and minimally compressed. In structure, this pattern resembles what a great designer or photographer does. Find the most elegant and stable output from a complex universe of inputs.

One of the great advantages of identifying and codifying first principles is how little maintenance they need. Write once, remember forever. As testament to that, every year, Bezos ends his Letter to Shareholders the same way.

As always, I attach a copy of our original 1997 letter. It remains Day 1.
 

It's his annual mic drop. Shareholders must feel so secure with their Amazon shares. Bezos is basically saying he figured out some enduring principles when he started his company, and they're so universal and stable that he doesn't have much to add some twenty years later except to point people back at his first letter to shareholders.

Other CEO's and leaders I've encountered are gifted at this as well ("Lean in," "Yes we can," "Move fast and break things," "Innovation is saying no to a thousand things," "Just do it," "I have a dream"), but I gravitate to those from Jeff because I saw them arise from distinct needs in the moment, and not just for notoriety's sake. As such, it's a strategy applicable to more than just philosophers and CEO's. [Sometime I'll write about some of the communication strategies of Steve Jobs, many of which can be gleaned from his public keynotes. He was an extremely skilled and canny communicator, and in many ways an underrated one.]

Tyler Cowen named his latest book The Complacent Class. It's a really thought-provoking read, but the alliteration in the title helps. Now economists everywhere are referring to a broad set of phenomena by the term "complacent class." It wouldn't be nearly as memorable if called Complacent People or The Dangers of Self-Satisfaction. Can you name the subtitle of the book? It's "the self-defeating quest for the American Dream" but no one remembers that part.

Venkatesh Rao once wrote a memorable post about management principles encoded in the American version of the TV show The Office. Anyone familiar with the post probably remembers it by the first part of its title: "The Gervais Principle." Very few, I'd suspect, remember the rest of the title—"Or The Office according to The Office"—though it does employ a clever bit of word repetition.

Whatever you think of Hillary Clinton as compared to Donald Trump as Presidential candidates, I'd venture that more people can recite Trump's mantra—Make America Great Again—than Clinton's. I don't know if she had a slogan, or if she did, I don't remember what it was. Her most memorable turn of phrase from the campaign trail was probably "then deal me in" at the end of a much longer phrase: "if fighting for women's healthcare and paid family leave and equal pay is playing the woman card, then deal me in." It's difficult to think of a phrase more emblematic of her problems in articulating what she stood for. The first part of the sentence is long and wonky, and I couldn't recall it from memory, and she never followed up on the second part enough.

If she'd used it repeatedly in a speech, it could have been a form of epistrophe like Obama's "Yes we can" or Martin Luther King's "I have a dream." Imagine if she had an entire speech where she kept hammering on what other cards she wanted to deal. "If ensuring that everyone in the country has an equal opportunity to reasonable healthcare is playing the [?] card then deal me in. If ensuring everyone in this country has the right to a good education is playing the [?] card then deal me in." And so on. But she would only use it once in a while, or once in a speech, whereas Obama had entire speeches where he would circle back to "Yes we can" again and again. [Maybe there isn't an equivalent to "woman card" that makes this epistrophe scalable but the broad point about her weak use of rhetoric holds.]

That's not to say "Make America Great Again" is some slogan for the ages, but it is succinct and has a loose bit of trochaic meter (MAKE ah-MERIC-uh GREAT a-GAIN) that grants it a sense of emphatic energy, which all political movements need. His supporters compressed it into #MAGA, which became a more liquid shorthand for social media. In general it seems the populist backlash and the alt-right are stronger at such rhetorical tricks than the Democrats or the left, but perhaps it is bred of necessity from being the opposition party?

Rhetoric can get a bad name because some lump it in with other language tricks like those used in clickbait titles. "You won't believe what happened next" or "This will restore your faith in humanity" or "ten signs you're a Samantha." Those aren't ways for making something stick, those are ways for making someone click. [Quiz: what rhetorical techniques were used in that last sentence?] Rhetoric isn't inherently good or bad; it can be used for ideas both inspiring and appalling.

There will come a day when you'll come up with some brilliant theory or concept and want it to spread and stick. You want to lay claim to that idea. It's then that you'll want to set aside some time to state it distinctively, even if you're not a gifted rhetorician. A memorable turn of phrase need not incorporate sophisticated techniques like parataxis or polysyndeton. Most everyone in tech is familiar with Marc Andreessen's "software is eating the world" and Stewart Brand's "information wants to be free." Often mere novelty is enough to elevate the mundane. You've spent all that time cooking your idea, why not spend an extra few moments plating it? It all tastes the same in your mouth but one dish will live on forever in an Instagram humblebrag pic.

If you're stuck and need some help, I highly recommend the delightful book The Elements of Eloquence: Secrets of the Perfect Turn of Phrase, whose title I remembered as simply Eloquence, which might, come to think of it, be the more memorable title.