Invisible asymptotes

"It is said that if you know your enemies and know yourself, you will not be imperiled in a hundred battles; if you do not know your enemies but do know yourself, you will win one and lose one; if you do not know your enemies nor yourself, you will be imperiled in every single battle." - Sun Tzu

My first job at Amazon was as the first analyst in strategic planning, the forward-looking counterpart to accounting, which records what already happened. We maintained several time horizons for our forward forecasts, from granular monthly forecasts to quarterly and annual forecasts to even five and ten year forecasts for the purposes of fund-raising and, well, strategic planning.

One of the most difficult things to forecast was our adoption rate. We were a public company, though, and while Jeff would say, publicly, that "in the short run, the stock market is a voting machine, in the long run, it's a scale," that doesn't provide any air cover for strategic planning. It's your job to know what's going to happen in the future as best you can, and every CFO of a public company will tell you that they take the forward guidance portion of their job seriously. Because of information asymmetry, analysts who cover your company depend quite a bit on guidance on quarterly earnings calls to shape their forecasts and coverage for their clients. It's not just that giving the wrong guidance might lead to a correction in your stock price but that it might indicate that you really have no idea where your business is headed, a far more damaging long-run reveal.

It didn't take long for me to see that our visibility out a few months, quarters, and even a year was really accurate (and precise!). What was more of a puzzle, though, was the long-term outlook. Every successful business goes through the famous S-curve, and most companies, and their investors, spend a lot of time looking for that inflection point towards hockey-stick growth. But just as important, and perhaps less well studied, is that unhappy point later in the S-curve, when you hit a shoulder and experience a flattening of growth.

One of the huge advantages for us at Amazon was that we always had a fairly good proxy for our total addressable market (TAM). It was easy to pull the statistics for the size of the global book market. Just as a rule of thumb, one could say that if we took 10% of the global book market it would mean our annual revenues would be X. One could be really optimistic and say that we might even expand the TAM, but finance tends to be the conservative group in the company by nature (only the paranoid survive and all that).

When I joined Amazon I was thrown almost immediately into working with a bunch of MBAs on business plans for music, video, packaged software, magazines, and international. I came to think of our long-term TAM as a straightforward layer cake of different retail markets.
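
To make the layer-cake arithmetic concrete, here's a minimal sketch of that kind of TAM model in Python. The market sizes and share assumptions are entirely invented for illustration; they are not Amazon's actual figures.

```python
# A toy layer-cake TAM model. All numbers are invented, not Amazon's actuals.
layers = [
    # (retail market, global market size in $B, assumed long-run share)
    ("books",    80, 0.10),
    ("music",    40, 0.05),
    ("video",    55, 0.05),
    ("software", 60, 0.03),
]

for market, size, share in layers:
    print(f"{market:>8}: {share:.0%} of ${size}B = ${size * share:.1f}B")

total = sum(size * share for _, size, share in layers)
print(f"   total: ${total:.2f}B in annual revenue at assumed penetration")
```

Each new business plan added another layer to the cake; the harder question was the rate at which we'd climb toward those assumed shares.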

Still, the gradient of adoption was somewhat of a mystery. I could, in my model, understand that one side of it was just exposure. That is, we could not obtain customers until they'd heard of us, and I could segment all of those paths of exposure into fairly reliable buckets: referrals from affiliate sites (we called them Associates), referrals from portals (AOL, Excite, Yahoo, etc.), and word-of-mouth (this was pre-social networking but post-email so the velocity of word-of-mouth was slower than it is today). Awareness is also readily trackable through any number of well-tested market research methodologies.

Still, for every customer who heard of Amazon, how could I forecast whether they'd make a purchase or not? Why would some people use the service while others decided to pass?

For so many startups and even larger tech incumbents, the point at which they hit the shoulder in the S-curve is a mystery, and I suspect the failure to see it coming occurs much earlier. The good thing is that identifying the enemy sooner allows you to address it. We focus so much on product-market fit, but once companies have achieved some semblance of it, most should spend much more time on the problem of product-market unfit.

For me, in strategic planning, the challenge in building my forecast was to flush out what I call the invisible asymptote: a ceiling that our growth curve would bump its head against if we continued down our current path. It's an important concept to understand for many people in a company, whether a CEO, a product person, or, as I was back then, a planner in finance.

Amazon's invisible asymptote

Fortunately for Amazon, and critically for much of its growth over the years, perhaps the single most important asymptote was one we identified very early on. Whether our growth would flatten if we did not change our path came down, in large part, to this single factor.

We had two ways we were able to flush out this enemy. For people who did shop with us, we had, for some time, a pop-up survey that would appear right after you'd placed your order, at the end of the shopping cart process. It was a single question, asking why you didn't purchase more often from Amazon. For people who'd never shopped with Amazon, we had a third-party firm conduct a market research survey in which we asked why they did not shop with Amazon.

Both converged, without any ambiguity, on one factor. You don't even need to rewind to that time to remember what that factor is because I suspect it's the same asymptote governing e-commerce and many other related businesses today.

Shipping fees.

People hate paying for shipping. They despise it. It may sound banal, even self-evident, but understanding that was, I'm convinced, so critical to much of how we unlocked growth at Amazon over the years.

People don't just hate paying for shipping; they hate it to a literally irrational degree. We know this because our first attempt to address this was to show, in the shopping cart and checkout process, that even after paying shipping, customers were saving money over driving to their local bookstore to buy a book because, at the time, most Amazon customers did not have to pay sales tax. That wasn't even factoring in the cost of getting to the store, the depreciation costs on the car, and the value of their time.
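
A toy version of that checkout math, with hypothetical numbers standing in for the figures we actually showed:

```python
# Toy version of the "you're still saving money" checkout comparison.
# Every number here is hypothetical, for illustration only.
book_price   = 25.00
shipping_fee = 3.99    # the Amazon order; most customers paid no sales tax then
sales_tax    = 0.085   # a typical local rate at a physical bookstore
trip_cost    = 2.50    # gas and parking, ignoring depreciation and time

amazon_total = book_price + shipping_fee
store_total  = book_price * (1 + sales_tax) + trip_cost

print(f"Amazon:    ${amazon_total:.2f}")  # ~$28.99
print(f"Bookstore: ${store_total:.2f}")   # ~$29.63 -- higher, yet shipping felt worse
```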

People didn't care about this rational math. People, in general, are terrible at valuing their time, perhaps because, for most people, monetary compensation for their time is so detached from the moment of spending it. Most time we spend isn't like deliberate practice, with immediate feedback.

Wealthy people tend to receive a much more direct and immediate payoff for their time, which is why they tend to be better about valuing it. This is why the first thing that most ultra-wealthy people I know do upon becoming ultra-wealthy is to hire a driver and start to fly private. For most normal people, the opportunity cost of their time is far more difficult to ascertain moment to moment.

You can't imagine what a relief it is to have a single overarching obstacle to focus on as a product person. It's the same for anyone trying to solve a problem. Half the comfort of diets that promise huge weight loss in exchange for cutting out sugar or carbs or whatever is feeling like there's a really simple solution or answer to a hitherto intractable, multi-dimensional problem.

Solving people's distaste for paying shipping fees became a multi-year effort at Amazon. Our next crack at this was Super Saver Shipping: if you placed an order of $25 or more of qualified items, which included mostly products in stock at Amazon, you'd receive free standard shipping.

The problem with this program, of course, was that it caused customers to reduce their order frequency, waiting until their orders qualified for the free shipping. In select cases, forcing customers to minimize consumption of your product or service is the right long-term strategy, but this wasn't one of those cases.

That brings us to Amazon Prime. This is a good time to point out that shipping physical goods isn't free. Again, self-evident, but it meant that modeling Amazon Prime could lead to widely diverging financial outcomes depending on what you thought it would do to the demand curve and average order composition.
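
Here's a minimal sketch of why those models diverged, with every parameter invented: the same membership fee looks sensible or ruinous depending entirely on what you assume about the lift in order frequency and the per-order economics.

```python
# Why modeling Prime produced diverging outcomes: the answer hinges on the
# assumed lift in order frequency. All parameters below are invented.
fee          = 79.0  # annual membership fee, collected up front
ship_cost    = 8.0   # average outbound shipping cost per order
order_margin = 4.0   # average gross profit per order, before shipping
base_orders  = 10    # orders per year from a customer before Prime

for lift in (1.2, 1.5, 2.0, 3.0):  # assumed multiplier on order frequency
    orders = base_orders * lift
    net = fee + orders * (order_margin - ship_cost)
    print(f"lift {lift:.1f}x: {orders:4.0f} orders/yr -> net ${net:+7.2f} per member")
```

Under these toy numbers, heavier ordering actually looks worse per member, which is exactly why the bet hinged on believing the overall shift in the demand curve would more than pay for the lossy orders.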

To his credit, Jeff decided to forgo testing and just go for it. It's not so uncommon in technology to focus on growth to the exclusion of all other things and then solve for monetization in the long run, but it's easier to do so for a social network than for a retail business with real unit economics. "The more you sell, the more you lose" is not and has never been a sustainable business model (people mistake this for Amazon's business model all the time, and still do, which ¯\_(ツ)_/¯).

The rest, of course, is history. Or at least near-term history. It turns out that you can have people pre-pay for shipping through a program like Prime and they're incredibly happy to make the trade. And yes, on some orders, and for some customers, the financial trade may be a lossy one for the business, but on net, the dramatic shift in the demand curve is stunning and game-changing.

And, as Jeff always noted, you can make micro-adjustments in the long run to tweak the profit leaks. For some really large, heavy items, you can tack on shipping surcharges or just remove them from qualifying for Prime. These days, some items on Amazon are marked as "Add-on items" and you can only order them in conjunction with enough other items such that they can be shipped with those items rather than in isolation.

[Jeff counseled the same "fix it later" strategy in the early days when we didn't have good returns tracking. For a window of time in the early days of Amazon, if you shipped us a box of books for returns, we couldn't easily tell if you'd purchased them at Amazon, and so we'd credit you for them, no questions asked. One woman took advantage of this loophole and shipped us boxes and boxes of books. Given our limited software resources, Jeff said to just ignore the lady and build a way to solve for that later. It was really painful, though, so eventually customer service representatives all shared, amongst themselves, the woman's name so they could look out for it in return requests even before such systems were built. Like a mugshot pinned to every monitor saying "Beware this customer." A tip of the hat to you, ma'am, wherever you are, for your enterprising spirit in exploiting that loophole!]

Prime is a type of scale moat for Amazon because it isn't easy for other retailers to match from a sheer economic and logistical standpoint. As noted before, shipping isn't actually free when you have to deliver physical goods. The really challenging unit economics of delivery businesses like Postmates, when paired with people's aversion to paying for shipping, makes for tough sledding, at least until the cost of delivering such goods can be lowered drastically, perhaps by self-driving cars or drones or some such technology shift.

Furthermore, very few customers shop enough with retailers other than Amazon to make a pre-pay program like Prime worthwhile to them. Even if they did, it's very likely that Amazon's economies of scale in shipping and deep knowledge of how to distribute its inventory optimally mean its unit economics on delivery are superior.

The net of it is that long before Amazon hit what would've been an invisible asymptote on its e-commerce growth it had already erased it.

Know thine enemy.

Invisible asymptotes are...invisible

An obvious problem for many companies, however, is that they are creating new types of businesses and services that don't lend themselves to easily identifying such invisible asymptotes. Many are not like Amazon where there are readily tracked metrics like the size of the global book market with which to peg their TAM.

Take social networks, for example. What's the shoulder of the curve for something like Facebook? Twitter? Instagram? Snapchat?

Some of the limits to their growth are easier to spot than others. For messaging and some more general social networking apps, for example, in many cases network effects are geographical. Since these apps build on top of real-world social graphs, and many of those are geographically clustered, there are winner-take-all dynamics such that in many countries one messaging app dominates, like Kakao in Korea or Line in Taiwan. There can be geo-political considerations, too, that help ensure that WeChat will dominate in China to the exclusion of all competitors, for example.

For others, though, it takes a bit more product insight, and some might say intuition, to see the ceiling before you bump into it. For both employees and investors, understanding product-market unfit follows very closely on identifying product-market fit as an existential challenge.

Without direct access to internal metrics and research, it's difficult to use much other than public information and my own product intuition to analyze potential asymptotes for many companies, but let's take a quick survey of several more prominent companies and consider some of their critical asymptotes (these companies are large enough that they likely have many, but I'll focus on the macro). You can apply this to startups, too, but there are some differences between achieving initial product market fit and avoiding the shoulder in the S-curve after already having found it.

Twitter

Let's start with Twitter, for many in tech the most frustrating product from the perspective of the gap between the actual and the potential. Its user growth has been flat for quite some time, and so it can be said to have already run full speed into an invisible asymptote. In quarterly earnings calls, it's apparent management often has no idea if or when or how that might shift because its guidance is often a collective shrug.

One popular early school of thought on Twitter, a common pattern with most social networks, is that more users need to experience what the power users or early adopters are experiencing and they'll turn into active users. Many a story of social networks that have continued to grow points to certain keystone metrics as pivotal to unlocking product-market fit. For example, once you've friended 30 people on Facebook, you're hooked. For Twitter, an equivalent may be following enough people to generate an interesting feed.

Pattern-matching moves more quickly through Silicon Valley than through almost any other place I've lived, so stories like that are passed around among employees and board meetings and other places where the rich and famous tech elite hobnob, and so it's not surprising that this theory is raised for every social network that hits the shoulder in its S-curve.

There's more than a whiff of Geoffrey Moore's Crossing the Chasm in this idea, some sense that moving from early adopters to the mainstream involves convincing more users to use the same product/service as early adopters do.

In the case of Twitter, I think the theory is wrong. Given the current configuration of the product, I don't think any more meaningful user growth is possible, and tweaking the product as it is now won't unlock any more growth. The longer they don't acknowledge this, the longer they'll be stuck in a Red Queen loop of their own making.

Sometimes, the product-market fit with early adopters is only that. The product won't go mainstream because other people don't want or need that product. In these cases, the key to unlocking growth is usually customer segmentation, creating different products for different users.

Mistaking one type of business for the other can be a deadly mistake because the strategies for addressing them are so different. A common symptom of this mistake is not seeing the shoulder in the S-curve coming at all, not understanding the nature of your product-market unfit.

I believe the core experience of Twitter has reached most everyone in the world who likes it. Let's examine the core attributes of Twitter the product (which I treat as distinct from Twitter the service, the public messaging protocol).

It is heavily text-based: snippets of text from people you've followed, limited to 140 and now 280 characters, presented in a vertically scrolling feed in some algorithmic order, which, for the purposes of this exercise, I'll treat as roughly chronological.

For fans, most of whom are infovores, the nature of product-market fit is, as with many of our tech products today, one of addiction. Because the chunks of text are short, if one tweet is of no interest, you can quickly scan and scroll to another with little effort. Discovering tweets of interest in what appears to be a largely random order rewards the user with dopamine hits on that time-tested Skinner box schedule of variable-ratio reinforcement. Instead of rats hitting levers for pellets of food, power Twitter users push and pull on their phone screens for the next tasty pellet of text.

For infovores, text, in contrast to photos or videos or music, is the medium of choice from a velocity standpoint. There is deep satisfaction in quickly decoding textual information, and the scan rate is self-governed by the reader, unlike other mediums, which unfold at their own pace (this is especially the case with video, which infovores hate for its low scannability).

Over time, this loop tightens and accelerates through the interaction of all the users on Twitter. Likes and retweets and other forms of feedback guide people composing tweets to create more of the type that receive positive feedback. The ideal tweet (by which I mean one that will receive maximum positive feedback) combines some number of the following attributes:

  • Is pithy. Sounds like a fortune cookie. The character limit encourages this type of compression.

  • Is slightly surprising. This can be a contrarian idea or just a cliche encoded in a semi-novel way.

  • Rewards some set of readers' priors, injecting a pleasing dose of confirmation bias directly into the bloodstream.

  • Blasts someone that some set of people dislike intensely. This is closely related to the previous point.

  • Is composed by someone famous, preferably someone a lot of people like but don't consider to be a full-time Tweeter, like Chrissy Teigen or Kanye West.

  • Is on a topic that most people think they understand or on which they have an opinion.

Of course, the set of ideal qualities varies by subgroup on Twitter. Black Twitter differs from rationalist Twitter which differs from NBA Twitter. The meta point is that the flywheel spins more and more quickly over time within each group.

The problem is that, for those who don't use Twitter, almost all of the attributes the early adopter cohort considers ideal are ones other people find bewildering and unattractive. Many people find the text-heavy nature of Twitter to be a turn-off. The majority of people, actually.

The naturally random sort order of ideas that comes from the structure of Twitter, one which pings the pleasure centers of the current heavy user cohort when they find an interesting tweet, is utterly perplexing to those who don't get the service. Why should they hunt and peck for something of interest? Why are conversations so difficult to follow (actually, this is a challenge even for those who enjoy Twitter)? Why do people have to work so hard to parse the context of tweets?

Falling into the trap of thinking other users will be like you is especially pernicious because the people building the product are usually among that early adopter cohort. The easiest north star for a product person is their own intuition. But if they're working on a product that requires customer segmentation, being in the early adopter cohort means their instincts will keep guiding them towards the wrong north star, and the company will just keep bumping into the invisible asymptote without any idea why.

This points to an important qualifier to the "crossing the chasm" idea of technology diffusion. If the chasm is large enough, the same product can't cross it. Instead, on the other side of the gaping chasm is just a different country altogether, with different constituents with different needs.

I use Twitter a lot (I recently received a notification I'd passed my 11-year anniversary of joining the service), but almost everyone in my family, from my parents to my siblings to my girlfriend to my nieces and nephews, has tried and given up on Twitter. It doesn't fulfill any deep-seated need for any of them.

It's not surprising to me that Twitter is populated heavily by journalists and a certain cohort of techies and intellectuals who all, to me, are part of a broader species of infovore. For them, opening Twitter must feel as if they've donned Cerebro and have contact with thousands of brains all over the world, as if the fabric of their own brain had been flattened, stretched out wide, and laid on top of those of millions of others.

Quiet, I am reading the tweets.

Mastering Twitter is already something this group of people do all the time in their lives and jobs, only Twitter accelerates it, like a bicycle for intellectual conversation and preening. Twitter, at its best, can provide a feeling of near real-time communal knowledge sharing that satisfies some of the same needs as something like SoulCycle or Peloton. A feeling of communion that also feels like it's productive.

If my instincts are right, then all the iterating around the margins on Twitter won't do much of anything to change the growth curve of the service. It might improve the experience for the current cohort of users and increase usage (for example, curbing abuse and trolls is an immediate and obvious win for those who experience all sorts of terrible harassment on the service), but it doesn't change the fact that this core Twitter product isn't for all the people who left the club long ago, soon after they walked in and realized it was just a bunch of nerds who'd ordered La Croix bottle service and were sitting around talking about Bitcoin and stoicism and transcendental meditation.

The good news is that the Twitter service, that public messaging protocol with a one-way follow model, could be the basis for lots of products that might appeal to other people in the world. Knowing the enemy can prevent wasting time chasing the wrong strategy.

Unfortunately, one of the main paths towards coming up with new products built on top of that protocol was the third-party developer program, and, well, Twitter has treated its third-party developers like unwanted stepchildren for a long time. For whatever reason (it's difficult to speculate without having been there), Twitter's internal rate of product development has been glacial. A vibrant third-party developer program could have helped by massively increasing the vectors of development on Twitter's very elegant public messaging protocol and datasets.

[Note, however, that I'm sympathetic to tech companies that restrict building clones of their service using their APIs. No company owes it to others to allow people to build direct competitors to their own product. Most people don't remember, but Amazon's first web services offering was for affiliates to build sites to sell things. Some sites started building massive Amazon clones, and so Amazon's web services evolved into other forms, eventually settling on what most people know it as today.]

In addition, I've long wondered if the shutting out of third-party developers on Twitter was an attempt to aggregate and own all of its ad inventory. Both problems could've been solved by tweaking Twitter's third-party developer program. Developers could be offered two paths.

One option is that for every X tweets a developer pulled, they'd have to carry and display a Twitter-inserted ad unit. This would make it possible for Twitter to support third-party clients like Tweetbot that compete somewhat with Twitter's own clients. Maybe one of these developers would come up with improvements on top of Twitter's own client apps, but in doing so they'd increase Twitter's ad inventory.

The second option would be to pay some fixed fee for every X tweets pulled. That would force the developer to come up with some monetization scheme on their own to cover their usage, but at least the option would exist. I don't doubt that some enterprising developers might come up with some way to monetize a particular use case, for example business research.
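
A sketch of how those two hypothetical paths might be parameterized; the rates, thresholds, and function names here are my own inventions, not anything Twitter actually offered:

```python
# The two hypothetical developer paths sketched above; all rates invented.
TWEETS_PER_UNIT = 1000       # "X" tweets pulled per obligation unit
AD_REVENUE_PER_UNIT = 0.50   # what Twitter might earn per inserted ad unit
FEE_PER_UNIT = 0.40          # flat fee per X tweets under the second option

def option_one_ads(tweets_pulled: int) -> dict:
    """Carry one Twitter-inserted ad unit per X tweets pulled."""
    units = tweets_pulled // TWEETS_PER_UNIT
    return {"ads_carried": units, "twitter_ad_revenue": units * AD_REVENUE_PER_UNIT}

def option_two_fee(tweets_pulled: int) -> dict:
    """Pay a flat fee per X tweets; monetize however you like."""
    units = tweets_pulled // TWEETS_PER_UNIT
    return {"developer_cost": units * FEE_PER_UNIT}

print(option_one_ads(2_000_000))  # e.g. a Tweetbot-like third-party client
print(option_two_fee(2_000_000))  # e.g. a business-research use case
```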

Twitter the product/app has hit its invisible asymptote. Twitter the protocol still has untapped potential.

Snapchat

Snapchat is another example of a company that's hit a shoulder in its growth curve. Unlike Twitter, though, I suspect its invisible asymptote is less an issue of its feature set and more one of a generational divide.

That's not to say that making the interface less inscrutable earlier on wouldn't have helped a bit, but I suspect only at the margins. In fact, the opaque nature of the interface probably served Snapchat incredibly well when the product came along, regardless of whether or not it was intended that way. Snapchat came along at a moment when kids' parents were joining Facebook, and when Facebook had been around long enough for the paper trail of its early, younger users to come back and bite some of them.

Along comes a service that not only wipes out content by default after a short period of time but is inscrutable to the very parents who might crash the party. In fact, there's an entire class of products for which I believe an Easter Egg-like interface is actually preferable to an elegant, self-describing interface, long seen as the apex of UI design (more on that another day).

I've written before about selfies as a second language. At the root of that phenomenon is the idea that a generation of kids raised with smartphones with a camera front and back have found the most efficient way to communicate is with the camera, not the keyboard. That's not the case for an older cohort of users who almost never send selfies as a first resort. The very default of Snapchat to the camera screen is such a bold choice that it will probably never be the messaging app of choice for old folks, no matter how Snapchat moves around and re-arranges its other panes.

More than that, I suspect every generation needs spaces of its own, places to try on and leave behind identities at low cost and on short, finite time horizons. That applies to social virtual spaces as much as it does to physical spaces.

Look at how old people use Snapchat and you'll see lots of use of Stories. Watch a young person use Snapchat and it's predominantly one-to-one messaging using the camera (yes, I know some of the messages I receive on Snap are the same ones that person is sending to everyone one-to-one, but the hidden nature of that behavior allows me to indulge an egocentric rather than Copernican model of the universe). Now, it's possible for one app to serve multiple audiences that way, but it will have to compromise some or all of its user experience to do so.

At a deeper level, I think a person's need for ephemeral content varies across one's lifetime. It's of much higher value when one is young, especially in formative years. As one ages, and time's counter starts to run low, one turns nostalgic, and the value of permanent content, especially from long bygone days, increases, serving as beautifully aged relics of another era. One also tends to be more adept at managing one's public image the more time passes, lessening the need for ephemerality.

All this is to say that I don't think making the interface of Snapchat easier to use is going to move it off of the shoulder on its S-curve. That's addressing a different underlying cause than the one that lies behind its invisible asymptote.

The good news for Snapchat is that I don't think Facebook is going to be able to attract the youngsters. I don't care if Facebook copies Snapchat's exact app one for one, it's not going to happen. The bad news for Snapchat is that it probably isn't going to attract the oldies either. The most interesting question is whether Snapchat's cohort stays with it for life, and the next interesting question is who attracts the next generation of kids to get their first smartphones. Will they, like every generation of youth before them, demand a social network of their own? Sometimes I think they will just to claim a namespace that isn't already spoken for. Who wants to be joesmith43213 when you can be joesmith on some new sexy social network?

As a competitor, however, Instagram is more worrisome than Facebook. It came along after Facebook, as Snapchat did, and so it had the opportunity to be a social network that a younger generation could roam as pioneers, mining so much social capital yet to be claimed. It is also largely an audio-visual network, which appeals to a more visually literate generation.

When Messenger incorporated Stories into its app, it felt like a middle-aged couple dressing in cowboy chic and attending Coachella. When Instagram cribbed Stories, though, it addressed a real supply-side content creation issue for the same young'uns who used Snapchat. That is, people were being too precious about what they shared on Instagram, decreasing usage frequency. By adding Stories, they created a mechanism that wouldn't force content into the feed and whose ephemerality encouraged more liberal capture and sharing without the associated guilt.

This is a general pattern among social networks and products in general: to broaden their appeal they tend to broaden their use cases. It's rare to see a product adhere strictly to its early specificity and still avoid hitting a shoulder in its adoption S-curve. Look at Facebook today compared to Facebook in its early days. Look at Amazon's product selection now compared to when it first launched.

It takes internal fortitude for a product team to make such concessions (I would say courage but we need to sprinkle that term around less liberally in tech). The stronger the initial product-market fit, the more vociferously your early adopters will protest when you make any changes. Like a band that is accused of selling out, there is an inevitable sense that a certain sharpness of flavor, of choice, has seeped out as more and more people join up and as a service loosens up and accommodates more use cases.

I remember seeing so many normally level-headed people on Twitter threaten to abandon the service when it announced it was increasing the character limit from 140 to 280. The irony, of course, was that the character-limit increase likely improved the service for its current users while doing nothing to attract people who didn't use the service, even though the move was addressed mostly to the heathen.

Back to Snapchat. I wrote a long time ago that the power of social networks lies in their graph. That means many things, and in Snapchat's case it holds a particularly fiendish double bind. That Snapchat is the social network claimed by the young is both a blessing and a curse. Were a bunch of old folks to suddenly flock to Snapchat, it might induce a case of Groucho Marx's, "I don't care to belong to a club that accepts people like me as members."

Facebook

On the dimension of utility, Facebook's network effects continue to be pure and unbounded. The more people that are on Facebook, the more it's useful for certain things for which a global directory is useful. Even though many folks don't use Facebook a lot, it's rare I can't find them on Messenger if I don't have their email address or phone number. The complexity of analyzing Facebook is that it serves different needs in different countries and markets; social networks have strong path dependence in their usage patterns. In many countries, Facebook is the internet; it's odd as an American to travel to countries where a business's only presence online is a Facebook page, so accustomed am I to searching for American businesses on the web or Yelp first.

When it comes to the "social" aspect of social networking, the picture is less clear-cut. Here I'll focus on the U.S. market since it's the one I'm most familiar with. Because Facebook is the largest social network in history, it may be encountering scaling challenges few other entities have ever seen.

The power of a social network lies in its graph, and that is a conundrum in many ways. One is that a massive graph is a blessing until it's a curse. For social creatures like humans who've long lived in smaller networks and tribes, a graph that conflates everyone you know is intimidating to broadcast to, except for those who have no compunction about performing no matter the audience size: famous people, marketers, and those monstrous people who share everything about their lives. You know who you are.

This is one of the diseconomies of scale for social networks that Facebook is first to run into because of its unprecedented size. Imagine you're in a room with all your family, friends, coworkers, casual acquaintances, and a lot of people you met just once but felt guilty about rejecting a friend request from. It's hundreds, maybe even thousands of people. What would you say to them? We know people maintain multiple identities for different audiences in their lives. Very few of us have to cultivate an identity for that entire blob of everyone we know. It's a situation one might encounter in the real world only a few times in life, perhaps at one's wedding, and later one's funeral. Online, though? It happens to be the default mode on Facebook's News Feed.

It's no coincidence that public figures, those who have the most practice at having to deal with this problem, are so guarded. As your audience grows larger, the chance that you'll offend someone deeply with something you say approaches 1.
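
The arithmetic behind that intuition: if each thing you share carries even a tiny independent chance p of deeply offending any one person, the chance of offending no one decays exponentially with audience size n. A quick illustration:

```python
# P(offend at least one person) = 1 - (1 - p)^n, assuming (simplistically)
# that each audience member is independently offended with probability p.
p = 0.001  # a one-in-a-thousand chance any single reader is deeply offended

for n in (50, 500, 5000):
    print(f"audience of {n:>4}: P(offend someone) = {1 - (1 - p) ** n:.1%}")
```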

When I scan my Facebook feed, I see fewer and fewer people I know sharing anything at all. Map one's sharing frequency against the size of one's friend list on Facebook and I highly suspect it looks like this:

[Chart: sharing frequency declining as friend-list size grows]

Again, not everyone is like this; there are some psychopaths who are comfortable sharing their thoughts no matter the size of the audience, but these people are often annoying, the type who dive right into politics at Thanksgiving before you've even spooned gravy over your turkey. This leads to a form of adverse selection in which a few over-sharers take over your News Feed.

[Not everything one shares gets distributed to one's entire friend graph, given the algorithmic feed. But you, as the one sharing, have no idea who will see it, so you have to assume that any and every person in your graph will. The chilling effect is the same.]

Another form of diseconomy of scale is behind the flight to Snapchat among the young, as outlined earlier. A sure way to empty a club or a dance floor is to have the parents show up; few things are more traumatic than seeing your Dad pretend-grind on your Mom when "Yeah!" by Usher comes on. Having your parents in your graph on Facebook means you have to assume they're listening, and there isn't some way to turn on the radio very loudly or run the water as in a spy movie when you're trying to pass secrets to someone in a room that's bugged. The best you can do is communicate in code to which your parents don't own the decryption key; usually this takes the form of memes. Or you take the communication over to Snapchat.

Another diseconomy of scale is the increasing returns to trolling. Thanks to its bi-directional friending model, Facebook is more immune to this than, say, Twitter, with its one-way follow model and public messaging framework. On Facebook, those wishing to sow dissension need to be a bit more devious, and as revelations from the last election showed, there are means to reach a person's heart, directly or indirectly, through confirmation bias and flattery. The Iago playbook from Othello. On Twitter, there's no need for such scheming; you can just nuke people from your keyboard without their consent.

All of this is to say I suspect many of Facebook's more fruitful vectors for rekindling its value for socializing lie in breaking up the surface area of its service. News Feed is so monolithic a surface as to be subject to all the diseconomies of scale of social networking, even as that monolithic quality makes it such an attractive advertising landscape.

The most obvious path to this is Groups, which can subdivide large graphs into ones more unified in purpose or ideology. Google+ was onto something with Circles, but since they hadn't actually achieved any scale they were solving a problem they didn't have yet.

Instagram

Where is Instagram's invisible asymptote? This is one of the trickier ones to contemplate as it continues to grow without any obvious end in sight.

One of the advantages to Instagram is that it came about when Facebook was broadening its acceptable media types from text to photos and video. Instagram began with just square photos with a simple caption, no links allowed, no resharing.

This had a couple of advantages. One is that it's harder to troll or be insufferable in photos than it is in text. Photos tend to soften the edge of boasts and provocations. More people are more skilled at being hurtful in text than in photos. Instagram has tended to be more aggressive than other networks at policing the emotional tenor of its network, especially in contrast to, say, Twitter, turning its attention most recently to addressing trolls in the comment sections.

Of course photos are not immune to this phenomenon. The "look at my perfect life" boasting of Instagram is many people's chief complaint about the app and likely the primary driver of people feeling lousy after looking through their feed there. Still, outright antagonism on Instagram, given it isn't an open public graph like Twitter, is harder. The one direct vector is comments, and Instagram is working on that issue.

In being a pure audio-visual network at a time when Facebook and most other networks were mixed-media, Instagram siphoned off many people for whom the best part of Facebook was just the photos and videos; again, as with Twitter, we often overestimate the product-market fit and TAM of text. If Facebook just showed photos and videos for a week, I suspect their usage would grow, but since they own Instagram...

As with other social networks that grow, Instagram broadened its formats early on to head off several format-based asymptotes. Non-square photos and videos with gradually lengthening time limits have broadened the use cases and, more importantly, removed some level of production friction.

The move to copy Snapchat's Stories format was the next giant asymptote avoided. The precious nature of sharing on Instagram was a drag on posting frequency. Stories solves the supply-side issue for content several ways. One is that since it requires you to explicitly tap into viewing it from the home feed screen, it shifts the onus for viewing the content entirely to the audience. This frees the content creator from much of the guilt of polluting someone else's feed. The expiring nature of the content further removes another of a publisher's inhibitions about littering the digital landscape. It unlocked so much content that I now regularly fail to make it through more than a tiny fraction of Stories on Instagram. Even friends who don't publish a lot now often put their content in Stories rather than posting to the main feed.

The very format of Stories, with its full-screen vertical orientation, cues the user that this format is meant for the native way we hold our devices as smartphone photographers, rather than accommodating the more natural landscape way that audiences view the world, with eyes side-by-side in one's head. Stories includes accoutrements like gaudy stickers and text overlays and face filters that aren't in the toolset for Instagram's main feed photo/video composer, perhaps to preserve some aesthetic separation between the main feed and Stories.

There is a purity about Instagram which makes even its ads perfectly native: everything on the service is an audio-visual advertisement. I see people complain about the ad load in Instagram, but if you really look at your feed, it's always had an ad load of 100%.

I just opened my feed and looked through the first twenty posts, and I'd classify them all as ads: about how great my meal was, for beautiful travel destinations, for the exquisite craft of various photographers and cinematographers, for an actor's upcoming film, for Rihanna's new lingerie line or makeup drop, for an elaborate dish a friend cooked, for a current concert tour, for how funny someone is, for someone's gorgeous new headshot, and for a few sporting events and teams. And yes, a few of them were official Instagram ads.

I don't mean this in a negative way. One might lob this accusation at all social networks, but the visual nature of Instagram absorbs the signaling function of social media in the most elegant and unified way. For example, messaging apps consist of a lot of communication that isn't advertising. But that's exactly why a messaging app like Messenger isn't as lucrative an ad platform as Instagram is and will be. If ads weren't marked explicitly, and if they weren't so obviously from accounts I don't follow, it's not clear to me that they'd be so jarringly different in nature from all the other content in the feed.

The irony is that, as Facebook broadened its use cases and supported media types to continue to expand, the purity of Instagram may have made it more scalable a network in some ways.

Of course, every product or service has some natural ceiling. To take one example, messaging with other folks is still somewhat clunky on Instagram; it feels tacked on. Considering how much young people use Snapchat as their messaging app of choice, there's likely attractive headroom for Instagram here.

Rumors Instagram is contemplating a separate messaging app make sense. It would be ironic if Instagram separated out the more broadcast nature of its core app from the messaging use case in two different apps before Snapchat did. As noted earlier, it feels as if Snapchat is constantly fighting to balance the messaging parts of its app with the more broadcast elements like Stories and Discover, and separate apps might be one way to solve that more effectively.

As with all social networks which are mobile-phone dominant, there are limits to what can be optimized for in a single app, when all you have to work with is a single rectangular phone screen. The mobile phone revolution forced a focus in design which created billions of dollars in value, but Instagram, like all phone apps, will run into the asymptote that is the limits of how much you can jam into one app.

Instagram has already had some experience in dealing with this conundrum, creating separate apps like Boomerang or Hyperlapse that keep a lid on the complexity of the Instagram app itself and which bring additional composition techniques to the top level of one's phone. I often hear people counsel against launching separate apps because of the difficulty of getting adoption of even a single app, but that doesn't mean that separate apps aren't sometimes the most elegant way to deal with the spatial design constraints of mobile.

On Instagram, content is still largely short in nature, so longer narratives aren't common or well-supported. The very structure, oriented around a main feed algorithmically compiling a variety of content from all the accounts you follow, isn't optimized towards a deep dive into a particular subject matter or narrative like, say, a television or a streaming video app. The closest to long-form on Instagram is Live, but most of what I see of that is only loosely narrative, resembling an extended selfie more than a considered story. Rather than pursue long-form narrative, it may be that a more on-brand way to tackle the challenge of lengthening usage of the app is better stringing together of existing content, similar to how Snapchat can aggregate content from one location into a feed of sorts. That can be useful for things like concerts and sporting events and breaking news events like natural disasters, protests, and marches.

In addition, perhaps there is a general limit to how far a single feed of random content arranged algorithmically can go before we suffer pure consumption exhaustion. Perhaps seeing curated snapshots from everyone will finally push us all to the breaking point of jealousy and FOMO and, across a large enough number of users, an asymptote will emerge.

However, I suspect we've moved into an age where the upper bound on vanity fatigue has shifted much higher in a way that an older generation might find unseemly. Just as we've moved into a post-scarcity age of information, I believe we've moved into a post-scarcity age of identity as well. And in this world, it's more acceptable to be yourself and leverage social media for maximal distribution of yourself in a way that ties to the fundamental way in which the topology of culture has shifted from a series of massive centralized hub and spokes to a more uniform mesh.

A last possible asymptote relates to my general sense that massive networks like Facebook and Instagram will, at some point, require more structured interactions and content units (for example, a list is a structured content unit, as is a check-in) to continue scaling. Doing so always imposes some additional friction on the content creator, but the benefit is breaking one monolithic feed into more distinct units, allowing users the ability to shift gears mentally by seeing and anticipating the structure, much like how a magazine is organized.

To fill gaps in a person's free time, an endless feed is like an endless jar of liquid, able to be poured into any crevice in one's schedule and flow of attention. To demand a person's time, on the other hand, is a higher order task, and more structured content seems to do better on that front. People set aside dedicated time to play games like Fortnite or to watch Netflix, but less so to browse feeds. The latter happens on the fly. But ambition in software-driven Silicon Valley is endless and so at some point every tech company tries to obtain the full complement of Infinity Stones, whether by building them or buying them, like Facebook did with Instagram and Whatsapp.

Amazon's next invisible asymptote?

I started with Amazon, but it is worth revisiting as it is hardly done with its own ambitions. After having made such massive progress on the shipping fee asymptote, what other barriers to growth might remain?

On that same topic of shipping, the next natural barrier is shipping speed. Yes, it's great that I don't have to pay for shipping, but in time customer expectations inflate. Per Jeff's latest annual letter to shareholders:

One thing I love about customers is that they are divinely discontent. Their expectations are never static – they go up. It’s human nature. We didn’t ascend from our hunter-gatherer days by being satisfied. People have a voracious appetite for a better way, and yesterday’s ‘wow’ quickly becomes today’s ‘ordinary’. I see that cycle of improvement happening at a faster rate than ever before. It may be because customers have such easy access to more information than ever before – in only a few seconds and with a couple taps on their phones, customers can read reviews, compare prices from multiple retailers, see whether something’s in stock, find out how fast it will ship or be available for pick-up, and more. These examples are from retail, but I sense that the same customer empowerment phenomenon is happening broadly across everything we do at Amazon and most other industries as well. You cannot rest on your laurels in this world. Customers won’t have it.

Why only two-day shipping for free? What if I want my package tomorrow, or today, or right now?

Amazon has already been working on this problem for over a decade, building out a higher density network of smaller distribution centers over its previous strategy of fewer, gargantuan distribution hubs. Drone delivery may have sounded like a joke when first announced on an episode of 60 Minutes, but it addresses the same problem, as does a strategy like Amazon lockers in local retail stores.

Another asymptote may be that while Amazon is great at being the site of first resort to fulfill customer demands for products, it is less capable when it comes to generating desire ex nihilo, the kind of persuasion typically associated more with a tech company like Apple or any number of luxury retailers.

At Amazon we referred to the dominant style of shopping on the service as spear-fishing. People come in, type a search for the thing they want, and 1-click it. In contrast, if you've ever gone to a mall with someone who loves shopping for shopping's sake, a clotheshorse for example, you'll see a method of shopping more akin to the gathering half of hunting and gathering. Many outfits are picked off the rack and gazed at, held up against oneself in a mirror, turned around and around in the hand for contemplation. Hands brush across racks of clothing, fingers feeling fabric in search of something unknown even to the shopper.

This is browsing, and Amazon's interface has only solved some aspects of this mode of shopping. If you have some idea what you want, similarities carousels can guide one in some comparison shopping, and customer reviews serve as a voice on the shoulder, but it still feels somewhat utilitarian.

Amazon's first attempts at physical stores reflect this bias in its retail style. I visited an Amazon physical bookstore in University Village the last time I was in Seattle, and it struck me as the website turned into 3-dimensional space, just with a lot less inventory. Amazon Go sounds more interesting, and I can't wait to try it out, but again, its primary selling point is the self-serve, low-friction aspect of the experience.

When I think of creating desire, I think of my last and only visit to Milan, when a woman at an Italian luxury brand store talked me into buying a sportcoat I had no idea I wanted when I walked into the store. In fact, it wasn't even on display, so minimal was the inventory when I walked in.

She looked at me, asked me some questions, then went to the back and walked back out with a single option. She talked me into trying it on, then flattered me with how it made me look, as well as pointing out some of its most distinctive qualities. Slowly, I began to nod in agreement, and eventually I knew I had to be the man this sportcoat would turn me into when it sat on my shoulders.

This challenge isn't unique to Amazon. Tech companies in general have been mining the scalable ROI of machine learning and algorithms for many years now. More data, better recommendations, better matching of customer to goods, or so the story goes. But what I appreciate about luxury retail, or even Hollywood, is its skill for making you believe that something is the right thing for you, absent previous data. Seduction is a gift, and most people in technology vastly overestimate how much of customer happiness is solvable by data-driven algorithms while underestimating the ROI of seduction.

Netflix spent $1 million on a prize to improve its recommendation algorithms, and yet it's a daily ritual that millions of people stare at their Netflix home screen, scrolling around for a long time, trying to decide what to watch. It's not just Netflix, open any streaming app. The AppleTV, a media viewing device, is most often praised for its screensaver! That's like admitting you couldn't find anything to eat on a restaurant menu but the typeface was pleasing to the eye. It's not that data can't guide a user towards the right general neighborhood, but more than one tech company will find the gradient of return on good old seduction to be much steeper than they might realize.

Still, for Amazon, this may not be as dangerous a weakness as it would be for another retailer. Much of what Amazon sells is commodities, and desire generation can be offloaded to other channels, which then see customers leak to Amazon for fulfillment. Amazon's logistical and customer service supremacy is a devastatingly powerful advantage because it directly precedes and follows the act of payment in the shopping value chain, allowing it to capture almost all the financial return of commodity retail.

And, as Jeff's annual letter to shareholders has emphasized from the very first instance, Amazon's mission is to be the world's most customer-centric company. One way to continue to find vectors for growth is to stay attached at the hip to the fickle nature of customer unhappiness, which they're always quite happy to share under the right circumstances, one happy consequence of this age of outrage. There is such a thing as a price umbrella, but there's also one for customer happiness.

How to identify your invisible asymptotes

One way to identify your invisible asymptotes is to simply ask your customers. As I noted at the start of this piece, at Amazon we homed in on how shipping fees were a brake on our business by simply asking customers and non-customers.

Here's where the oft-cited quote from Henry Ford is brought up as an objection: "If I had asked people what they wanted, they would have said faster horses," he is reputed to have said. Like most truisms in business, it is snappy and lossy all at once.

True, it's often difficult for customers to articulate what they want. But what's missed is that they're often much better at pinpointing what they don't want or like. What you should hear when customers say they want a faster horse is not the literal request but instead that they find travel by horse too slow. The savvy product person can generalize that to the broader need of traveling more quickly, and that problem can be solved any number of ways that don't involve cloning Secretariat or shooting their current horse up with steroids.

This isn't a foolproof strategy. Sometimes customers lie about what they don't like, and sometimes they can't even articulate their discontent with any clarity, but if you match their feedback with good analysis of customer behavior data and even some well-designed tests, you can usually land on a more accurate picture of the actual problem to solve.

A popular sentiment in Silicon Valley is that B2C businesses are more difficult product challenges than B2B because products and services for the business customer can be specified merely by talking to the customer, while the consumer market is inarticulate about its needs, per the Henry Ford quote. Again, that's only partially true, and so many consumer companies I've been advising recently haven't pushed enough yet on understanding or empathizing with the objections of their non-adopters.

We speak often of the economics concept of the demand curve, but in product there is another form of demand curve, and that is the contour of the customers' demands of your product or service. How comforting it would be if it were flat, but as Bezos noted in his annual letter to shareholders, the arc of customer demands is long, but it bends ever upwards. It's the job of each company, especially its product team, to continue to be in tune with the topology of this "demand curve."

I see many companies spend time analyzing funnels and seeing who emerges out the bottom. As a company grows, though, and from the start, it's just as important to look at those who never make it through the funnel, or who jump out of it at the very top. The product-market fit gradient likely differs for each of your current and potential customer segments, and understanding how and why is a never-ending job.
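
A sketch of what that looks like in practice: report the drop-off at every stage, including the very top, rather than only counting who emerges at the bottom. The stage names and counts here are invented.

```python
# Report drop-off at every funnel stage, not just who emerges at the bottom.
# All stage names and counts are invented for illustration.
funnel = [
    ("aware of product", 100_000),
    ("visited site",      20_000),
    ("signed up",          4_000),
    ("activated",          1_200),
    ("retained 90 days",     400),
]

for (stage, n), (next_stage, n_next) in zip(funnel, funnel[1:]):
    lost = n - n_next
    print(f"{stage:>18}: {n:>7,} -> lost {lost:,} ({lost / n:.0%}) before '{next_stage}'")
```

The 80% lost at the very top of a funnel like this one rarely get studied as closely as the customers who convert, and they're where the invisible asymptote hides.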

When companies run focus groups on their products, they often show me the positive feedback. I'm almost invariably more interested in the folks who've registered negative feedback, though I sense many product teams find watching that material to be stomach-churning. Sometimes the feedback isn't useful in the moment; perhaps you have such strong product-market fit with a different cohort that acting on it isn't worth it yet. Still, it's never not a bit of a prick to the ego.

However, all honest negative feedback forms the basis of some asymptote in some customer segment, even if the constraint isn't constricting yet. Even if companies I meet with don't yet have an idea of how to deal with a problem, I'm always curious to see if they have a good explanation for what that problem is.

One important sidenote on this topic is that I'm often invited to give product feedback, more than I can find time for these days. When I'm doing so in person, some product teams can't help but jump in as soon as I raise any concerns, just to show they've already anticipated my objections.

I advise listening all the way through the first time, to hear the why of someone's feedback, rather than cutting them off. You'll never be there in person with each customer to talk them out of their reasoning; your product or service has to do that work. The batting average of product people who try to explain to their customers why they're wrong is...not good. It's a sure way to put them off of giving you feedback in the future, too.

Even absent external feedback, it's possible to train yourself to spot the limits to your product. One approach I've taken when talking to companies who are trying to achieve initial or new product-market fit is to ask them why every person in the world doesn't use their product or service. If you ask yourself that, you'll come up with all sorts of clear answers, and if you keep walking that road you'll find the borders of your TAM taking on greater and greater definition.

[It's true that you also need the flip side, an almost irrational positivity, to be able to survive the difficult task of product development, or to be an entrepreneur, but selection bias is such that most such people start with a surplus of optimism.]

Lastly, though I hesitate to share this, it is possible to avoid invisible asymptotes through sheer genius of product intuition. I balk for the same reason I cringe when I meet CEOs in the Valley who idolize Steve Jobs. In many ways, a product intuition that is consistently accurate across time is, like Steve Jobs, a unicorn. It's so rare an ability that to lean entirely on it is far riskier than blending it with a whole suite of more accessible strategies.

It's difficult for product people to hear this because there's something romantic and heroic about the Steve Jobs mythology of creation, brilliant ideas springing from the mind of the mad genius and inventor. However, just to read a biography of Jobs is to understand how rare a set of life experiences and choices shaped him into who he was. Despite that, we've spawned a whole bunch of CEOs who wear the same outfit every day and drive their design teams crazy with nitpicky design feedback, as if the outward trappings of the man were the essence of his skill. We vastly underestimate the path dependence of his particular style of product intuition.

Jobs' gift is so rare that it's likely even Apple hasn't been able to replace it. It's not a coincidence that the Apple products that frustrate me the most right now are all the ones with "Pro" in the name. The MacBook Pro, with its flawed keyboard and bizarre Touch Bar (I'm still using the old 13" MacBook Pro with the old keyboard, hoping against hope that Apple will come to its senses before it becomes obsolete). The Mac Pro, which took on the unfortunately apropos shape of a trash can in its last incarnation and whose replacement still hasn't shipped, years later (I'm still typing this at home on an ancient cheese grater Mac Pro tower and ended up building a PC tower to run VR and to do photo and video editing). Final Cut Pro, which I learned on in film editing school, and which got zapped in favor of Final Cut Pro X just when FCP was starting to steal meaningful share in Hollywood from Avid. The iMac Pro, which isn't easily upgradable but is great if you're a gazillionaire.

Pro customers are typically the ones with the most clearly specified needs and workflows. Pro products, then, are ones for which listening to customers articulate what they want is a reliable path to establishing and maintaining product-market fit. But that's not something Apple seems to enjoy doing, and so the missteps they've made along these lines are exactly the types of mistakes you'd expect of them.

[I was overjoyed to read that Apple's next Mac Pro is being built using extensive feedback from media professionals. It's disappointing that it won't arrive until 2019, but at least Apple has descended from the ivory tower to talk to actual future users. It's some of the best news out of Apple I've heard in forever.]

Live by intuition, die by it. It's not surprising that Snapchat, another company that lives by the product intuition of one person, stumbled with a recent redesign. That a company's strengths are its weaknesses is simply the result of tight adherence to methodology. Apple and Snapchat's deus ex machina style of handing down products also rid us of CD-ROM drives and produced the iPhone, AirPods, the camera-first messaging app, and the Story format, among many other breakthroughs which a product person could hang a career on.

Because products and services live in an increasingly dynamic world, especially those targeted at consumers, they aren't governed by the immutable, timeless truths of a field like mathematics. The reason I recommend a healthy mix of intuition informed by data and feedback is that most product people I know have a product view that is slower moving than the world itself. If they've achieved any measure of success, it's often because their view of some consumer need was the right one at the right time. Product-market fit as tautology. Selection bias in looking at these people might confuse some measure of luck with some enduring product intuition.

However, just as a VC might get lucky once with some investment and be seen as a genius for life (the returns to a single buff of a VC brand name are shockingly durable), a person's product intuition might hit on the right moment at the right point in history to create a smash hit; it's rare that a single person's frame will move in lockstep with that of the world. How many creatives are relevant for a lifetime?

This is one reason sustained competitive advantage is so difficult. In the long run, endogenous variance in the quality of product leadership in a company always seems to be in the negative direction. But perhaps we are too focused on management quality and not focused enough on exogenous factors. In "Divine Discontent: Disruption’s Antidote," Ben Thompson writes:

Bezos’s letter, though, reveals another advantage of focusing on customers: it makes it impossible to overshoot. When I wrote that piece five years ago, I was thinking of the opportunity provided by a focus on the user experience as if it were an asymptote: one could get ever closer to the ultimate user experience, but never achieve it:

[Image: stratechery-disruption-diagram-1.png]

In fact, though, consumer expectations are not static: they are, as Bezos memorably states, “divinely discontent”. What is amazing today is table stakes tomorrow, and, perhaps surprisingly, that makes for a tremendous business opportunity: if your company is predicated on delivering the best possible experience for consumers, then your company will never achieve its goal.

[Image: stratechery-disruption-diagram-2.png]

In the case of Amazon, that this unattainable and ever-changing objective is embedded in the company’s culture is, in conjunction with the company’s demonstrated ability to spin up new businesses on the profits of established ones, a sort of perpetual motion machine; I’m not sure that Amazon will beat Apple to $1 trillion, but they surely have the best shot at two.

Pattern recognition is the default operating mode of much of Silicon Valley and other fields, but it is almost always, by its very nature, backwards-looking. One can hardly blame most people for resorting to it because it's a way of minimizing blame, and the economic returns of the Valley are so amplified by the structural advantages of winners that even matching market beta makes for a comfortable living.

However, if consumer desires are shifting, it's always just a matter of time before pattern recognition leads to an invisible asymptote. One reason startups are often the tip of the spear for innovation in technology is that they can't rely on market beta to just roll along. Achieving product-market fit for them is an existential challenge, and they have no backup plans. Imagine an investor who has to achieve alpha to even survive.

Companies can stay nimble by turning over their product leaders, but as a product professional, staying relevant to the marketplace is a never-ending job, even if your own life is irreversible and linear. I find the best way to unmoor myself from my most strongly held product beliefs is to increase my inputs. Besides, the older I get, the more I've grown to enjoy that strange dance with the customer. Leading a partner in a dance may give you a feeling of control, but it's a world of difference from dancing by yourself.

One of my favorite Ben Thompson posts is "What Clayton Christensen Got Wrong," in which he built on Christensen's theory of disruption to note that low-end disruption can be avoided if you can differentiate on user experience. It is difficult and perhaps even impossible to over-serve on that axis. Tesla came into the electric car market with a car that was way more expensive than internal combustion engine cars (this definitely wasn't low-end disruption), had shorter range, and required really slow charging at a time when very few public chargers existed.

However, Tesla got an interesting foothold because on another axis it really delivered. Yes, the range allowed for more commuting without having to charge twice a day, but more importantly, for the wealthy, it was a way to signal one's environmental consciousness in a package that was much, much sexier than the Prius, the previous electric car of choice of celebrities in LA. It will be hard for Tesla to continue to rely on that in the long run as the most critical dimension of user experience will likely evolve, but it's a good reminder that "user experience" is broad enough to encompass many things, some less measurable than others.

You can't overserve on user experience, Thompson argues; as a product person, I'd argue, in parallel, that it is difficult and likely impossible to understand your customer too deeply. Amazon's mission to be the world's most customer-centric company is inherently a long-term strategy because it is one with an infinite time scale and no asymptote to its slope.

In my experience, the most successful people I know are much more conscious of their own personal asymptotes at a much earlier age than others. They ruthlessly and expediently flush them out. One successful person I know determined in grade school that she'd never be a world-class tennis player or pianist. Another mentioned to me how, in their freshman year of college, they realized they'd never be the best mathematician in their own dorm, let alone in the world. Another knew a year into a job that he wouldn't be the best programmer at his company and so he switched over into management; he rose to become CEO.

By discovering their own limitations early, they are also quicker to discover vectors on which they're personally unbounded. Product development will always be a multi-dimensional problem, often frustratingly so, but reducing that dimensionality often costs so little that the exercise should be more widely employed.

This isn't to say a person needs to aspire to be the best at everything they do. I'm at peace with the fact that I'll likely always be a middling cook, that I won't win the Tour de France, and that I'm destined to be behind a camera and not in front of it. When it comes to business, however, and surviving in the ruthless Hobbesian jungle, where much more is winner-take-all than it once was, the idea that you can be whatever you want to be, or build whatever you want to build, is a sure path to a short, unhappy existence.

10 browser tabs

1. Love in the Time of Robots

“Is it difficult to play with her?” the father asks. His daughter looks to him, then back at the android. Its mouth begins to open and close slightly, like a dying fish. He laughs. “Is she eating something?”
 
The girl does not respond. She is patient and obedient and listens closely. But something inside is telling her to resist. 
 
“Do you feel strange?” her father asks. Even he must admit that the robot is not entirely believable.
 
Eventually, after a few long minutes, the girl’s breathing grows heavier, and she announces, “I am so tired.” Then she bursts into tears.
 
That night, in a house in the suburbs, her father uploads the footage to his laptop for posterity. His name is Hiroshi Ishiguro, and he believes this is the first record of a modern-day android.
 

Reads like the treatment for a science fiction film, some mashup of Frankenstein, Pygmalion, and Narcissus. One incredible moment after another, and I'll grab just a few excerpts, but the whole thing is worth reading.

But he now wants something more. Twice he has witnessed others have the opportunity, however confusing, to encounter their robot self, and he covets that experience. Besides, his daughter was too young, and the newscaster, though an adult, was, in his words, merely an “ordinary” person: Neither was able to analyze their android encounter like a trained scientist. A true researcher should have his own double. Flashing back to his previous life as a painter, Ishiguro thinks: This will be another form of self-portrait. He gives the project his initials: Geminoid HI. His mechanical twin.
 

Warren Ellis, in a recent commencement speech delivered at the University of Essex, said:

Nobody predicted how weird it’s gotten out here.  And I’m a science fiction writer telling you that.  And the other science fiction writers feel the same.  I know some people who specialized in near-future science fiction who’ve just thrown their hands up and gone off to write stories about dragons because nobody can keep up with how quickly everything’s going insane.  It’s always going to feel like being thrown in the deep end, but it’s not always this deep, and I’m sorry for that.
 

The thing is, far-future sci-fi is likely to be even more off base now given how humans are evolving in lockstep with the technology around them. So we need more near-future sci-fi, of a variety smarter than Black Mirror, to grapple with the implications.

Soon his students begin comparing him to the Geminoid—“Oh, professor, you are getting old,” they tease—and Ishiguro finds little humor in it. A few years later, at 46, he has another cast of his face made, to reflect his aging, producing a second version of HI. But to repeat this process every few years would be costly and hard on his vanity. Instead, Ishiguro embraces the logical alternative: to alter his human form to match that of his copy. He opts for a range of cosmetic procedures—laser treatments and the injection of his own blood cells into his face. He also begins watching his diet and lifting weights; he loses about 20 pounds. “I decided not to get old anymore,” says Ishiguro, whose English is excellent but syntactically imperfect. “Always I am getting younger.”
 
Remaining twinned with his creation has become a compulsion. “Android has my identity,” he says. “I need to be identical with my android, otherwise I’m going to lose my identity.” I think back to another photo of his first double’s construction: Its robot skull, exposed, is a sickly yellow plastic shell with openings for glassy teeth and eyeballs. When I ask what he was thinking as he watched this replica of his own head being assembled, Ishiguro says, perhaps only half-joking, “I thought I might have this kind of skull if I removed my face.”
 
Now he points at me. “Why are you coming here? Because I have created my copy. The work is important; android is important. But you are not interested in myself.”
 

This should be some science fiction film, only I'm not sure who our great science fiction director is. The best candidates may be too old to want to look upon such a story as anything other than grotesque and horrific.

2. Something is wrong on the internet by James Bridle

Of course, some of what's on the internet really is grotesque and horrific. 

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level. 
 

Given how much my nieces love watching product unwrapping and Peppa Pig videos on YouTube, this story induced a sense of dread I haven't felt since the last good horror film I watched, which I can't remember anymore since the world has run a DDoS on my emotions.

We often think of a market operating at peak efficiency as sending information back and forth between supply and demand, allowing the creation of goods that satisfy both parties. In the tech industry, the wink-wink version of that is saying that pornography leads the market for any new technology, solving, as it does, the two problems the internet is said to solve better, at scale, than any medium before it: loneliness and boredom.

Bridle's piece, however, finds the dark cul-de-sacs and infected runaway processes which have branched out from the massive marketplace that is YouTube. I decided to follow a Peppa Pig video on the service and started tapping on Related Videos, like I imagine one of my nieces doing, and quickly wandered into a dark alleyway where I saw some video which I would not want any of them watching. As Bridle did, I won't link to what I found; suffice to say it won't take you long to stumble on some of it if you want, or perhaps even if you don't.

What's particularly disturbing is the somewhat bizarre, inexplicably grotesque nature of some of these video remixes. David Cronenberg is known for his body horror films; these YouTube videos are like some perverse variant of that, playing with popular children's iconography.

Facebook and now Twitter are taking heat for disseminating fake news, and that is certainly a problem worth debating, but with that problem we're talking about adults. Children don't have the capacity to comprehend what they're seeing, and given my belief in the greater effect of sight, sound, and motion, I am even more disturbed by this phenomenon.

A system where hosting videos for a global audience is free, where this type of trademark infringement weaponizes brand signifiers with seeming impunity, and where content production and remixing grow ever more scalable through technology allows for a type of problem we haven't seen before.

The internet has enabled all types of wonderful things at scale; we should not be surprised that it would foster the opposite. But we can, and should, be shocked.

3. FDA approves first blood sugar monitor without finger pricks

This is exciting. One view that seems to be common wisdom these days when it comes to health is that it's easier to lose weight and improve your health through diet than through exercise. But one of the problems of the feedback loop in diet (and exercise, actually) is how slow it is. You sneak a few snacks here and there walking by the company cafeteria every day, and a month later you hop on the scale and emit a bloodcurdling scream as you realize you've gained 8 pounds.

A friend of mine had gestational diabetes during one of her pregnancies and got a home blood glucose monitor. You had to prick your finger and draw blood to get your blood glucose reading, but, curious, I tried it before and after a BBQ.

To see what various foods did to my blood sugar in near real-time was a real eye-opener. Imagine a future when you could see what a few french fries and gummy bears do to your blood sugar, or when the reading is built into something like an Apple Watch, without having to draw blood each time. I don't mind the sight of blood, but I'd prefer not to turn my fingertips into war zones.

Faster feedback might transform dieting into something more akin to deliberate practice. Given that another popular theory of obesity is that it's an insulin phenomenon, tools like this, built for diabetes, might have broad mass-market impact.

4. Ingestible ketones

Ingestible ketones have been a recent sort of holy grail for endurance athletes, and now HVMN is bringing one to market. Ketogenic diets are all the rage right now, but for an endurance athlete, adapting to fuel oneself on ketones has always sounded like a long and miserable process.

The body generates ketones from fat when low on carbs or from fasting. The theory is that endurance athletes using ketones rather than glycogen from carbs require less oxygen and thus can work out longer.

I first heard about the possibility of exogenous ketones for athletes from Peter Attia. As he said then, perhaps the hardest thing about ingesting exogenous ketones is the horrible taste, which caused him to gag and nearly vomit in his kitchen. It doesn't sound like the taste problem has been solved.

Until we get the pill that renders exercise obsolete, however, I'm curious to give this a try. If you decide to pre-order, you can use my referral code to get $15 off.

5. We Are Nowhere Close to the Limits of Athletic Performance

By comparison, the potential improvements achievable by doping effort are relatively modest. In weightlifting, for example, Mike Israetel, a professor of exercise science at Temple University, has estimated that doping increases weightlifting scores by about 5 to 10 percent. Compare that to the progression in world record bench press weights: 361 pounds in 1898, 363 pounds in 1916, 500 pounds in 1953, 600 pounds in 1967, 667 pounds in 1984, and 730 pounds in 2015. Doping is enough to win any given competition, but it does not stand up against the long-term trend of improving performance that is driven, in part, by genetic outliers. As the population base of weightlifting competitors has increased, outliers further and further out on the tail of the distribution have appeared, driving up world records.
 
Similarly, Lance Armstrong’s drug-fuelled victory of the 1999 Tour de France gave him a margin of victory over second-place finisher Alex Zulle of 7 minutes, 37 seconds, or about 0.1 percent. That pales in comparison to the dramatic secular increase in speeds the Tour has seen over the past half century: Eddy Merckx won the 1971 tour, which was about the same distance as the 1999 tour, in a time 5 percent worse than Zulle’s. Certainly, some of this improvement is due to training methods and better equipment. But much of it is simply due to the sport’s ability to find competitors of ever more exceptional natural ability, further and further out along the tail of what’s possible.
 

In the Olympics, to take the most celebrated athletic competition, victors are feted with videos showing them swimming laps, tossing logs in the Siberian tundra, running through a Kenyan desert. We celebrate the work, the training. Good genes are given narrative short shrift. Perhaps we should show a picture of their DNA, just to give credit where much credit is due?

If I live a normal human lifespan, I expect to see special sports leagues and divisions created for athletes who've undergone genetic modification. It will be the return of the freak show at the circus, but this time for real. I've sat courtside and seen people like LeBron James, Giannis Antetokounmpo, Kevin Durant, and Joel Embiid walk by me. They are freaks, but genetic engineering might produce someone who would stretch our definition of outlier.

In other words, it is highly unlikely that we have come anywhere close to maximum performance among all the 100 billion humans who have ever lived. (A completely random search process might require the production of something like a googol different individuals!)
 
But we should be able to accelerate this search greatly through engineering. After all, the agricultural breeding of animals like chickens and cows, which is a kind of directed selection, has easily produced animals that would have been one in a billion among the wild population. Selective breeding of corn plants for oil content of kernels has moved the population by 30 standard deviations in roughly just 100 generations. That feat is comparable to finding a maximal human type for a specific athletic event. But direct editing techniques like CRISPR could get us there even faster, producing Bolts beyond Bolt and Shaqs beyond Shaq.
 

6. Let's set half a percent as the standard for statistical significance

My many-times-over coauthor Dan Benjamin is the lead author on a very interesting short paper "Redefine Statistical Significance." He gathered luminaries from many disciplines to jointly advocate a tightening of the standards for using the words "statistically significant" to results that have less than a half a percent probability of occurring by chance when nothing is really there, rather than all results that—on their face—have less than a 5% probability of occurring by chance. Results with more than a 1/2% probability of occurring by chance could only be called "statistically suggestive" at most. 
 
In my view, this is a marvelous idea. It could (a) help enormously and (b) can really happen. It can really happen because it is at heart a linguistic rule. Even if rigorously enforced, it just means that editors would force people in papers to say "statistically suggestive" for a p of a little less than .05, and only allow the phrase "statistically significant" in a paper if the p value is .005 or less. As a well-defined policy, it is nothing more than that. Everything else is general equilibrium effects.
 

Given the replication crisis has me doubting almost every piece of conventional wisdom I've inherited in my life, I'm okay with this.
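
Part of the appeal is how mechanical the rule is. A minimal sketch in Python; the cutoffs come from the proposal itself, while the sample p-values are invented for illustration:

```python
def label(p: float) -> str:
    """Label a result under the proposed convention: only p < .005
    earns the phrase "statistically significant"; results that merely
    clear the old .05 bar are demoted to "statistically suggestive"."""
    if p < 0.005:
        return "statistically significant"
    if p < 0.05:
        return "statistically suggestive"
    return "not significant"

# The same set of findings reads very differently under the new rule.
for p in (0.001, 0.02, 0.049, 0.2):
    print(f"p = {p}: {label(p)}")
```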

7. We're surprisingly unaware of when our own beliefs change

If you read an article about a controversial issue, do you think you’d realise if it had changed your beliefs? No one knows your own mind like you do – it seems obvious that you would know if your beliefs had shifted. And yet a new paper in The Quarterly Journal of Experimental Psychology suggests that we actually have very poor “metacognitive awareness” of our own belief change, meaning that we will tend to underestimate how much we’ve been swayed by a convincing article.
 
The researchers Michael Wolfe and Todd Williams at Grand Valley State University said their findings could have implications for the public communication of science. “People may be less willing to meaningfully consider belief inconsistent material if they feel that their beliefs are unlikely to change as a consequence,” they wrote.
 

Beyond being an interesting result, I link to this as an example of a human-readable summary of a research paper. This is how the article summarizes the research study and its results:

The researchers recruited over two hundred undergrads across two studies and focused on their beliefs about whether the spanking/smacking of kids is an effective form of discipline. The researchers chose this topic deliberately in the hope the students would be mostly unaware of the relevant research literature, and that they would express a varied range of relatively uncommitted initial beliefs.
 
The students reported their initial beliefs about whether spanking is an effective way to discipline a child on a scale from “1” completely disbelieve to “9” completely believe. Several weeks later they were given one of two research-based texts to read: each was several pages long and either presented the arguments and data in favour of spanking or against spanking. After this, the students answered some questions to test their comprehension and memory of the text (these measures varied across the two studies). Then the students again scored their belief in whether spanking is effective or not (using the same 9-point scale as before). Finally, the researchers asked them to recall what their belief had been at the start of the study.
 
The students’ belief about spanking changed when they read a text that argued against their own initial position. Crucially, their memory of their initial belief was shifted in the direction of their new belief – in fact, their memory was closer to their current belief than their original belief. The more their belief had changed, the larger this memory bias tended to be, suggesting the students were relying on their current belief to deduce their initial belief. The memory bias was unrelated to the measures of how well they’d understood or recalled the text, suggesting these factors didn’t play a role in memory of initial belief or awareness of belief change.
 

Compare the summary above to the abstract of the paper itself:

When people change beliefs as a result of reading a text, are they aware of these changes? This question was examined for beliefs about spanking as an effective means of discipline. In two experiments, subjects reported beliefs about spanking effectiveness during a prescreening session. In a subsequent experimental session, subjects read a one-sided text that advocated a belief consistent or inconsistent position on the topic. After reading, subjects reported their current beliefs and attempted to recollect their initial beliefs. Subjects reading a belief inconsistent text were more likely to change their beliefs than those who read a belief consistent text. Recollections of initial beliefs tended to be biased in the direction of subjects’ current beliefs. In addition, the relationship between the belief consistency of the text read and accuracy of belief recollections was mediated by belief change. This belief memory bias was independent of on-line text processing and comprehension measures, and indicates poor metacognitive awareness of belief change.
 

That's actually one of the better research abstracts you'll read, and still it reflects the general opacity of the average research abstract. I'd argue that some of the most important knowledge in the world is locked behind abstruse abstracts.

Why do researchers write this way? Most tell me that researchers write for other researchers, and incomprehensible prose like this impresses their peers. What a tragedy. As my longtime readers know, I'm a firm believer in the power of the form of a message. We continue to underrate that in all aspects of life, from the corporate world to our personal lives, and here, in academia.

Then again, such poor writing keeps people like Malcolm Gladwell busy transforming such insight into breezy reads in The New Yorker and his bestselling books.

8. Social disappointment explains chimpanzees' behaviour in the inequity aversion task

As an example of the above phenomenon, this paper contains an interesting conclusion, but try to parse this abstract:

Chimpanzees’ refusal of less-preferred food when an experimenter has previously provided preferred food to a conspecific has been taken as evidence for a sense of fairness. Here, we present a novel hypothesis—the social disappointment hypothesis—according to which food refusals express chimpanzees' disappointment in the human experimenter for not rewarding them as well as they could have. We tested this hypothesis using a two-by-two design in which food was either distributed by an experimenter or a machine and with a partner present or absent. We found that chimpanzees were more likely to reject food when it was distributed by an experimenter rather than by a machine and that they were not more likely to do so when a partner was present. These results suggest that chimpanzees’ refusal of less-preferred food stems from social disappointment in the experimenter and not from a sense of fairness.
 

Your average grade school English teacher would slap a failing grade on this butchery of the English language.

9. Metacompetition: Competing Over the Game to be Played

When CDMA-based technologies took off in the US, companies like Qualcomm that worked on that standard prospered; metacompetitions between standards decide the fates of the firms that adopt (or reject) those standards.

When an oil spill raises concerns about the environment, consumers favor businesses with good environmental records; metacompetitions between beliefs determine the criteria we use to evaluate whether a firm is “good.”

If a particular organic foods certification becomes important to consumers, companies with that certification are favored; metacompetitions between certifications determine how the quality of firms is measured.
 
In all these examples, you could be the very best at what you do, but lose in the metacompetition over what criteria will matter. On the other hand, you may win due to a metacompetition that protects you from fierce rivals who play a different game.
 
Great leaders pay attention to metacompetition. They advocate the game they play well, promoting criteria on which they measure up. By contrast, many failed leaders work hard at being the best at what they do, only to throw up their hands in dismay when they are not even allowed to compete. These losers cannot understand why they lost, but they have neglected a fundamental responsibility of leadership. It is not enough to play your game well. In every market in every country, alternative “logics” vie for prominence. Before you can win in competition, you must first win the metacompetition over the game being played.
 

In sports negotiations between owners and players, the owners almost always win the metacompetition game. In the writers' strike in Hollywood in 2007, the writers' guild didn't realize they were losing the metacompetition and thus ended up worse off than before. Amazon surpassed eBay by winning the retail metacompetition (most consumers prefer paying a fair, fixed price for a good of some predefined quality to dealing with the multiple axes of complexity of an auction) after first failing to tackle eBay on its direct turf of auctions.

Winning the metacompetition means first being aware of what it is. It's not so easy in a space like, say, social networking, where even some of the winners don't understand what game they're playing.

10. How to be a Stoic

Much of Epictetus’ advice is about not getting angry at slaves. At first, I thought I could skip those parts. But I soon realized that I had the same self-recriminatory and illogical thoughts in my interactions with small-business owners and service professionals. When a cabdriver lied about a route, or a shopkeeper shortchanged me, I felt that it was my fault, for speaking Turkish with an accent, or for being part of an élite. And, if I pretended not to notice these slights, wasn’t I proving that I really was a disengaged, privileged oppressor? Epictetus shook me from these thoughts with this simple exercise: “Starting with things of little value—a bit of spilled oil, a little stolen wine—repeat to yourself: ‘For such a small price, I buy tranquillity.’ ”
 
Born nearly two thousand years before Darwin and Freud, Epictetus seems to have anticipated a way out of their prisons. The sense of doom and delight that is programmed into the human body? It can be overridden by the mind. The eternal war between subconscious desires and the demands of civilization? It can be won. In the nineteen-fifties, the American psychotherapist Albert Ellis came up with an early form of cognitive-behavioral therapy, based largely on Epictetus’ claim that “it is not events that disturb people, it is their judgments concerning them.” If you practice Stoic philosophy long enough, Epictetus says, you stop being mistaken about what’s good even in your dreams.
 

The trendiness of stoicism has been around for quite some time now. I found this tab left over from 2016, and I'm sure Tim Ferriss was espousing it long before then, not to mention the enduring trend that is Buddhism. That meditation and stoicism are so popular in Silicon Valley may be a measure of the complacency of the region; they seem direct antidotes to the most first-world of problems. People everywhere complain of the stresses on their minds from the deluge of information they receive for free from apps on smartphones with processing power that would put previous supercomputers to shame.

Still, given that stoicism was in vogue in Roman times, it seems to have stood the test of time. Since social media seems to have increased the surface area of our social fabric and our exposure to said fabric, perhaps we could all use a bit more stoicism in our lives. I suspect one reason Curb Your Enthusiasm curdles in the mouth more than before is not just that Larry David's rich white man's complaints seem particularly ill-timed in the current environment but that he is out of touch with the real nature of most people's psychological stressors now. A guy of his age and wealth probably doesn't spend much time on social media, but if he did, he might realize his grievances no longer match those of the average person in either pettiness or peculiarity.

My most popular posts

I recently started collecting email addresses using MailChimp for those readers who want to receive email updates when I post here. Given my relatively low frequency of posts these days, especially compared to my heyday when I posted almost daily, and given the death of RSS, such an email list may have more value than it once did. You can sign up for that list from my About page.

I've yet to send an email to the list successfully, but let's hope this post will be the first to go out that route. Given this would be the first post to that list, with perhaps some new readers, I thought it would be worth compiling some of my more popular posts in one place.

Determining what those are proved difficult, however. I never checked my analytics before, since this is just a hobby, and I realized when I went to the popular content panel on Squarespace that their data only goes back a month. I also don't have data from the Blogger or Movable Type eras of my blog stashed anywhere, and I never hooked up Google Analytics here.

A month's worth of data was better than nothing, as some of the more popular posts still get a noticeable flow of traffic each month, at least by my modest standards. I also ran a search on Twitter for my URL and used that as a proxy for social media popularity of my posts (and in the process, found some mentions I'd never seen before since they didn't include my Twitter handle; is there a way on Twitter to get a notification every time your domain is referenced?).

In compiling the list, I went back and reread these posts for the first time in ages and added a few thoughts on each.

  • Compress to Impress — my most recent post is the one that probably attracted most of the recent subscribers to my mailing list. I regret not including one of the most famous cinematic examples of rhetorical compression, from The Social Network, when Justin Timberlake's Sean Parker tells Jesse Eisenberg, "Drop the 'The.' Just Facebook. It's cleaner." Like much of the movie, probably made up (and also, why wasn't the movie titled just Social Network?), but still a good example of how movies almost always compress information into visually compact scenes. The reason people tend to like the book better than the movie adaptation in almost every case is that, like Jeff Bezos and his dislike of PowerPoint, people who see both the original and compressed information flows feel condescended to and lied to by the latter. On the other hand, I could only make it through one and a half of the Game of Thrones novels, so I much prefer the TV show's compression of that story, even as I watch every episode with super fans who can spend hours explaining what I've missed, so it feels like I have read the books after all.
  • Amazon, Apple, and the beauty of low margins — one of the great things about Apple is it attracts many strong, independent critics online (one of my favorites being John Siracusa). The others of the FAMGA tech giants (Facebook, Amazon, Microsoft, Google) don't seem to have as many dedicated fans/analysts/critics online. Perhaps it was that void that helped this post on Amazon from 2012 to go broad (again, by my modest standards). Being able to operate with low margins is not, in and of itself, enough to be a moat. Anyone can lower their prices, and more generally, any company should be wary of imitating another company's high variance strategy, lest they forget all the others who did and went extinct (i.e., a unicorn is a unicorn because it's a unicorn, right?). Being able to operate with low margins with unparalleled operational efficiency, at massive scale globally, while delivering more SKUs in more shipments with more reliability and greater speed than any other retailer is a competitive moat. Not much has changed, by the way. Apple just entered the home voice-controlled speaker market with its announcement of the HomePod and is coming in from above, as expected, at $349, as the room under Amazon's price umbrella isn't attractive.
  • Amazon and the profitless business model fallacy — the second of my posts on Amazon to get a traffic spike. It's amusing to read some of the user comments on this piece and recall a time when every time I said anything positive about Amazon I'd be inundated with comments from Amazon shorts and haters. Which is the point of the post: people outside of Amazon really misunderstood the business model. The skeptics have largely quieted down nowadays, and maybe the shorts lost so much money that they finally went in search of weaker prey, but in some ways I don't blame the naysayers. Much of their misreading of Amazon is the result of GAAP rules, which really don't reveal enough to discern how much of a company's losses are due to investments in future businesses or just aggressive depreciation of assets. GAAP rules leave a lot of wiggle room to manipulate your numbers to mask underlying profitability, especially when you have a broad portfolio of businesses munged together into single line items on the income statement and balance sheet. This doesn't absolve professional analysts, who should know better than to ignore unit economics, however. Deep economic analysis isn't a strength of your typical tech beat reporter, which may explain the rise of tech pundits who can fill that gap. I concluded the post by saying that Amazon's string of quarterly losses at the time should worry its competitors more than it should assure them. That seems to have come to fruition. Amazon went through a long transition period from having a few very large fulfillment centers to having many, many more smaller ones distributed more broadly, but generally located near major metropolitan areas, to improve its ability to ship to customers more quickly and cheaply. Now that the shift has been completed for much of the U.S., you're seeing the power of the fully operational Death Star, or many tiny ones, so to speak.
  • Facebook hosting doesn't change things, the world already changed — the title feels clunky, but the analysis still holds up. I got beat up by some journalists over this piece for offering a banal recommendation for their malady (focus on offering differentiated content), but if the problem were so tractable it wouldn't be a problem.
  • The network's the thing — this is from 2015, and two things come to mind since I wrote it.
    • As back then, Instagram has continued to evolve and grow, and Twitter largely has not. Twitter did stop counting user handles against character limits and tried to alter its conversation UI to be more comprehensible, but the UI's still inscrutable to most. The biggest change, to an algorithmic rather than reverse chronological timeline, was an improvement, but of course Instagram had beat them to that move as well. The broader point is still that the strength of any network lies most in the composition of its network, and in that, Twitter and other networks that have seen flattening growth, like Snapchat or Pinterest, can take solace. Twitter is the social network for infovores like journalists, technorati, academics, and intellectual introverts, and that's a unique and influential group. Snapchat has great market share among U.S. millennials and teens, Pinterest among women. It may be hard for them to break out of those audiences, but those are wonderfully differentiated audiences, and it's also not easy for a giant like Facebook to cater to particular audiences when its network is so massive. Network scaling requires that a network reduce the surface area of its network to each individual user using strategies like algorithmic timelines, graph subdivision (e.g., subreddits), and personalization; otherwise networks run into reverse economies of scale in their user experience.
    • The other point that this post recalls is the danger of relying on any feature as a network moat. People give Instagram, Messenger, FB, and WhatsApp grief for copying Stories from Snapchat, but if any social network has to pin its future on a single feature, trivial as features are to replicate in this software age, that company has a dim future. The differentiator for a network is how its network uses a feature to strengthen the bonds of that network, not the feature itself. Be wary of hanging your hat on an overnight success of a feature the same way predators should be wary of mutations that offer temporary advantages over their prey. The Red Queen effect is real and relentless.
  • Tower of Babel — from earlier this year, and written at a time when I was quite depressed about a reversal in the quality of discourse online, and how the promise of connecting everyone via the internet had quickly seemed to lead us all into a local maximum (minimum?) of public interaction. I'm still bullish on the future, but when the utopian dreams of global connection run into the reality of humans' coalitional instincts and the resentment from global inequality, we've seen which is the more immovable object. Perhaps nothing expresses the state of modern discourse like waking up to see so many of my followers posting snarky responses to one of Trump's tweets. Feels good, accomplishes nothing, let's all settle for the catharsis of value signaling. I've been guilty of this, and we can do better.
  • Thermodynamic theory of evolution — actually, this isn't one of my most popular posts, but I'm obsessed with the second law of thermodynamics and exceptions to it in the universe. Modeling the world as information feels like something from the Matrix but it has reinvigorated my interest in the physical universe.
  • Cuisine and empire — on the elevation of food as scarce cultural signal over music. I'll always remember this post because Tyler Cowen linked to it from Marginal Revolution. Signalling theory is perhaps one of the three most influential ideas to have changed my thinking in the past decade. I would not underestimate its explanatory power in the rise of Tesla. Elon Musk and team made the first car that allowed wealthy people to signal their environmental values without having to also send a conflicting signal about their taste in cars. It's one example where actually driving one of the uglier, less expensive EVs probably would send the stronger signal, whereas generally the more expensive and useless a signal, the more effective it is.
  • Your site has a self-describing cadence — I'm fond of this one, though Hunter Walk has done more than anyone to point to this post, so much so that I feel like I should grant him a perpetual license to call it his own. It still holds true: almost every service and product I use online trains me how often to return. The only unpleasant part of rereading this is realizing how my low posting frequency has likely trained my readers to never visit my blog anymore.
  • Learning curves sloping up and down — probably ranks highly only because I have such a short window of data from Squarespace to examine, but I do think that companies built for the long run have to maintain a sense of the slope of their organization's learning curve at all times, especially in technology, where the pace of evolution and thus the frequency of existential decisions is heightened.
  • The paradox of loss aversion — more tech markets than ever are winner-take-all because the internet is the most powerful and scalable multiplier of network effects in the history of the world. Optimal strategy in winner-take-all contests differs quite a bit from much conventional business strategy, so best recognize when you're playing in one.
  • Federer and the Paradox of Skill — the paradox of skill is a term I first learned from Michael Mauboussin's great book The Success Equation. This post applied it to Roger Federer, and if he seems more at peace recently, now that he's older and more evenly matched in skill to other top players, it may be that he no longer feels subject to the outsized influence of luck as he did when he was a better player. In Silicon Valley, with all its high-achieving, brilliant people, understanding the paradox of skill may be essential to not feeling jealous of every random person around you who fell into a pool of money. The Paradox of Skill is a cousin to the Red Queen effect, which I referenced above and which tech workers of the Bay Area should familiarize themselves with. It explains so much of the tech sector but also just living in the Bay Area. Every week I get a Curbed newsletter, and it always has a post titled "What $X will get you in San Francisco" with a walkthrough of a recent listing that you could afford on that amount of monthly rent. Over time they've had to elevate the dollar amount just to keep things interesting, or perhaps because what $2900 can rent you in SF was depressing its readers.

Having had this blog going off and on since 2001, I only skimmed through a fraction of the archives, but perhaps at some point I'll cringe and crawl back further to find other pieces that still seem relevant.

Thermodynamic theory of evolution

The teleology and historical contingency of biology, said the evolutionary biologist Ernst Mayr, make it unique among the sciences. Both of these features stem from perhaps biology’s only general guiding principle: evolution. It depends on chance and randomness, but natural selection gives it the appearance of intention and purpose. Animals are drawn to water not by some magnetic attraction, but because of their instinct, their intention, to survive. Legs serve the purpose of, among other things, taking us to the water.
 
Mayr claimed that these features make biology exceptional — a law unto itself. But recent developments in nonequilibrium physics, complex systems science and information theory are challenging that view.
 
Once we regard living things as agents performing a computation — collecting and storing information about an unpredictable environment — capacities and considerations such as replication, adaptation, agency, purpose and meaning can be understood as arising not from evolutionary improvisation, but as inevitable corollaries of physical laws. In other words, there appears to be a kind of physics of things doing stuff, and evolving to do stuff. Meaning and intention — thought to be the defining characteristics of living systems — may then emerge naturally through the laws of thermodynamics and statistical mechanics.
 

One of the most fascinating reads of my past half year.

I recently linked to a short piece by Pinker on how an appreciation for the second law of thermodynamics might help one come to some peace with the entropy of the world. It's inevitable, so don't blame yourself.

And yet there is something beautiful about life in its ability to create pockets of order and information amidst the entropy and chaos.

A genome, then, is at least in part a record of the useful knowledge that has enabled an organism’s ancestors — right back to the distant past — to survive on our planet. According to David Wolpert, a mathematician and physicist at the Santa Fe Institute who convened the recent workshop, and his colleague Artemy Kolchinsky, the key point is that well-adapted organisms are correlated with that environment. If a bacterium swims dependably toward the left or the right when there is a food source in that direction, it is better adapted, and will flourish more, than one that swims in random directions and so only finds the food by chance. A correlation between the state of the organism and that of its environment implies that they share information in common. Wolpert and Kolchinsky say that it’s this information that helps the organism stay out of equilibrium — because, like Maxwell’s demon, it can then tailor its behavior to extract work from fluctuations in its surroundings. If it did not acquire this information, the organism would gradually revert to equilibrium: It would die.
 
Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information. And life turns out to be extremely good at it. Landauer’s resolution of the conundrum of Maxwell’s demon set an absolute lower limit on the amount of energy a finite-memory computation requires: namely, the energetic cost of forgetting. The best computers today are far, far more wasteful of energy than that, typically consuming and dissipating more than a million times more. But according to Wolpert, “a very conservative estimate of the thermodynamic efficiency of the total computation done by a cell is that it is only 10 or so times more than the Landauer limit.”
 
The implication, he said, is that “natural selection has been hugely concerned with minimizing the thermodynamic cost of computation. It will do all it can to reduce the total amount of computation a cell must perform.” In other words, biology (possibly excepting ourselves) seems to take great care not to overthink the problem of survival. This issue of the costs and benefits of computing one’s way through life, he said, has been largely overlooked in biology so far.
 

I don't know if that's true, but it is so elegant as to be breathtaking. What this all leads to is a theory of a new form of evolution, different from the Darwinian definition.
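
As an aside, the floor Wolpert is referring to is easy to compute: Landauer's principle says erasing one bit of information costs at least kT ln 2 of energy. A quick sketch in Python, with the "10 or so times" and "million times" multipliers lifted from the estimates quoted above:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, joules per kelvin

def landauer_limit(temperature_k: float) -> float:
    """Minimum energy in joules to erase one bit at a given temperature."""
    return K_B * temperature_k * math.log(2)

limit = landauer_limit(310.0)  # roughly body temperature
print(f"Landauer limit at 310 K:  {limit:.2e} J per bit")
print(f"A cell, ~10x the limit:   {10 * limit:.2e} J per bit")
print(f"A computer, ~1,000,000x:  {1e6 * limit:.2e} J per bit")
```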

Adaptation here has a more specific meaning than the usual Darwinian picture of an organism well-equipped for survival. One difficulty with the Darwinian view is that there’s no way of defining a well-adapted organism except in retrospect. The “fittest” are those that turned out to be better at survival and replication, but you can’t predict what fitness entails. Whales and plankton are well-adapted to marine life, but in ways that bear little obvious relation to one another.
 
England’s definition of “adaptation” is closer to Schrödinger’s, and indeed to Maxwell’s: A well-adapted entity can absorb energy efficiently from an unpredictable, fluctuating environment. It is like the person who keeps her footing on a pitching ship while others fall over because she’s better at adjusting to the fluctuations of the deck. Using the concepts and methods of statistical mechanics in a nonequilibrium setting, England and his colleagues argue that these well-adapted systems are the ones that absorb and dissipate the energy of the environment, generating entropy in the process.
 

I'm tempted to see an analogous definition of successful corporate adaptation in this, though I'm inherently skeptical of analogy and metaphor. Still, reading a paragraph like this, one can't help but think of how critical it is for companies to remember the right lessons from the past, and not too many of the wrong ones.

There’s a thermodynamic cost to storing information about the past that has no predictive value for the future, Still and colleagues show. To be maximally efficient, a system has to be selective. If it indiscriminately remembers everything that happened, it incurs a large energy cost. On the other hand, if it doesn’t bother storing any information about its environment at all, it will be constantly struggling to cope with the unexpected. “A thermodynamically optimal machine must balance memory against prediction by minimizing its nostalgia — the useless information about the past,’’ said a co-author, David Sivak, now at Simon Fraser University in Burnaby, British Columbia. In short, it must become good at harvesting meaningful information — that which is likely to be useful for future survival.
 

This theory even offers its own explanation for death.

It’s certainly not simply a matter of things wearing out. “Most of the soft material we are made of is renewed before it has the chance to age,” Meyer-Ortmanns said. But this renewal process isn’t perfect. The thermodynamics of information copying dictates that there must be a trade-off between precision and energy. An organism has a finite supply of energy, so errors necessarily accumulate over time. The organism then has to spend an increasingly large amount of energy to repair these errors. The renewal process eventually yields copies too flawed to function properly; death follows.
 
Empirical evidence seems to bear that out. It has long been known that cultured human cells seem able to replicate no more than 40 to 60 times (called the Hayflick limit) before they stop and become senescent. And recent observations of human longevity have suggested that there may be some fundamental reason why humans can’t survive much beyond age 100.
 

Again, it's tempting to look beyond humans to corporations and ask why companies die out over time. Is there a similar process at work, in which a company's replication of its knowledge introduces more and more errors over time until the company dies a natural death?

I suspect the mechanism at work in companies is different, but that's an article for another time. The parallel that does seem promising is the idea that successful new companies are more likely to emerge from fluctuating, nonequilibrium environments than from gentle, relatively static ones.

Gonna make you sweat

If I were to tell you that there was an entire industry that overcharged the vast majority of its customers, that those customers were fully aware they were being robbed, and that this was the only way to make the business viable, which industry would you guess?

If you’re a member of a gym, you will be aware that for the first month of the year the place is horribly packed out with sweaty and unfit people, all the classes are booked up and you can’t get on any of the machines you want. If your interaction with the keep-fit industry is more along the lines of walking past the gym on the way to the cake shop, you might be more aware of the equally curious fact that commercial gyms always seem to have a heavily advertised ‘special’ membership deal going on. Paying the full whack listed rate at a gym is actually a pretty difficult thing to do — much more so than paying full freight rack-rate for a hotel room — unless you do the single most expensive thing you can do in physical culture, and join the gym shortly after the Christmas holidays.

SWEATY BETTY

Having seen the books of a gym chain or two, we can tell you that the ‘Sweaty January’ phenomenon is not an urban myth or a joke — it’s absolutely fundamental to the economics of the industry and it’s basically impossible to run an economically viable gym without taking it into account. Usually about 75 per cent of all gym memberships are taken out in the month of January. Not only this, but the economics of the industry absolutely depend on the fact that a very great proportion of January joiners will not visit more than three or four times in total before their membership comes to a floundering flop of weight not lost at the end of the year. The founder of Colman’s Mustard used to claim that his fortune was based on the bit of mustard that everyone left behind on their plate, but gym memberships have really pushed things to the limit when it comes to this model of making people pay for a lot more of the product than they have any likelihood of using.


On the bizarre economics of gyms. The spatial inefficiency of gyms is something I had never spent much time thinking about.

Human nature being as immutable as it is, most gyms are great investments (other than Bally Total Fitness, which reached too far, too fast). In fact, human nature is so predictable that a company like Planet Fitness can come along and offer memberships for just $10 a month and still not be overrun with people. It's found money.
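Just to see how found the money is, here's a toy model with numbers I made up (the capacity, visit rate, and peak-hour share are all guesses, not anyone's actual books):

```python
# Toy model of gym overselling, with invented numbers.
# The floor limits how many people can train at once; revenue scales
# with memberships sold; the gap is bridged by members who never show up.

CAPACITY = 300          # hypothetical: people the floor holds at once
PEAK_SHARE = 0.15       # hypothetical: share of a day's visits in the peak hour
VISITS_PER_MONTH = 1.0  # hypothetical: average visits per member per month
DAYS_PER_MONTH = 30
PRICE = 10              # dollars per month, Planet Fitness-style

# Daily visits the gym can absorb before the peak hour overflows:
daily_visits = CAPACITY / PEAK_SHARE  # 2,000 visits per day

# Memberships that volume of visits can support:
members = daily_visits * DAYS_PER_MONTH / VISITS_PER_MONTH  # 60,000

print(f"Supportable memberships: {members:,.0f}")
print(f"Monthly revenue: ${members * PRICE:,.0f}")  # $600,000 on a 300-person floor
```

On those made-up assumptions, a floor that fits 300 people at a time carries sixty thousand ten-dollar memberships, so long as almost nobody comes.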

If you're feeling particularly fitness motivated this month, maybe wait a month and see if the impulse passes along with the January prices.