10 browser tabs

1. Love in the Time of Robots

“Is it difficult to play with her?” the father asks. His daughter looks to him, then back at the android. Its mouth begins to open and close slightly, like a dying fish. He laughs. “Is she eating something?”
 
The girl does not respond. She is patient and obedient and listens closely. But something inside is telling her to resist. 
 
“Do you feel strange?” her father asks. Even he must admit that the robot is not entirely believable.
 
Eventually, after a few long minutes, the girl’s breathing grows heavier, and she announces, “I am so tired.” Then she bursts into tears.
 
That night, in a house in the suburbs, her father uploads the footage to his laptop for posterity. His name is Hiroshi Ishiguro, and he believes this is the first record of a modern-day android.
 

Reads like the treatment for a science fiction film, some mashup of Frankenstein, Pygmalion, and Narcissus. It's one incredible moment after another; I'll grab just a few excerpts, but the whole thing is worth reading.

But he now wants something more. Twice he has witnessed others have the opportunity, however confusing, to encounter their robot self, and he covets that experience. Besides, his daughter was too young, and the newscaster, though an adult, was, in his words, merely an “ordinary” person: Neither was able to analyze their android encounter like a trained scientist. A true researcher should have his own double. Flashing back to his previous life as a painter, Ishiguro thinks: This will be another form of self-portrait. He gives the project his initials: Geminoid HI. His mechanical twin.
 

Warren Ellis, in a recent commencement speech delivered at the University of Essex, said:

Nobody predicted how weird it’s gotten out here.  And I’m a science fiction writer telling you that.  And the other science fiction writers feel the same.  I know some people who specialized in near-future science fiction who’ve just thrown their hands up and gone off to write stories about dragons because nobody can keep up with how quickly everything’s going insane.  It’s always going to feel like being thrown in the deep end, but it’s not always this deep, and I’m sorry for that.
 

The thing is, far-future sci-fi is likely to be even more off base now, given how humans are evolving in lockstep with the technology around them. So we need more near-future sci-fi, of a variety smarter than Black Mirror, to grapple with the implications.

Soon his students begin comparing him to the Geminoid—“Oh, professor, you are getting old,” they tease—and Ishiguro finds little humor in it. A few years later, at 46, he has another cast of his face made, to reflect his aging, producing a second version of HI. But to repeat this process every few years would be costly and hard on his vanity. Instead, Ishiguro embraces the logical alternative: to alter his human form to match that of his copy. He opts for a range of cosmetic procedures—laser treatments and the injection of his own blood cells into his face. He also begins watching his diet and lifting weights; he loses about 20 pounds. “I decided not to get old anymore,” says Ishiguro, whose English is excellent but syntactically imperfect. “Always I am getting younger.”
 
Remaining twinned with his creation has become a compulsion. “Android has my identity,” he says. “I need to be identical with my android, otherwise I’m going to lose my identity.” I think back to another photo of his first double’s construction: Its robot skull, exposed, is a sickly yellow plastic shell with openings for glassy teeth and eyeballs. When I ask what he was thinking as he watched this replica of his own head being assembled, Ishiguro says, perhaps only half-joking, “I thought I might have this kind of skull if I removed my face.”
 
Now he points at me. “Why are you coming here? Because I have created my copy. The work is important; android is important. But you are not interested in myself.”
 

This should be made into a science fiction film, only I'm not sure who our great science fiction director is. The best candidates may be too old to want to look upon such a story as anything other than grotesque and horrific.

2. Something is wrong on the internet by James Bridle

Of course, some of what's on the internet really is grotesque and horrific. 

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level. 
 

Given how much my nieces love watching product unwrapping and Peppa Pig videos on YouTube, this story induced a sense of dread I haven't felt since the last good horror film I watched, which I can't remember anymore since the world has run a DDoS on my emotions.

We often think of a market operating at peak efficiency as sending information back and forth between supply and demand, allowing the creation of goods that satisfy both parties. In the tech industry, the wink-wink version of that is saying that pornography leads the market for any new technology, solving, as it does, the two problems the internet is said to solve better, at scale, than any medium before it: loneliness and boredom.

Bridle's piece, however, finds the dark cul-de-sacs and infected runaway processes which have branched out from the massive marketplace that is YouTube. I decided to follow a Peppa Pig video on the service and started tapping on Related Videos, as I imagine one of my nieces doing, and quickly wandered into a dark alleyway where I saw videos I would not want any of them watching. As Bridle did, I won't link to what I found; suffice it to say it won't take you long to stumble on some of it if you want, or perhaps even if you don't.

What's particularly disturbing is the somewhat bizarre, inexplicably grotesque nature of some of these video remixes. David Cronenberg is known for his body horror films; these YouTube videos are like some perverse variant of that, playing with popular children's iconography.

Facebook and now Twitter are taking heat for disseminating fake news, and that is certainly a problem worth debating, but with that problem we're talking about adults. Children don't have the capacity to comprehend what they're seeing, and given my belief in the greater effect of sight, sound, and motion, I am even more disturbed by this phenomenon.

A system where hosting videos for a global audience is free, where this type of trademark infringement weaponizes brand signifiers with seeming impunity, and where technology makes content production and remixing ever more scalable, allows for a type of problem we haven't seen before.

The internet has enabled all types of wonderful things at scale; we should not be surprised that it would foster the opposite. But we can, and should, be shocked.

3. FDA approves first blood sugar monitor without finger pricks

This is exciting. One view that seems to be common wisdom these days when it comes to health is that it's easier to lose weight and impact your health through diet than through exercise. But one of the problems of the feedback loop in diet (and exercise, actually) is how slow it is. You sneak a few snacks here and there walking by the company cafeteria every day, and a month later you hop on the scale and emit a bloodcurdling scream as you realize you've gained 8 pounds.

A friend of mine had gestational diabetes during one of her pregnancies and got a home blood glucose monitor. You had to prick your finger and draw blood to get your blood glucose reading, but curious, I tried it before and after a BBQ.

To see what various foods did to my blood sugar in near real-time was a real eye-opener. Imagine a future in which you could see what a few french fries and gummy bears did to your blood sugar, or in which the reading could be built into something like an Apple Watch, without having to draw blood each time. I don't mind the sight of blood, but I'd prefer not to turn my fingertips into war zones.

Faster feedback might transform dieting into something more akin to deliberate practice. Given that another popular theory of obesity is that it's an insulin phenomenon, tools like this, built for diabetes, might have a much broader mass-market impact.

4. Ingestible ketones

Ingestible ketones have been a recent sort of holy grail for endurance athletes, and now HVMN is bringing one to market. Ketogenic diets are all the rage right now, but for an endurance athlete, adapting to fuel oneself on ketones has always sounded like a long and miserable process.

The body generates ketones from fat when low on carbs or from fasting. The theory is that endurance athletes using ketones rather than glycogen from carbs require less oxygen and thus can work out longer.

I first heard about the possibility of exogenous ketones for athletes from Peter Attia. As he said then, perhaps the hardest thing about ingesting exogenous ketones is the horrible taste, which caused him to gag and nearly vomit in his kitchen. It doesn't sound like the taste problem has been solved.

Until we get the pill that renders exercise obsolete, however, I'm curious to give this a try. If you decide to pre-order, you can use my referral code to get $15 off.

5. We Are Nowhere Close to the Limits of Athletic Performance

By comparison, the potential improvements achievable by doping effort are relatively modest. In weightlifting, for example, Mike Israetel, a professor of exercise science at Temple University, has estimated that doping increases weightlifting scores by about 5 to 10 percent. Compare that to the progression in world record bench press weights: 361 pounds in 1898, 363 pounds in 1916, 500 pounds in 1953, 600 pounds in 1967, 667 pounds in 1984, and 730 pounds in 2015. Doping is enough to win any given competition, but it does not stand up against the long-term trend of improving performance that is driven, in part, by genetic outliers. As the population base of weightlifting competitors has increased, outliers further and further out on the tail of the distribution have appeared, driving up world records.
 
Similarly, Lance Armstrong’s drug-fuelled victory of the 1999 Tour de France gave him a margin of victory over second-place finisher Alex Zulle of 7 minutes, 37 seconds, or about 0.1 percent. That pales in comparison to the dramatic secular increase in speeds the Tour has seen over the past half century: Eddy Merckx won the 1971 tour, which was about the same distance as the 1999 tour, in a time 5 percent worse than Zulle’s. Certainly, some of this improvement is due to training methods and better equipment. But much of it is simply due to the sport’s ability to find competitors of ever more exceptional natural ability, further and further out along the tail of what’s possible.
 

In the Olympics, to take the most celebrated athletic competition, victors are honored with videos showing them swimming laps, tossing logs in the Siberian tundra, running through a Kenyan desert. We celebrate the work, the training. Good genes are given narrative short shrift. Perhaps we should show a picture of their DNA, just to give credit where much credit is due?

If I live a normal human lifespan, I expect to see special sports leagues and divisions created for athletes who've undergone genetic modification. It will be the return of the freak show at the circus, but this time for real. I've sat courtside and seen people like LeBron James, Giannis Antetokounmpo, Kevin Durant, and Joel Embiid walk by me. They are freaks, but genetic engineering might produce someone who stretches our definition of outlier.

In other words, it is highly unlikely that we have come anywhere close to maximum performance among all the 100 billion humans who have ever lived. (A completely random search process might require the production of something like a googol different individuals!)
 
But we should be able to accelerate this search greatly through engineering. After all, the agricultural breeding of animals like chickens and cows, which is a kind of directed selection, has easily produced animals that would have been one in a billion among the wild population. Selective breeding of corn plants for oil content of kernels has moved the population by 30 standard deviations in roughly just 100 generations. That feat is comparable to finding a maximal human type for a specific athletic event. But direct editing techniques like CRISPR could get us there even faster, producing Bolts beyond Bolt and Shaqs beyond Shaq.
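To make the quoted arithmetic concrete (my own back-of-the-envelope math, not the article's), a 30-standard-deviation shift spread over roughly 100 generations works out to

$$\frac{30\,\sigma}{100\ \text{generations}} \approx 0.3\,\sigma \text{ per generation},$$

a modest per-generation nudge that compounds into a population no random search of wild types would ever stumble upon.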
 

6. Let's set half a percent as the standard for statistical significance

My many-times-over coauthor Dan Benjamin is the lead author on a very interesting short paper "Redefine Statistical Significance." He gathered luminaries from many disciplines to jointly advocate a tightening of the standards for using the words "statistically significant" to results that have less than a half a percent probability of occurring by chance when nothing is really there, rather than all results that—on their face—have less than a 5% probability of occurring by chance. Results with more than a 1/2% probability of occurring by chance could only be called "statistically suggestive" at most. 
 
In my view, this is a marvelous idea. It could (a) help enormously and (b) can really happen. It can really happen because it is at heart a linguistic rule. Even if rigorously enforced, it just means that editors would force people in papers to say “statistically suggestive” for a p of a little less than .05, and only allow the phrase “statistically significant” in a paper if the p value is .005 or less. As a well-defined policy, it is nothing more than that. Everything else is general equilibrium effects.
 

Given the replication crisis has me doubting almost every piece of conventional wisdom I've inherited in my life, I'm okay with this.
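The proposed relabeling is mechanical enough to state in a few lines of code. Here's a minimal sketch (my own illustration of the proposed thresholds, not code from the paper):

```python
def significance_label(p_value: float) -> str:
    """Label a result under the proposed convention:
    p < 0.005          -> "statistically significant"
    0.005 <= p < 0.05  -> "statistically suggestive"
    p >= 0.05          -> neither
    """
    if p_value < 0.005:
        return "statistically significant"
    if p_value < 0.05:
        return "statistically suggestive"
    return "not significant"

# A result that clears the old 5% bar but not the new 0.5% one:
print(significance_label(0.03))    # statistically suggestive
print(significance_label(0.004))   # statistically significant
```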

7. We're surprisingly unaware of when our own beliefs change

If you read an article about a controversial issue, do you think you’d realise if it had changed your beliefs? No one knows your own mind like you do – it seems obvious that you would know if your beliefs had shifted. And yet a new paper in The Quarterly Journal of Experimental Psychology suggests that we actually have very poor “metacognitive awareness” of our own belief change, meaning that we will tend to underestimate how much we’ve been swayed by a convincing article.
 
The researchers Michael Wolfe and Todd Williams at Grand Valley State University said their findings could have implications for the public communication of science. “People may be less willing to meaningfully consider belief inconsistent material if they feel that their beliefs are unlikely to change as a consequence,” they wrote.
 

Beyond being an interesting result, I link to this as an example of a human-readable summary of a research paper. This is how the article summarizes the research study and its results:

The researchers recruited over two hundred undergrads across two studies and focused on their beliefs about whether the spanking/smacking of kids is an effective form of discipline. The researchers chose this topic deliberately in the hope the students would be mostly unaware of the relevant research literature, and that they would express a varied range of relatively uncommitted initial beliefs.
 
The students reported their initial beliefs about whether spanking is an effective way to discipline a child on a scale from “1” completely disbelieve to “9” completely believe. Several weeks later they were given one of two research-based texts to read: each was several pages long and either presented the arguments and data in favour of spanking or against spanking. After this, the students answered some questions to test their comprehension and memory of the text (these measures varied across the two studies). Then the students again scored their belief in whether spanking is effective or not (using the same 9-point scale as before). Finally, the researchers asked them to recall what their belief had been at the start of the study.
 
The students’ belief about spanking changed when they read a text that argued against their own initial position. Crucially, their memory of their initial belief was shifted in the direction of their new belief – in fact, their memory was closer to their current belief than their original belief. The more their belief had changed, the larger this memory bias tended to be, suggesting the students were relying on their current belief to deduce their initial belief. The memory bias was unrelated to the measures of how well they’d understood or recalled the text, suggesting these factors didn’t play a role in memory of initial belief or awareness of belief change.
 

Compare the summary above to the abstract of the paper itself:

When people change beliefs as a result of reading a text, are they aware of these changes? This question was examined for beliefs about spanking as an effective means of discipline. In two experiments, subjects reported beliefs about spanking effectiveness during a prescreening session. In a subsequent experimental session, subjects read a one-sided text that advocated a belief consistent or inconsistent position on the topic. After reading, subjects reported their current beliefs and attempted to recollect their initial beliefs. Subjects reading a belief inconsistent text were more likely to change their beliefs than those who read a belief consistent text. Recollections of initial beliefs tended to be biased in the direction of subjects’ current beliefs. In addition, the relationship between the belief consistency of the text read and accuracy of belief recollections was mediated by belief change. This belief memory bias was independent of on-line text processing and comprehension measures, and indicates poor metacognitive awareness of belief change.
 

That's actually one of the better research abstracts you'll read, and still it reflects the general opacity of the average research abstract. I'd argue that some of the most important knowledge in the world is locked behind abstruse abstracts.

Why do researchers write this way? Most tell me that researchers write for other researchers, and incomprehensible prose like this impresses their peers. What a tragedy. As my longtime readers know, I'm a firm believer in the power of the form of a message. We continue to underrate that in all aspects of life, from the corporate world to our personal lives, and here, in academia.

Then again, such poor writing keeps people like Malcolm Gladwell busy transforming such insight into breezy reads in The New Yorker and his bestselling books.

8. Social disappointment explains chimpanzees' behaviour in the inequity aversion task

As an example of the above phenomenon, this paper contains an interesting conclusion, but try to parse this abstract:

Chimpanzees’ refusal of less-preferred food when an experimenter has previously provided preferred food to a conspecific has been taken as evidence for a sense of fairness. Here, we present a novel hypothesis—the social disappointment hypothesis—according to which food refusals express chimpanzees' disappointment in the human experimenter for not rewarding them as well as they could have. We tested this hypothesis using a two-by-two design in which food was either distributed by an experimenter or a machine and with a partner present or absent. We found that chimpanzees were more likely to reject food when it was distributed by an experimenter rather than by a machine and that they were not more likely to do so when a partner was present. These results suggest that chimpanzees’ refusal of less-preferred food stems from social disappointment in the experimenter and not from a sense of fairness.
 

Your average grade school English teacher would slap a failing grade on this butchery of the English language.

9. Metacompetition: Competing Over the Game to be Played

When CDMA-based technologies took off in the US, companies like Qualcomm that work on that standard prospered; metacompetitions between standards decide the fates of the firms that adopt (or reject) those standards.

When an oil spill raises concerns about the environment, consumers favor businesses with good environmental records; metacompetitions between beliefs determine the criteria we use to evaluate whether a firm is “good.”

If a particular organic foods certification becomes important to consumers, companies with that certification are favored; metacompetitions between certifications determine how the quality of firms is measured.
 
In all these examples, you could be the very best at what you do, but lose in the metacompetition over what criteria will matter. On the other hand, you may win due to a metacompetition that protects you from fierce rivals who play a different game.
 
Great leaders pay attention to metacompetition. They advocate the game they play well, promoting criteria on which they measure up. By contrast, many failed leaders work hard at being the best at what they do, only to throw up their hands in dismay when they are not even allowed to compete. These losers cannot understand why they lost, but they have neglected a fundamental responsibility of leadership. It is not enough to play your game well. In every market in every country, alternative “logics” vie for prominence. Before you can win in competition, you must first win the metacompetition over the game being played.
 

In sports negotiations between owners and players, the owners almost always win the metacompetition game. In the Hollywood writers' strike of 2007, the writers' guild didn't realize it was losing the metacompetition and thus ended up worse off than before. Amazon surpassed eBay by winning the retail metacompetition (most consumers prefer paying a good, fixed price for an item of some predefined quality to dealing with the multiple axes of complexity of an auction) after first failing to tackle eBay on its direct turf of auctions.

Winning the metacompetition means first being aware of what it is. It's not so easy in a space like, say, social networking, where even some of the winners don't understand what game they're playing.

10. How to be a Stoic

Much of Epictetus’ advice is about not getting angry at slaves. At first, I thought I could skip those parts. But I soon realized that I had the same self-recriminatory and illogical thoughts in my interactions with small-business owners and service professionals. When a cabdriver lied about a route, or a shopkeeper shortchanged me, I felt that it was my fault, for speaking Turkish with an accent, or for being part of an élite. And, if I pretended not to notice these slights, wasn’t I proving that I really was a disengaged, privileged oppressor? Epictetus shook me from these thoughts with this simple exercise: “Starting with things of little value—a bit of spilled oil, a little stolen wine—repeat to yourself: ‘For such a small price, I buy tranquillity.’ ”
 
Born nearly two thousand years before Darwin and Freud, Epictetus seems to have anticipated a way out of their prisons. The sense of doom and delight that is programmed into the human body? It can be overridden by the mind. The eternal war between subconscious desires and the demands of civilization? It can be won. In the nineteen-fifties, the American psychotherapist Albert Ellis came up with an early form of cognitive-behavioral therapy, based largely on Epictetus’ claim that “it is not events that disturb people, it is their judgments concerning them.” If you practice Stoic philosophy long enough, Epictetus says, you stop being mistaken about what’s good even in your dreams.
 

The trendiness of stoicism has been around for quite some time now. I found this tab left over from 2016, and I'm sure Tim Ferriss was espousing it long before then, not to mention the enduring trend that is Buddhism. That meditation and stoicism are so popular in Silicon Valley may be a measure of the complacency of the region; these seem direct antidotes to the most first-world of problems. People everywhere complain of the stresses on their minds from the deluge of information they receive for free from apps on smartphones whose processing power would put the supercomputers of a previous era to shame.

Still, given that stoicism was in vogue in Roman times, it seems to have stood the test of time. Since social media seems to have increased the surface area of our social fabric and our exposure to said fabric, perhaps we could all use a bit more stoicism in our lives. I suspect one reason Curb Your Enthusiasm curdles in the mouth more than before is not just that Larry David's rich white man's complaints seem particularly ill timed in the current environment but that he is out of touch with the real nature of most people's psychological stressors now. A guy of his age and wealth probably doesn't spend much time on social media, but if he did, he might realize his grievances no longer match those of the average person in either pettiness or peculiarity.

Show don't tell

I suspect we do a better job teaching children than adults, and much of that has to do with trying harder to explain things visually, in the most intuitive, simple way possible, to children. As we grow older, we start stacking on level after level of abstraction, losing more and more students along the way.

Even language is an abstraction, and while I enjoy writing, the cliche that a picture is worth a thousand words describes a very real exchange rate. As someone I chatted with noted this week, we have an actual way of quantifying the relative value of video versus images versus words: the CPMs that advertisers are willing to pay for video ads versus display ads versus text ads. In my early years at Hulu, it was unbelievable how high and rock solid our video ad rates were compared to other ad formats on the market. All the recent pivots to video are surprising only for how late they're coming for many; trying to run a business off of text and display ad revenues is life with poverty unit economics.

This is not to say video is always better. As a format, it's harder to master, and like many people, I often roll my eyes when sent a link to a video without a transcript. It's not because I don't believe video is a more accessible, democratic, and moving medium. It's just that a lot of instructional video would be just as information-rich, and more quickly scanned for its key messages, if transcribed into text. Many a media site will struggle with pivoting to video unless they understand the format at the same level they do text and photos.

Video at its best is much more than a camera pointed at a person speaking. Now, granted, some speakers are immensely gifted orators, and so a TED talk may have more impact when watched rather than read. However, the average MOOC video, to take one example, is dull beyond words.

Video as a medium still has enormous potential, especially for education. In the trough of disillusionment for MOOCs, I expect we'll see something rise from the ashes that finally unlocks video's potential as a communications medium. We've done a solid job with that format as a narrative storytelling device, and that's partially because the revenue in Hollywood supports an immense talent development infrastructure. Education might be able to provide that level of financial incentive if global distribution through the internet allows for aggregation of larger-scale audiences.

One of the core challenges of education, as with disciplines like fitness and diet, is motivation. That is another area where video shines. David Foster Wallace warned of the addictive nature of video in Infinite Jest, and the fact that the average American still watches something like four to five hours of TV a day, despite the wealth of alternatives in this age, is an astonishing testament to the enduring pull of filmed entertainment.

As with anything, the seductive nature of moving images is merely a tool, inheriting its positive or negative valence from its uses. When it comes to teaching abstract concepts, I prefer good visuals over clear text almost every time if given the choice. Our brains are just wired for visual input in a way they aren't for abstractions like language, which explains many phenomena, like why memory champions translate numbers and letters into images, and why they remember long sequences like the digits of pi by placing such images into memory palaces, essentially visual hard drives.

One could try to explain the principles of potential and kinetic energy, for example, with a series of mathematical formulas, in a long essay. Or one could watch the following video.
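For what it's worth, the formulas in question fit on a single line (standard physics, nothing exotic):

$$U = mgh, \qquad K = \tfrac{1}{2}mv^2, \qquad U + K \approx \text{constant (ignoring friction)}$$

Accurate, compact, and utterly bloodless.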

Thanks to the Zubideak project, part of San Sebastián 2016, Mathieu Bleton, a dancer with the Yoann Bourgeois company of Grenoble, brings the Biennale de la Danse de Lyon to San Sebastián.

Here's the video of the full routine by Yoann Bourgeois, performed in San Sebastián. Just gorgeous.

Les Fugues, at the 2014 Échappée Belle festival in Blanquefort (33)

This is what I wish Cirque du Soleil would be every time someone drags me to one of their shows.

1 personal update and 10 browser tabs

I haven't spent much time on personal updates here the past several years, but on some topics it matters to know what my personal affiliations are, so I wanted to share that I've left Oculus as of mid-July. I'm not sure what's next yet, but I've enjoyed having some time to travel to see friends and family, catch up on reading, and get outside on my bike the past few weeks.

It's also been great to have the chance to connect with some of the smart people of the Bay Area, many of whom I'd never met before except online. Bouncing ideas around with some of the interesting thinkers here is something I wish I had done more of sooner, and I'm trying to make up for lost time. It has certainly helped me to refine my thinking about many topics, including my next venture, whatever that turns out to be.

Please ping me if you'd like to grab a coffee.

***

One of my goals during this break is to clear out a lot of things I've accumulated over the years. I donated six massive boxes of books to the local library the other week, and I've been running to a Goodwill dropoff center every few days. 

The other cruft I've accumulated is of a digital nature, mostly an embarrassing number of browser tabs, some of which have been open since before an orange president became the new black. Such digital cruft is no less a mental burden than its physical counterpart, so I'm going to start to zap them, ten at a time. I have to; my MacBook Pro can no longer handle the sheer volume of open tabs, and the fan is always on full throttle like a jet engine.

Here are the first ten to go.

1. The War Against Chinese Restaurants

Startlingly, however, there was once a national movement to eliminate Chinese restaurants, using innovative legal methods to drive them out. Chinese restaurants were objectionable for two reasons. First, they threatened white women, who were subject to seduction by Chinese men, through intrinsic female weakness, or employment of nefarious techniques such as opium addiction. In addition, Chinese restaurants competed with “American” restaurants, thus threatening the livelihoods of white owners, cooks and servers; unions were the driving force behind the movement. 


The effort was creative; Chicago used anti-Chinese zoning, Los Angeles restricted restaurant jobs to citizens, Boston authorities decreed Chinese restaurants would be denied licenses, the New York Police Department simply ordered whites out of Chinatown. Perhaps the most interesting technique was a law, endorsed by the American Federation of Labor for adoption in all jurisdictions, prohibiting white women from working in Asian restaurants. Most measures failed or were struck down. However, Asians still lost; the unions did not eliminate Chinese restaurants, but they achieved their more important goal, extending the federal policy of racial exclusion in immigration from Chinese to all Asians. The campaign is of more than historical interest. As current anti-immigration sentiments and efforts show, even today the idea that white Americans should have a privileged place in the economy, or that non-whites are culturally incongruous, persists among some.
 

The core of the story of America is its deep-seated struggle with race, not surprising for a country founded on the ideal of the equality of all even as it failed to live up to that ideal in its founding moment. That continual grasping at resolving that paradox and hypocrisy is at the heart of what makes the U.S. the most fascinating social experiment in the world, and one reason I struggle to imagine living anywhere else right now.

2. Two related pieces: The revolt of the public and the “age of post-truth” and In Defense of Hierarchy 

From the former:

A complex society can’t dispense with elites.  That is the hard reality of our condition, and it involves much more than a demand for scarce technical skills.  In all human history, across continents and cultures, the way to get things done has been command and control within a formal hierarchy.  The pyramid can be made flatter or steeper, and an informal network is invariably overlaid on it:  but the structural necessity holds.  Only a tiny minority can be bishops of the church.  This may seem trivially apparent when it comes to running a government or managing a corporation, but it applies with equal strength to the dispensation of truth.
 
So here is the heart of the matter.  The sociopolitical disorders that torment our moment in history, including the fragmentation of truth into “post-truth,” flow primarily from a failure of legitimacy, of the bond of trust between rulers and ruled.  Everything begins with the public’s conviction that elites have lost their authorizing magic.  Those at the top have forsaken their function yet cling, illicitly, to their privileged perches.  Only in this context do we come to questions of equality or democracy.
 
If my analysis is correct, the re-formation of the system, and the recovery of truth, must depend on the emergence of a legitimate elite class.
 

From the latter:

To protect against abuse by those with higher status, hierarchies should also be domain-specific: hierarchies become problematic when they become generalised, so that people who have power, authority or respect in one domain command it in others too. Most obviously, we see this when holders of political power wield disproportionate legal power, being if not completely above the law then at least subject to less legal accountability than ordinary citizens. Hence, we need to guard against what we might call hierarchical drift: the extension of power from a specific, legitimate domain to other, illegitimate ones. 
 
This hierarchical drift occurs not only in politics, but in other complex human arenas. It’s tempting to think that the best people to make decisions are experts. But the complexity of most real-world problems means that this would often be a mistake. With complicated issues, general-purpose competences such as open-mindedness and, especially, reasonableness are essential for successful deliberation.
 
Expertise can actually get in the way of these competences. Because there is a trade-off between width and depth of expertise, the greater the expert, the narrower the area of competence. Hence the best role for experts is often not as decision-makers, but as external resources to be consulted by a panel of non-specialist generalists selected for general-purpose competences. These generalists should interrogate the experts and integrate their answers from a range of specialised aspects into a coherent decision. So, for example, parole boards cannot defer to one type of expert but must draw on the expertise of psychologists, social workers, prison guards, those who know the community into which a specific prisoner might be released, and so on. This is a kind of collective, democratic decision-making that makes use of hierarchies of expertise without slavishly deferring to them.  
 

What would constitute a new legitimate elite class? It's a mystery, and a grave one. When truth is largely socially and politically constructed, it weighs nothing. The whole psychology replication crisis couldn't have hit at a worse time. With the internet, you can Google and find a study to back up just about any of your views, yet it's not clear which of the studies are actually sound.

At the same time, we can't all be expected to be experts on everything, even if, with the internet, everyone pretends to be.

3. Why Men Don't Live As Long As Women

Evidence points to testosterone, which is useful for mating but costly in many other ways. I maintain there is nothing more frightening in the world than a bunch of single young men full of testosterone.

This does not mean, however, that men cannot evolve other reproductive strategies. Despite their propensity to engage in risky behavior and exhibit expensive, life-shortening physical traits, men have evolved an alternative form of reproductive effort in the form of paternal investment—something very rare in primates (and mammals in general). For paternal investment to evolve, males have to make sure they are around to take care of their offspring. Risky behavior and expensive tissue have to take a backseat to investment that reflects better health and perhaps prolongs lifespan. Indeed, men can exhibit declines in testosterone and put on a bit of weight when they become fathers and engage in paternal care. Perhaps, then, fatherhood is good for health.
 

Perhaps we should be extolling the virtuous signal that is dadbod to a much greater degree than we have. And, on the flip side, we should look with a skeptical eye on fathers with chiseled abs. How does one get a six-pack from attending imaginary tea parties with one's daughter for hours on end?

4. Increasing consumer well-being: risk as potential driver of happiness

We show that, even if, ex ante, consumers fear high risk and do not associate it to a high level of happiness, their ex post evaluation of well-being is generally higher when identical consequences result from a high-risk situation than from a low-risk situation. Control over risk-taking reinforces the gap between ex ante and ex post measures of happiness. Thus, our article provides empirical evidence about a positive relation between risk and individual well-being, suggesting that risky experiences have the potential to increase consumer well-being.
 

While I'm not certain what I'm going to do next, I would like to increase my risk profile. It seems a shame not to when I'm fortunate enough to live with very little downside risk.

5. “When the student is ready the teacher will appear. When the student is truly ready, the teacher will disappear.”  —  Lao Tzu

There is some debate over the provenance of this quote, but I have rarely seen the second sentence quoted; the first part is the one that has had the more enduring life, and for good reason. It's a rhetorical gem.

The second half is underrated. The best coaches know when stepping aside and pushing the student to new challenges is the only path to greater heights. Rather than becoming some Girardian rival like Bill Murray in Rushmore, the best teachers disappear. In the case of Yoda in Return of the Jedi, he literally disappears, though not until giving Luke his next homework assignment: to face Darth Vader.

6. The third wave of globalisation may be the hardest

First we enabled the movement of goods across borders. Then the internet unleashed the movement of ideas. Free movement of people, though? Recent nationalist backlashes aren't a promising sign. Maybe it will happen in its fullest online, maybe in virtual reality.

I am pro-immigration; my life is in so many ways the result of my parents coming to America in college. For decades, the United States has had essentially first pick of the world's hungriest, most talented dreamers, like a sports team that gets to pick at the top of the draft year after year despite winning the championship the year before. Trust the process, as Sam Hinkie might say.

On the other hand, taking off my American goggles, the diversity in the world's cultures, political and social systems, and ideologies is a source of global health. It feels like everyone should be encouraged (and supported) to spend a year abroad before, during, or after college, prior to entering the working world, just to understand how much socially acquired knowledge is path-dependent and essentially arbitrary.

7. Tyler Cowen's Reddit AMA

Q: What is the most underrated city in the US? In the world?

TylerCowen: Los Angeles is my favorite city in the whole world, just love driving around it, seeing the scenery, eating there. I still miss living in the area.

I don't know if I have a favorite city in the world, but I'd agree Los Angeles is the most underrated city in the U.S. considering how many people spit on its very mention. Best dining destination of the major U.S. cities.

8. What are the hardest and easiest languages to learn?

Language Log offers a concise scale as a shorthand answer.

EASY
1. Mandarin (spoken)
2. Nepali
3. Russian
4. Japanese
5. Sanskrit
6. Chinese (written)
HARD
 

9. An under-mentioned side effect of global warming

Time was, the cold and remoteness of the far north kept its freezer door closed to a lot of contagion. Now the north is neither so cold nor so remote. About four million people live in the circumpolar north, sometimes in sizable cities (Murmansk and ­Norilsk, Russia; Tromso, Norway). Oil rigs drill. Tourist ships cruise the Northwest Passage. And as new animals and pathogens arrive and thrive in the warmer, more crowded north, some human sickness is on the rise, too. Sweden saw a record number of tick-borne encephalitis cases in 2011, and again in 2012, as roe deer expanded their range northward with ticks in tow. Researchers think the virus the ticks carry may increase its concentrations in warmer weather. The bacterium Francisella tularensis, which at its worst is so lethal that both the U.S. and the USSR weaponized it during the Cold War, is also on the increase in Sweden. Spread by mosquitoes there, the milder form can cause months of flu-like symptoms. Last summer in Russia’s far north, anthrax reportedly killed a grandmother and a boy after melting permafrost released spores from epidemic-killed deer that had been buried for decades in the once frozen ground.
 

Because we don't already have enough sobering news in the world.

10. Why do our musical tastes ossify in our twenties?

It’s simply not realistic to expect someone to respond to music with such life-defining fervour more than once. And it’s not realistic, either, to expect someone comfortable with his personality to be flailing about for new sensibilities to adopt. I’ve always been somewhat suspicious of those who truly do, as the overused phrase has it, listen to everything. Such schizophrenic tastes seem not so much a symptom of well-roundedness as of an unstable sense of self. Liking everything means loving nothing. If you’re so quick to adopt new sentiments and their expression, then how serious were you about the ones you pushed aside to accommodate them?
 
Oh yeah, and one more thing: music today fucking sucks.
 

I still pursue new music, despite being past my twenties, driven mostly, I suspect, by a hunger for novelty that still seems to be kicking. At some point, I can't really recall when, the signaling function of my musical tastes lost most of its value. Once most of your friends have kids, you can seem cultured merely by having seen a movie released in the last year.

Compress to impress

One of the funniest and most implausible things in movies is the grand speech by the general, usually the film's protagonist, in front of thousands of soldiers in the moments just before a critical battle. Examples abound, and the punch lines lodge in the memory, from Henry V ("We band of brothers") to Braveheart ("They will never take away...our freedom!") to Lord of the Rings: Return of the King ("There may come a day...but today is not that day!"). 

[Does this actually happen in real life? Did generals ride back and forth before the start of battles in the Civil War and give motivational speeches? I'm genuinely curious.]

The reason these scenes always strike me as absurd is that the character giving the speech is never using a megaphone or a microphone. The speech is almost always given outdoors, in the open air, so his voice carries for a radius of, what, thirty or forty feet? I imagine a soldier standing in the last row of the army about a mile away from the front lines bugging everyone around him, "What did he say? Can anyone hear?" and being shushed by everyone. Maybe only the first row or two of soldiers needs to hear the motivational speech because they're the first to run into a hail of bullets and arrows?

Even with modern communication infrastructure, however, any modern CEO deals with amplification and distortion issues with any message. Humans learn about this problem very early on by playing telephone or operator, or what I just learned is more canonically known outside the U.S. as Chinese whispers. One person whispers a message in another person's ear, and it's passed on down the line to see if the original phrase can survive intact to the last person in the chain. Generally, errors accumulate along the way and what makes it to the end is some shockingly defective copy of the original.

Despite learning this lesson early on, most people in leadership positions still underestimate just how pervasive this problem is. This is why any manager or executive is familiar with how much time they spend on communicating the same things to different groups in the organization. It feels like it's all you do sometimes, and yet you still encounter people who feel like they're in the dark.

I hadn't read Jeff Bezos' most recent letter to shareholders until today, but it was just what I'd expect of it given something I observed in my seven years there, which are now more than a decade in the rearview mirror. In fact, one of the reasons I hadn't read it yet was that I suspected it would be very familiar, and it was. The other thing I suspected was that it would be really concise and memorable, and again, it was.

I suspect that very early on in his career as CEO, Jeff noticed the Chinese whispers problem as the company scaled. Anyone lucky enough to lead a successful company very quickly senses the impossibility of scaling one's own time to all corners of the organization, but Jeff was laser-focused on the more serious problem this presented, that of maintaining consistent strategy in all important decisions, many of which were made outside his purview each day. At scale, maintaining strategic alignment feels like an organizational design problem, but much of the impact of organizational design centers on how it shapes information flow.

This problem is made more vexing not just by the telephone game issue but by humans' inability to carry around a whole lot of directives in their heads. Jeff could spend a ton of time in All Hands meetings or with his direct reports and other groups inside Amazon, explaining his thinking in excruciating detail and hoping it sank in, but then he'd never have any time to do anything else.

Thankfully, humans have developed ways to ensure the integrity of messages persists across time when transmitted through the lossy mediums of oral tradition and hierarchical organizations.

One of these is to encode your message in a very distinctive format. There are many rhetorical tricks that have stood the test of time, like alliteration or anadiplosis. Perhaps supreme among these rhetorical forms is verse, especially when it rhymes. Both the rhythm and the rhyme (alliteration intentional) allow humans to compress and recall a message with greater accuracy than prose.

Fe fi fo fum, I smell the blood of an Englishman.
 

It's thought that bards of old could recite epics like Homer's Odyssey entirely from memory because the stories were in verse form (and through the use of memorization tricks like memory palaces and visual encoding). I don't know many people who can recite any novels from memory, but I've occasionally run across someone who can recite a long poem by heart. That's the power of verse.

It might be impossible to recite The Great Gatsby by memory regardless of what heuristics you employed, but it would certainly be easier if it were written by Dr. Seuss.

I do not like them,
Sam-I-am.
I do not like
Green eggs and ham.
 

I never chatted with Bezos about this, so I don't know if it was an explicit strategy on his part, but one of his great strengths as a communicator was the ability to encode the most important strategies for Amazon in very concise and memorable forms.

Take one example, "Day 1." I don't know when he first said this to the company, but it was repeated endlessly all my years at Amazon. It's still Day 1. Jeff has even named one of the Amazon buildings Day 1. In fact, I bet most of my readers know what Day 1 means, and Jeff doesn't even bother explaining what Day 1 is at the start of his letter to shareholders, so familiar is it to all followers of the company. Instead, he just jumps straight into talking about how to fend off Day 2, which he doesn't even need to define because we all can probably infer it from the structure of his formulation, but he does so anyway.

Day 2 is stasis. Followed by irrelevance. Followed by excruciating, painful decline. Followed by death. And that is why it is always Day 1.
 

An entire philosophy, packed with ideas, compressed into two words. Day 1.

He then jumps into some of the strategies to fend off Day 2. The first is also familiar to everyone at Amazon, and many outside Amazon: customer obsession. Plenty of companies say they are customer focused, but Jeff articulates why he chose it from among the many possibilities he could have chosen for the company, giving it a level of oppositional definition that would otherwise be lacking.

There are many ways to center a business. You can be competitor focused, you can be product focused, you can be technology focused, you can be business model focused, and there are more. But in my view, obsessive customer focus is by far the most protective of Day 1 vitality.
 
Why? There are many advantages to a customer-centric approach, but here’s the big one: customers are always beautifully, wonderfully dissatisfied, even when they report being happy and business is great. Even when they don’t yet know it, customers want something better, and your desire to delight customers will drive you to invent on their behalf. No customer ever asked Amazon to create the Prime membership program, but it sure turns out they wanted it, and I could give you many such examples.
 

The second strategy to ward off stagnation is a newer codification (at least to me) of a principle he hammered home in other ways when I was there: resist proxies. 

As companies get larger and more complex, there’s a tendency to manage to proxies. This comes in many shapes and sizes, and it’s dangerous, subtle, and very Day 2.
 
A common example is process as proxy. Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing the process right. Gulp. It’s not that rare to hear a junior leader defend a bad outcome with something like, “Well, we followed the process.” A more experienced leader will use it as an opportunity to investigate and improve the process. The process is not the thing. It’s always worth asking, do we own the process or does the process own us? In a Day 2 company, you might find it’s the second.
 

There are many ways one could have named this principle, but this one is just novel and pithy enough to be distinctive, and from now on I'll likely refer to this principle as he formulated it: resist proxies.

The next principle is the one that needs the most work: embrace external trends. Doesn't really roll off the tongue or lodge in the memory. This is also universal enough an idea that someone has likely already come up with some exceptional aphorism; some of you may have one on the tip of your tongue. It may be that it's just too generic to be worth the effort to stake a claim to with a unique turn of phrase.

The last principle I also remember from my Amazon days: high-velocity decision making (inside of it is another popular business aphorism: disagree and commit). This could be named "ready fire aim" or "if you don't commit, you've basically quit" or "if you don't really know, just pick and go" or something like that, but "high-velocity" is distinctive in its own sense. It's an adjective that sounds more at home in physics or in describing some sort of ammunition than it does in a corporate environment, and that helps an otherwise simple principle stand out.

Go back even further, and there are dozens of examples of Bezos codifying key ideas for maximum recall. For example, every year I was at Amazon had a theme (reminiscent of how David Foster Wallace imagined in Infinite Jest that in the future corporate sponsors could buy the rights to name years). These themes were concise and memorable ways to help everyone remember the most important goal of the company that year.

One year, when our primary goal was to grow our revenue and order volume as quickly as possible to achieve the economies of scale that would capitalize on our high fixed cost infrastructure investments and put wind into our flywheel, the theme was "Get Big Fast Baby." You can argue whether the "baby" at the end was necessary, but I think it's more memorable with it than without. Much easier to remember than "Grow revenues 80%" or "achieve economies of scale" or something like that.

Another time, as we looked out and saw the $1B revenue milestone approaching, one of Jeff's chief concerns was whether our company's processes could scale to handle that volume of orders without breaking (I'll write another time about the $1B revenue scaling phenomenon). To head off any such stumbles, we set aside an entire year at the company for GOHIO. It stood for "Getting our house in order".

As the first analyst in the strategic planning group, I produced an order volume projection for $1B in revenue and also generated forecasts for other metrics of relevance for every group in the company. For example, the customer service department would have to handle a higher volume of customer contacts, and the website would have to handle a greater traffic load.

Every group had that year of GOHIO to figure out how to scale to handle that volume without just linearly scaling its headcount and/or spending. If every group were just growing their headcount and costs linearly with order volume, our business model wouldn't work. The exercise was intended to find those processes that would break at such theoretical load and begin the work of finding where the economies of scale lay. An example was building customer self-service mechanisms to offload the most common customer service inquiries like printing return labels.

I could continue on through the years, but what stands out is that I can recite these from memory even now, over a decade later, and so could probably everyone who worked at Amazon those years.

Here's a good test of how strategically aligned a company is. Walk up to anyone in the company in the hallway and ask them if they know what their current top priority or mission is. Can they recite it from memory?

What Jeff understood was the power of rhetoric. Time spent coming up with the right words to package a key concept in a memorable way was time well spent. People fret about what others say about them when they're not in the room, but Jeff was solving the issue of getting people to say what he'd say when he wasn't in the room.

It was so important to him that we even had company-wide contests to come up with the most memorable ways to name our annual themes. One year Jeff announced at an All Hands meeting that someone I knew, Barnaby Dorfman, had won the contest. Jeff said the prize was that he'd buy something off the winner's Amazon wish list, but after pulling Barnaby's wish list up in front of the whole company on the screen, he said he didn't think any of the items was good enough so instead he went over to the product page for image stabilized binoculars from Canon, retailing for over $1000, and bought those instead.

I have a list of dozens of Jeff sayings filed away in memory, and I'm not alone. It's one reason he's one of the world's most effective CEOs. What's particularly impressive is that Jeff is so brilliant that it would be easy for him to state his thinking in complex ways that we mere mortals wouldn't grok. But true genius is stating the complex simply.

Ironically, Jeff employs the reverse of this for his own information inflows. It's well known that he banned PowerPoint at Amazon because he was increasingly frustrated with the lossy nature of that medium. As Edward Tufte has long railed, PowerPoint encourages people to reduce their thinking to a series of bullet points. Whenever someone stood up to present, Jeff would have rifled through to the end of the deck before the presenter had finished a handful of slides, and he'd jump in with questions about slide 35 while the presenter was still talking through slide 3.

As a hyper-intelligent person, Jeff didn't want lossy compression or lazy thinking; he wanted the raw feed in a structured form. So we all shifted to writing our arguments out as essays that he'd read silently in meetings. Written language is a lossy format too, but it has the advantage of being less forgiving of broken logic flows than slide decks.

To summarize, Jeff's outbound feedback was carefully encoded and compressed for maximum fidelity of transmission across hundreds of thousands of employees all over the world, but his inbound data feed was raw and minimally compressed. In structure, this pattern resembles what a great designer or photographer does: find the most elegant and stable output from a complex universe of inputs.

One of the great advantages of identifying and codifying first principles is how little maintenance they need. Write once, remember forever. As testament to that, every year Bezos ends his Letter to Shareholders the same way.

As always, I attach a copy of our original 1997 letter. It remains Day 1.
 

It's his annual mic drop. Shareholders must feel so secure with their Amazon shares. Bezos is basically saying he figured out some enduring principles when he started his company, and they're so universal and stable that he doesn't have much to add some twenty years later except to point people back at his first letter to shareholders.

Other CEOs and leaders I've encountered are gifted at this as well ("Lean in," "Yes we can," "Move fast and break things," "Innovation is saying no to a thousand things," "Just do it," "I have a dream"), but I gravitate to those from Jeff because I saw them arise from distinct needs in the moment, not just for notoriety's sake. As such, it's a strategy applicable to more than just philosophers and CEOs. [Sometime I'll write about the communication strategies of Steve Jobs, many of which can be gleaned from his public keynotes. He was an extremely skilled and canny communicator, and in many ways an underrated one.]

Tyler Cowen named his latest book The Complacent Class. It's a really thought-provoking read, but the alliteration in the title helps. Now economists everywhere are referring to a broad set of phenomena by the term "complacent class." It wouldn't be nearly as memorable if called Complacent People or The Dangers of Self-Satisfaction. Can you name the subtitle of the book? It's "the self-defeating quest for the American Dream" but no one remembers that part.

Venkatesh Rao once wrote a memorable post about management principles encoded in the American version of the TV show The Office. Anyone familiar with the post probably remembers it by the first part of its title: "The Gervais Principle." Very few, I'd suspect, remember the rest of the title—"Or The Office according to The Office"—though it does employ a clever bit of word repetition.

Whatever you think of Hillary Clinton as compared to Donald Trump as Presidential candidates, I'd venture that more people can recite Trump's mantra—Make America Great Again—than Clinton's. I don't know if she had a slogan, or if she did, I don't remember what it was. Her most memorable turn of phrase from the campaign trail was probably "then deal me in," at the end of a much longer phrase: "if fighting for women's healthcare and paid family leave and equal pay is playing the woman card, then deal me in." It's difficult to think of a phrase more emblematic of her problems in articulating what she stood for. The first half of the sentence is long and wonky (I couldn't recall it from memory), and she never followed up enough on the second half.

If she'd used it repeatedly in a speech, it could have been a form of epistrophe like Obama's "Yes we can" or Martin Luther King's "I have a dream." Imagine if she had an entire speech where she kept hammering on what other cards she wanted to deal. "If ensuring that everyone in the country has an equal opportunity to reasonable healthcare is playing the [?] card then deal me in. If ensuring everyone in this country has the right to a good education is playing the [?] card then deal me in." And so on. But she would only use it once in a while, or once in a speech, whereas Obama had entire speeches where he would circle back to "Yes we can" again and again. [Maybe there isn't an equivalent to "woman card" that makes this epistrophe scalable but the broad point about her weak use of rhetoric holds.]

That's not to say "Make America Great Again" is some slogan for the ages, but it is succinct and has a loose bit of trochaic meter (MAKE ah-MERIC-uh GREAT a-GAIN) that grants it the sense of emphatic energy all political movements need. His supporters compressed it into #MAGA, which became a more liquid shorthand for social media. In general, the populist backlash and the alt-right seem stronger at such rhetorical tricks than the Democrats or the left, but perhaps that is bred of necessity from being the opposition party?

Rhetoric can get a bad name because some lump it in with other language tricks like those used in clickbait titles. "You won't believe what happened next" or "This will restore your faith in humanity" or "ten signs you're a Samantha." Those aren't ways for making something stick, those are ways for making someone click. [Quiz: what rhetorical techniques were used in that last sentence?] Rhetoric isn't inherently good or bad; it can be used for ideas both inspiring and appalling.

There will come a day when you'll come up with some brilliant theory or concept and want it to spread and stick. You want to lay claim to that idea. It's then that you'll want to set aside some time to state it distinctively, even if you're not a gifted rhetorician. A memorable turn of phrase need not incorporate sophisticated techniques like parataxis or polysyndeton. Most everyone in tech is familiar with Marc Andreessen's "software is eating the world" and Stewart Brand's "information wants to be free." Often mere novelty is enough to elevate the mundane. You've spent all that time cooking your idea, why not spend an extra few moments plating it? It all tastes the same in your mouth but one dish will live on forever in an Instagram humblebrag pic.

If you're stuck and need some help, I highly recommend the delightful book The Elements of Eloquence: Secrets of the Perfect Turn of Phrase, whose title I remembered as simply Eloquence, which might, come to think of it, be the more memorable title.

Tower of Babel

It's been a long time since I've written, and I'm out of shape. Let's go long for this one, to make up for lost time.

This first real post of 2017 has to acknowledge the year that just concluded, which still lingers in the mind like an unwelcome houseguest who vomited on the carpet and is still passed out on the living room sofa the next morning.

To begin, let's pull out two responses to the 2017 edition of the Edge annual question, which asks, "What scientific term or concept ought to be more widely known?"

The first response is from Eric Weinstein, who nominates Russell Conjugation, a term with which I'm indeed unfamiliar, and one which plays into my weakness for fascinating concepts with cryptic names.

Russell Conjugation (or “emotive conjugation”) is a presently obscure construction from linguistics, psychology and rhetoric which demonstrates how our rational minds are shielded from understanding the junior role factual information generally plays relative to empathy in our formation of opinions. I frequently suggest it as perhaps the most important idea with which almost no one seems to be familiar, as it showed me just how easily my opinions could be manipulated without any need to falsify facts.
 
...
 
The basic principle of Russell Conjugation is that the human mind is constantly looking ahead well beyond what is true or false to ask “What is the social consequence of accepting the facts as they are?”  While this line of thinking is obviously self-serving, we are descended from social creatures who could not safely form opinions around pure facts so much as around how those facts are presented to us by those we ape, trust or fear. Thus, as listeners and readers our minds generally mirror the emotional state of the source, while in our roles as authoritative narrators presenting the facts, we maintain an arsenal of language to subliminally instruct our listeners and readers on how we expect them to color their perceptions. Russell discussed this by putting three such presentations of a common underlying fact in the form in which a verb is typically conjugated:
 
I am firm. [Positive empathy]
You are obstinate. [Neutral to mildly negative empathy]
He/She/It is pigheaded.  [Very negative empathy]
 
In all three cases, Russell was describing people who did not readily change their minds. Yet by putting these descriptions so close together and without further factual information to separate the individual cases, we were forced to confront the fact that most of us feel positively towards the steadfast narrator and negatively towards the pigheaded fool, all without any basis in fact.
 

The next concept in the recipe is coalitional instincts, nominated by John Tooby. Forming coalitions was one of the skills that elevated Homo sapiens to the top of the animal kingdom.

Every human—not excepting scientists—bears the whole stamp of the human condition. This includes evolved neural programs specialized for navigating the world of coalitions—teams, not groups. (Although the concept of coalitional instincts has emerged over recent decades, there is no mutually agreed term for this concept yet.) These programs enable us and induce us to form, maintain, join, support, recognize, defend, defect from, factionalize, exploit, resist, subordinate, distrust, dislike, oppose, and attack coalitions. Coalitions are sets of individuals interpreted by their members and/or by others as sharing a common abstract identity (including propensities to act as a unit, to defend joint interests, and to have shared mental states and other properties of a single human agent, such as status and prerogatives).  
 
Why do we see the world this way? Most species do not and cannot: Even those that have linear hierarchies do not: Among elephant seals, for example, an alpha can reproductively exclude other males, even though beta and gamma are physically capable of beating alpha—if only they could cognitively coordinate. The fitness payoff is enormous for solving the thorny array of cognitive and motivational computational problems inherent in acting in groups: Two can beat one, three can beat two, and so on, propelling an arms race of numbers, effective mobilization, coordination, and cohesion.
 

As with so many things in life, a source of strength is also the root of weakness. They can be a great people, Kal-El, but not if they keep ganging up on each other for no reason other than to feel the psychological comforts of being in an in-group.

This raises a problem for scientists: Coalition-mindedness makes everyone, including scientists, far stupider in coalitional collectivities than as individuals. Paradoxically, a political party united by supernatural beliefs can revise its beliefs about economics or climate without revisers being bad coalition members. But people whose coalitional membership is constituted by their shared adherence to “rational,” scientific propositions have a problem when—as is generally the case—new information arises which requires belief revision. To question or disagree with coalitional precepts, even for rational reasons, makes one a bad and immoral coalition member—at risk of losing job offers, her friends, and her cherished group identity. This freezes belief revision.  
 
Forming coalitions around scientific or factual questions is disastrous, because it pits our urge for scientific truth-seeking against the nearly insuperable human appetite to be a good coalition member. Once scientific propositions are moralized, the scientific process is wounded, often fatally.  No one is behaving either ethically or scientifically who does not make the best-case possible for rival theories with which one disagrees. 
 

You can see where I'm headed with all of this, especially if you're still staggering out of 2016 like a boxer regaining consciousness after being knocked out. "What happened?" you can read on every fighter's lips as they open their eyes and blink at the concerned faces looking down at them on the canvas. "What happened?!" asked a dazed populace after Election Day 2016.

Our next POTUS, whether wittingly or not, weaponized Russell Conjugation and our coalitional instincts and performed a jiu-jitsu toss on his opponents, leaving much of the country wondering whether winner-take-all elections for the Presidency are a good thing in a country so evenly divided.

It seems darkly appropriate, in this Internet age, that a troll now occupies arguably the most powerful seat in the world. It's a miracle of the worst kind that someone can openly flout nearly every convention of human decency and civility, not to mention statecraft, and walk about untouched. Something's rotten in the state of Denmark all right, but no play is needed to catch the conscience of this king. It's uncanny how every tweet of his sifts the same factions apart: one side screams in disgust and disbelief, the other piles on with glee. Let's call that technique of tweeting the Trump Conjugation. Sad!

Many claim the Internet creates filter bubbles, but I believe the mechanism by which the Internet amplifies tribalism doesn't work the way most people describe it. The common explanation is that we form networks with like-minded people and only hear the opinions of those who agree with us, reinforcing our narrow world views.

My belief is that the Internet has increased our exposure to diverse viewpoints, including those from oppositional tribes. I suspect everyone who uses the Internet regularly encounters more diverse opinions, in absolute terms, than prior to the rise of the Internet, and there is research (PDF) to support this. Our information diet is more diverse now, and as opposed to the age before social media or even the Internet itself, we're exposed to more opinions that both strongly confirm AND counter our beliefs.

This shouldn't be so surprising or counterintuitive. Instant access to vast amounts of information is what the Internet does better than any medium in history.

In fact, ask many and they'll admit they yearn for the more peaceful age before they were made aware of how those in other tribes thought. The sliding scale of horror starts, relatively harmlessly, with a Facebook post from a friend you didn't realize was a staunch Republican, or an email forward from an uncle still subscribing to a casual racism acceptable in an earlier generation when his views hardened. Maybe it's a segment from Fox News which is, yes, only seen because it's excerpted on The Daily Show, but seen all the same. How can people think this way?

Move up one notch on the scale of horrors and you might find the vitriol in online comment threads attached to articles and op-eds, which one sometimes scans out of some misplaced optimism and which stuns you with the sudden violence of a drive-by shooting. Pull on that thread of toxicity even further and you may end up encountering direct harassment on services like Twitter. In the pre-Internet age, I don't recall ever having such fine-grained resolution on the opinions of the opposition. What many call a filter bubble might just be a psychic defensive shield.

The pre-Internet age actually felt much more like a filter bubble, one in which we had a comforting if illusory feeling of kinship with our fellow citizens. Many signal their cosmopolitanism by decrying life in the filter bubble, but what few of those admit is that life outside the filter bubble is a brutal wasteland (minus the poetic language of a Cormac McCarthy, who might be the only one to stare into the abyss of 4chan and find in there a new bogeyman to rival Anton Chigurh for pure nihilism).

[Many disagree and still hold resolutely to the thesis that the Internet has cocooned us in filter bubbles, and I'm open to that argument if supporters bring data to the table rather than just their own anecdotal impressions. If there was ever a year that should have made us all suspicious of our feelings about what was happening, 2016 was it. Anecdotal journalism can be marshaled to support almost any thesis, and that is its fundamental weakness.]

What should terrify us, and what may be the real and deeper problem, is how we reacted to this explosion in diverse thought and information which the Internet unlocked. The Utopian dream was that we'd rethink our hypotheses in the face of all the new ideas and conduct rational debates like civilized adults. A more informed populace would be a wiser one.

Instead, we've regressed, forming teams and grabbing stones to hurl at each other like primates. Welcome to the jungle. 2016 felt like a Hobbesian anarchy of ideological warfare, and it has turned Twitter into a bar where drunken brawls break out every few minutes. It's a downward spiral of almost insufferable negativity from which Twitter may never recover, exacerbated by a 140-character limit, one side effect of which is that even reasonable people sound smug. The English language is capable of nuance, but not often in 140 characters, another reason Twitter's absolute refusal to update that outdated rule is so short-sighted.

In contemplating 2016, I went back and reread the myth of the Tower of Babel. Here's the story, as excerpted on Wikipedia:

1. Now the whole world had one language and a common speech. As people moved eastward, they found a plain in Shinar and settled there.
 
2. They said to each other, “Come, let’s make bricks and bake them thoroughly.” They used brick instead of stone, and tar for mortar.
 
3 And they said, “Come, let us build ourselves a city, and a tower whose top is in the heavens; let us make a name for ourselves, lest we be scattered abroad over the face of the whole earth.”
 
4 But the Lord came down to see the city and the tower which the sons of men had built.
 
5 And the Lord said, “Indeed the people are one and they all have one language, and this is what they begin to do; now nothing that they propose to do will be withheld from them.
 
6 Come, let Us go down and there confuse their language, that they may not understand one another’s speech.”
 
7 So the Lord scattered them abroad from there over the face of all the earth, and they ceased building the city.
 
8 Therefore its name is called Babel, because there the Lord confused the language of all the earth; and from there the Lord scattered them abroad over the face of all the earth.
— Genesis 11:1–9
 

So much in one short story, beginning with the recognition of the power of language to produce coordinated action, with which mankind would be capable of anything. As God phrased it, "nothing that they propose to do will be withheld from them." (It's not ironic but intentional that I pull this reference from Wikipedia, one of the modern exemplars of coordinated human action.) Language and money are among mankind's greatest creations, allowing for trust and coordinated action never possible previously.

Then the Tower of Babel story concludes with the division of mankind all over the Earth, a succinct metaphor for the rise of tribalism, with all its benefits and ills.

What's worth reconsidering is the causality outlined in the second half of the story. The story of Babel (the supposed root of the word babble) claims that by giving humans different languages, God fractured them into rival tribes incapable of coordinating with one another.

What if the causality is mistaken? What if, even when we share the same language, we cannot, will not, understand each other? That's what 2016 felt like. Russell Conjugation might be a design flaw of the English language. Even among all of us who speak English, even when we're watching the same data, whether it's video or text or economic charts, we can't seem to agree. If differences in language are what divide us, translation is a solution. But if even a common language can't overcome our tribal instincts or our mood affiliation, the solution is not as clear.

Speaking of mood affiliation, Steven Pinker, in an interview with Tyler Cowen, wondered:

I’m hoping that naming and shaming and arguments will give free speech a greater foothold in academia. The fact that academia is not the only arena in which debates are held, that we also have think tanks and we also have a press. We also have the Internet.
 
How we could set up the rules so that despite all of the quirks of human nature — such as intellectual tribalism — are overcome in our collective arena of discourses is, I think, an absolutely vital question, and I just don’t know the answer because we’re seeing at the same time — there was the hope 20 years ago that the Internet would break down the institutional barriers to the best ideas emerging.
 
It hasn’t worked out that way so far because we have the festering of conspiracy theories and all kinds of kooky beliefs that somehow the Internet has not driven out, but if anything has created space for. How we as a broader culture can tilt the rules or the norms of the expectations so that if you believe something that’s false, eventually you’ll be embarrassed about it, I wish I knew. But that’s obviously what we ought to be striving for.
 

I wish I knew, too.

Plenty of people much smarter than me fear Artificial Intelligence; I don't know where I fall in that debate. However, I'm firmly in the camp hoping cars learn to drive themselves, and that they'll far surpass humans in improving the safety of our roads. But if replacing human fallibility with AI is a path to social good in driving, why not elsewhere?

Humans are capable, at their peak, of being very rational thinkers. But what's concerning for the world is how rarely we operate at the limits of our potential, and in how many contexts we become irrational, or even complete idiots. AI can take many of the best aspects of human logic and scale them, make them reliable.

Some of the best CEOs I've encountered in my life, the Jeff Bezoses and Mark Zuckerbergs of the world, are capable of being rational a much higher percentage of the time than the average person. They seem far less susceptible to the usual cognitive biases. When I say someone thinks like a computer, many interpret that as an insult, whereas I see it as a supreme compliment. This is why most middle managers may someday be replaced by AI.

Too much of our pop culture, especially Hollywood, venerates human emotion, despite its often crippling effect on our thinking. Nothing's wrong with emotion per se, but very few movies have the courage to follow rational thought to its extremes without softening it with some sign of humanity (read: frailty). The closest pop culture icons of rational thinking that leap to mind are Sherlock Holmes and House, the latter of whom was based on Sherlock Holmes (Holmes --> Home --> House), and Spock of Star Trek. Watch enough movies about them, however, and there is always a moment when each of these characters learns about the virtues of love and humanity from Watson or Captain Kirk or one of the other, less rational people around them.

Why pull the punch? If Spock had to confront the trolley problem, he should by all accounts save the lives of the five over the life of the one, even if the one were his friend Kirk (I'm conveniently leaving out the option in which Spock takes Kirk's place on the tracks, because Hollywood would, of course, choose that route). We should applaud that, but we are human, so we shudder.

Which leaves Us versus Them. The darker side to coordinated human action. Us versus them is so powerful a force that confronting it can feel demoralizing. It is everywhere. Even an artificial construct like sports can set people against each other in ways that incite violence. This leaves us a challenge, one which is writ small in corporate environments. How do you turn zero sum games into positive sum games? Because if you can't, perhaps we're doomed to duke it out for eternity.

Two minor pop culture spoiler alerts here, for those who haven't read through The Dark Forest (book two of The Three Body Problem trilogy, which I just finished today and which has left me giddy to dive into the concluding book) and Watchmen, by Alan Moore and Dave Gibbons (without thinking too deeply about it, it's likely my favorite graphic novel of all time). If you haven't read those two, this post ends here for you, though if you've read Watchmen and not The Dark Forest, then perhaps you won't mind a minor spoiler.

The Dark Forest's entire story hinges on two axioms of cosmic sociology, described early in the book in a somewhat casual conversation:

“First: Survival is the primary need of civilization. Second: Civilization continuously grows and expands, but the total matter in the universe remains constant.”
 
To derive a basic picture of cosmic sociology from these two axioms, you need two other important concepts: chains of suspicion, and the technological explosion.
 

It's not often that a book gives you all the clues to decipher the plot up front, but one of this book's pleasures is that it does so in such an offhand way that when all becomes clear at the end, you revisit that earlier moment the way viewers scanned back through The Sixth Sense after the ring dropped to the floor. It's relevant here because of the nature of zero sum games, and those with an interest in game theory will love The Dark Forest.

In Chapter XI of Watchmen (and here I'll say the spoiler is much larger, so if you haven't read it just stop here, do not pass Go, do not collect $200), Adrian Veidt reveals that at a similar moment of despair about human nature, he hatched a plan to harness the power of Us versus Them by turning all of the world into Us. To do so, he creates a fictional Them to unite the world, the famous Watchmen Monster.

It's one of the most mind-blowing mystery reveals in my fiction reading life, and I had a similar moment of delight at the end of The Dark Forest.

If you can't beat Us vs Them, just make a Them so daunting that everyone joins Us (though taken out of context, yes, this looks less like something daunting and more like a hippie octopus tripping on LSD).


It may seem dire to turn to extreme science fiction plot twists for solutions to our current predicament, but given the quality of the stories cited, perhaps they deserve credit for seeming remotely plausible. If or when humanity evolves past these fundamental flaws in our design, centuries down the road, that future may look, from this vantage point at the start of 2017, as much like science fiction as an iPhone might to a caveman.