10 browser tabs

1. Love in the Time of Robots

“Is it difficult to play with her?” the father asks. His daughter looks to him, then back at the android. Its mouth begins to open and close slightly, like a dying fish. He laughs. “Is she eating something?”
 
The girl does not respond. She is patient and obedient and listens closely. But something inside is telling her to resist. 
 
“Do you feel strange?” her father asks. Even he must admit that the robot is not entirely believable.
 
Eventually, after a few long minutes, the girl’s breathing grows heavier, and she announces, “I am so tired.” Then she bursts into tears.
 
That night, in a house in the suburbs, her father uploads the footage to his laptop for posterity. His name is Hiroshi Ishiguro, and he believes this is the first record of a modern-day android.
 

Reads like the treatment for a science fiction film, some mashup of Frankenstein, Pygmalion, and Narcissus. One incredible moment after another, and I'll grab just a few excerpts, but the whole thing is worth reading.

But he now wants something more. Twice he has witnessed others have the opportunity, however confusing, to encounter their robot self, and he covets that experience. Besides, his daughter was too young, and the newscaster, though an adult, was, in his words, merely an “ordinary” person: Neither was able to analyze their android encounter like a trained scientist. A true researcher should have his own double. Flashing back to his previous life as a painter, Ishiguro thinks: This will be another form of self-portrait. He gives the project his initials: Geminoid HI. His mechanical twin.
 

Warren Ellis, in a recent commencement speech delivered at the University of Essex, said:

Nobody predicted how weird it’s gotten out here.  And I’m a science fiction writer telling you that.  And the other science fiction writers feel the same.  I know some people who specialized in near-future science fiction who’ve just thrown their hands up and gone off to write stories about dragons because nobody can keep up with how quickly everything’s going insane.  It’s always going to feel like being thrown in the deep end, but it’s not always this deep, and I’m sorry for that.
 

The thing is, far-future sci-fi is likely to be even more off base now, given how humans are evolving in lockstep with the technology around them. So we need more near-future sci-fi, of a variety smarter than Black Mirror, to grapple with the implications.

Soon his students begin comparing him to the Geminoid—“Oh, professor, you are getting old,” they tease—and Ishiguro finds little humor in it. A few years later, at 46, he has another cast of his face made, to reflect his aging, producing a second version of HI. But to repeat this process every few years would be costly and hard on his vanity. Instead, Ishiguro embraces the logical alternative: to alter his human form to match that of his copy. He opts for a range of cosmetic procedures—laser treatments and the injection of his own blood cells into his face. He also begins watching his diet and lifting weights; he loses about 20 pounds. “I decided not to get old anymore,” says Ishiguro, whose English is excellent but syntactically imperfect. “Always I am getting younger.”
 
Remaining twinned with his creation has become a compulsion. “Android has my identity,” he says. “I need to be identical with my android, otherwise I’m going to lose my identity.” I think back to another photo of his first double’s construction: Its robot skull, exposed, is a sickly yellow plastic shell with openings for glassy teeth and eyeballs. When I ask what he was thinking as he watched this replica of his own head being assembled, Ishiguro says, perhaps only half-joking, “I thought I might have this kind of skull if I removed my face.”
 
Now he points at me. “Why are you coming here? Because I have created my copy. The work is important; android is important. But you are not interested in myself.”
 

This should be a science fiction film, only I'm not sure who our great science fiction director is. The best candidates may be too old to see such a story as anything other than grotesque and horrific.

2. Something is wrong on the internet by James Bridle

Of course, some of what's on the internet really is grotesque and horrific. 

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level. 
 

Given how much my nieces love watching product unwrapping and Peppa Pig videos on YouTube, this story induced a sense of dread I haven't felt since the last good horror film I watched, which I can't remember anymore since the world has run a DDoS on my emotions.

We often think of a market operating at peak efficiency as sending information back and forth between supply and demand, allowing the creation of goods that satisfy both parties. In the tech industry, the wink-wink version of that is saying that pornography leads the market for any new technology, solving, as it does, the two problems the internet is said to solve better, at scale, than any medium before it: loneliness and boredom.

Bridle's piece, however, finds the dark cul-de-sacs and infected runaway processes which have branched out from the massive marketplace that is YouTube. I decided to follow a Peppa Pig video on the service and started tapping on Related Videos, like I imagine one of my nieces doing, and quickly wandered into a dark alleyway where I saw videos I would not want any of them watching. As Bridle did, I won't link to what I found; suffice to say it won't take you long to stumble on some of it if you want, or perhaps even if you don't.

What's particularly disturbing is the somewhat bizarre, inexplicably grotesque nature of some of these video remixes. David Cronenberg is known for his body horror films; these YouTube videos are like some perverse variant of that, playing with popular children's iconography.

Facebook and now Twitter are taking heat for disseminating fake news, and that is certainly a problem worth debating, but with that problem we're talking about adults. Children don't have the capacity to comprehend what they're seeing, and given my belief in the greater effect of sight, sound, and motion, I am even more disturbed by this phenomenon.

A system in which hosting videos for a global audience is free, in which this type of trademark infringement weaponizes brand signifiers with seeming impunity, and in which content production and remixing scale ever more cheaply with technology, allows for a type of scalable problem we haven't seen before.

The internet has enabled all types of wonderful things at scale; we should not be surprised that it would foster the opposite. But we can, and should, be shocked.

3. FDA approves first blood sugar monitor without finger pricks

This is exciting. One piece of common wisdom these days when it comes to health is that it's easier to lose weight and improve your health through diet than through exercise. But one of the problems with the feedback loop in diet (and exercise, actually) is how slow it is. You sneak a few snacks here and there walking by the company cafeteria every day, and a month later you hop on the scale and emit a bloodcurdling scream as you realize you've gained 8 pounds.

A friend of mine had gestational diabetes during one of her pregnancies and got a home blood glucose monitor. You had to prick your finger and draw blood to get your blood glucose reading, but curious, I tried it before and after a BBQ.

Seeing what various foods did to my blood sugar in near real time was a real eye-opener. Imagine a future in which you could see what a few french fries and gummy bears did to your blood sugar, or in which the reading could be built into something like an Apple Watch, without having to draw blood each time. I don't mind the sight of blood, but I'd prefer not to turn my fingertips into war zones.

Faster feedback might transform dieting into something more akin to deliberate practice. Given that another popular theory of obesity is that it's an insulin phenomenon, tools like this, built for diabetes, might have broad mass-market impact.

4. Ingestible ketones

Ingestible ketones have been a recent sort of holy grail for endurance athletes, and now HVMN is bringing one to market. Ketogenic diets are all the rage right now, but for an endurance athlete, adapting to fuel oneself on ketones has always sounded like a long and miserable process.

The body generates ketones from fat when low on carbs or from fasting. The theory is that endurance athletes using ketones rather than glycogen from carbs require less oxygen and thus can work out longer.

I first heard about the possibility of exogenous ketones for athletes from Peter Attia. As he said then, perhaps the hardest thing about ingesting exogenous ketones is the horrible taste, which caused him to gag and nearly vomit in his kitchen. It doesn't sound like the taste problem has been solved.

Until we get the pill that renders exercise obsolete, however, I'm curious to give this a try. If you decide to pre-order, you can use my referral code to get $15 off.

5. We Are Nowhere Close to the Limits of Athletic Performance

By comparison, the potential improvements achievable by doping effort are relatively modest. In weightlifting, for example, Mike Israetel, a professor of exercise science at Temple University, has estimated that doping increases weightlifting scores by about 5 to 10 percent. Compare that to the progression in world record bench press weights: 361 pounds in 1898, 363 pounds in 1916, 500 pounds in 1953, 600 pounds in 1967, 667 pounds in 1984, and 730 pounds in 2015. Doping is enough to win any given competition, but it does not stand up against the long-term trend of improving performance that is driven, in part, by genetic outliers. As the population base of weightlifting competitors has increased, outliers further and further out on the tail of the distribution have appeared, driving up world records.
 
Similarly, Lance Armstrong’s drug-fuelled victory of the 1999 Tour de France gave him a margin of victory over second-place finisher Alex Zulle of 7 minutes, 37 seconds, or about 0.1 percent. That pales in comparison to the dramatic secular increase in speeds the Tour has seen over the past half century: Eddy Merckx won the 1971 tour, which was about the same distance as the 1999 tour, in a time 5 percent worse than Zulle’s. Certainly, some of this improvement is due to training methods and better equipment. But much of it is simply due to the sport’s ability to find competitors of ever more exceptional natural ability, further and further out along the tail of what’s possible.
 

In the Olympics, to take the most celebrated athletic competition, victors are feted with videos showing them swimming laps, tossing logs on the Siberian tundra, running through a Kenyan desert. We celebrate the work, the training. Good genes are given narrative short shrift. Perhaps we should show a picture of their DNA, just to give credit where much credit is due?

If I live a normal human lifespan, I expect to see special sports leagues and divisions created for athletes who've undergone genetic modification. It will be the return of the freak show at the circus, but this time for real. I've sat courtside and seen people like LeBron James, Giannis Antetokounmpo, Kevin Durant, and Joel Embiid walk by me. They are freaks, but genetic engineering might produce someone who stretches our definition of outlier.

In other words, it is highly unlikely that we have come anywhere close to maximum performance among all the 100 billion humans who have ever lived. (A completely random search process might require the production of something like a googol different individuals!)
 
But we should be able to accelerate this search greatly through engineering. After all, the agricultural breeding of animals like chickens and cows, which is a kind of directed selection, has easily produced animals that would have been one in a billion among the wild population. Selective breeding of corn plants for oil content of kernels has moved the population by 30 standard deviations in roughly just 100 generations. That feat is comparable to finding a maximal human type for a specific athletic event. But direct editing techniques like CRISPR could get us there even faster, producing Bolts beyond Bolt and Shaqs beyond Shaq.
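For a rough sense of scale on that corn figure, here's a back-of-the-envelope using the classic breeder's equation; the heritability and selection-differential values are my own illustrative assumptions, not numbers from the article:

$$R = h^2 S \approx 0.5 \times 0.6\ \mathrm{SD} = 0.3\ \mathrm{SD}\ \text{per generation}, \qquad 100 \times 0.3\ \mathrm{SD} = 30\ \mathrm{SD}.$$

In other words, a quite ordinary per-generation response, sustained for a century of generations, is enough to carry a trait absurdly far outside the original distribution.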
 

6. Let's set half a percent as the standard for statistical significance

My many-times-over coauthor Dan Benjamin is the lead author on a very interesting short paper "Redefine Statistical Significance." He gathered luminaries from many disciplines to jointly advocate a tightening of the standards for using the words "statistically significant" to results that have less than a half a percent probability of occurring by chance when nothing is really there, rather than all results that—on their face—have less than a 5% probability of occurring by chance. Results with more than a 1/2% probability of occurring by chance could only be called "statistically suggestive" at most. 
 
In my view, this is a marvelous idea. It could (a) help enormously and (b) can really happen. It can really happen because it is at heart a linguistic rule. Even if rigorously enforced, it just means that editors would force people in papers to say “statistically suggestive” for a p of a little less than .05, and only allow the phrase “statistically significant” in a paper if the p value is .005 or less. As a well-defined policy, it is nothing more than that. Everything else is general equilibrium effects.
 

Given that the replication crisis has me doubting almost every piece of conventional wisdom I've inherited in my life, I'm okay with this.
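To make the proposed relabeling concrete, here's a minimal sketch in Python; the function name and example p-values are my own illustration, not anything from the paper, and the only numbers taken from the proposal are the 0.005 and 0.05 cutoffs.

```python
def describe_result(p_value: float) -> str:
    """Label a result under the proposed 0.005 standard.

    Illustrative sketch only: p < 0.005 earns "statistically significant",
    0.005 <= p < 0.05 can be called at most "statistically suggestive",
    and anything above 0.05 gets no claim at all.
    """
    if p_value < 0.005:
        return "statistically significant"
    elif p_value < 0.05:
        return "statistically suggestive"
    return "no claim"


for p in (0.04, 0.011, 0.004, 0.0004):
    print(f"p = {p}: {describe_result(p)}")
```

Under the old 0.05 convention, all four of those example values would have been reported as significant; under the proposal, only the last two would.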

7. We're surprisingly unaware of when our own beliefs change

If you read an article about a controversial issue, do you think you’d realise if it had changed your beliefs? No one knows your own mind like you do – it seems obvious that you would know if your beliefs had shifted. And yet a new paper in The Quarterly Journal of Experimental Psychology suggests that we actually have very poor “metacognitive awareness” of our own belief change, meaning that we will tend to underestimate how much we’ve been swayed by a convincing article.
 
The researchers Michael Wolfe and Todd Williams at Grand Valley State University said their findings could have implications for the public communication of science. “People may be less willing to meaningfully consider belief inconsistent material if they feel that their beliefs are unlikely to change as a consequence,” they wrote.
 

Beyond being an interesting result, I link to this as an example of a human-readable summary of a research paper. This is how the article summarizes the research study and its results:

The researchers recruited over two hundred undergrads across two studies and focused on their beliefs about whether the spanking/smacking of kids is an effective form of discipline. The researchers chose this topic deliberately in the hope the students would be mostly unaware of the relevant research literature, and that they would express a varied range of relatively uncommitted initial beliefs.
 
The students reported their initial beliefs about whether spanking is an effective way to discipline a child on a scale from “1” completely disbelieve to “9” completely believe. Several weeks later they were given one of two research-based texts to read: each was several pages long and either presented the arguments and data in favour of spanking or against spanking. After this, the students answered some questions to test their comprehension and memory of the text (these measures varied across the two studies). Then the students again scored their belief in whether spanking is effective or not (using the same 9-point scale as before). Finally, the researchers asked them to recall what their belief had been at the start of the study.
 
The students’ belief about spanking changed when they read a text that argued against their own initial position. Crucially, their memory of their initial belief was shifted in the direction of their new belief – in fact, their memory was closer to their current belief than their original belief. The more their belief had changed, the larger this memory bias tended to be, suggesting the students were relying on their current belief to deduce their initial belief. The memory bias was unrelated to the measures of how well they’d understood or recalled the text, suggesting these factors didn’t play a role in memory of initial belief or awareness of belief change.
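As a toy illustration of that memory bias (the numbers and the anchoring weight below are mine, purely for intuition, not data from the study): if students reconstruct their initial belief partly from their current one, recalled beliefs will sit closer to current beliefs than to the originals, and the gap grows with the amount of belief change, which is the pattern the researchers report.

```python
import random

random.seed(0)

rows = []
for _ in range(200):
    initial = random.randint(1, 9)                               # belief on the 1-9 scale before reading
    current = min(9.0, max(1.0, initial + random.gauss(0, 2)))   # belief after reading the text
    recalled = 0.4 * initial + 0.6 * current                     # hypothetical recall anchored on current belief
    rows.append((initial, current, recalled))

avg_gap = sum(abs(r - c) - abs(r - i) for i, c, r in rows) / len(rows)
print(f"mean(|recall - current| - |recall - initial|) = {avg_gap:.2f}  (negative: recall sits closer to current belief)")
```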
 

Compare the summary above to the abstract of the paper itself:

When people change beliefs as a result of reading a text, are they aware of these changes? This question was examined for beliefs about spanking as an effective means of discipline. In two experiments, subjects reported beliefs about spanking effectiveness during a prescreening session. In a subsequent experimental session, subjects read a one-sided text that advocated a belief consistent or inconsistent position on the topic. After reading, subjects reported their current beliefs and attempted to recollect their initial beliefs. Subjects reading a belief inconsistent text were more likely to change their beliefs than those who read a belief consistent text. Recollections of initial beliefs tended to be biased in the direction of subjects’ current beliefs. In addition, the relationship between the belief consistency of the text read and accuracy of belief recollections was mediated by belief change. This belief memory bias was independent of on-line text processing and comprehension measures, and indicates poor metacognitive awareness of belief change.
 

That's actually one of the better research abstracts you'll read and still it reflects the general opacity of the average research abstract. I'd argue that some of the most important knowledge in the world is locked behind abstruse abstracts.

Why do researchers write this way? Most tell me that researchers write for other researchers, and incomprehensible prose like this impresses their peers. What a tragedy. As my longtime readers know, I'm a firm believer in the power of the form of a message. We continue to underrate that in all aspects of life, from the corporate world to our personal lives, and here, in academia.

Then again, such poor writing keeps people like Malcolm Gladwell busy transforming such insight into breezy reads in The New Yorker and his bestselling books.

8. Social disappointment explains chimpanzees' behaviour in the inequity aversion task

As an example of the above phenomenon, this paper contains an interesting conclusion, but try to parse this abstract:

Chimpanzees’ refusal of less-preferred food when an experimenter has previously provided preferred food to a conspecific has been taken as evidence for a sense of fairness. Here, we present a novel hypothesis—the social disappointment hypothesis—according to which food refusals express chimpanzees' disappointment in the human experimenter for not rewarding them as well as they could have. We tested this hypothesis using a two-by-two design in which food was either distributed by an experimenter or a machine and with a partner present or absent. We found that chimpanzees were more likely to reject food when it was distributed by an experimenter rather than by a machine and that they were not more likely to do so when a partner was present. These results suggest that chimpanzees’ refusal of less-preferred food stems from social disappointment in the experimenter and not from a sense of fairness.
 

Your average grade school English teacher would slap a failing grade on this butchery of the English language.

9. Metacompetition: Competing Over the Game to be Played

When CDMA-based technologies took off in the US, companies like Qualcomm that work on that standard prospered; metacompetitions between standards decide the fates of the firms that adopt (or reject) those standards.

When an oil spill raises concerns about the environment, consumers favor businesses with good environmental records; metacompetitions between beliefs determine the criteria we use to evaluate whether a firm is “good.”

If a particular organic foods certification becomes important to consumers, companies with that certification are favored; metacompetitions between certifications determine how the quality of firms is measured.
 
In all these examples, you could be the very best at what you do, but lose in the metacompetition over what criteria will matter. On the other hand, you may win due to a metacompetition that protects you from fierce rivals who play a different game.
 
Great leaders pay attention to metacompetition. They advocate the game they play well, promoting criteria on which they measure up. By contrast, many failed leaders work hard at being the best at what they do, only to throw up their hands in dismay when they are not even allowed to compete. These losers cannot understand why they lost, but they have neglected a fundamental responsibility of leadership. It is not enough to play your game well. In every market in every country, alternative “logics” vie for prominence. Before you can win in competition, you must first win the metacompetition over the game being played.
 

In sports negotiations between owners and players, the owners almost always win the metacompetition game. In the Hollywood writers' strike of 2007, the Writers Guild didn't realize it was losing the metacompetition and thus ended up worse off than before. Amazon surpassed eBay by winning the retail metacompetition (most consumers prefer paying a fair, fixed price for a good of some predefined quality to dealing with the multiple axes of complexity of an auction) after first failing to tackle eBay on its direct turf of auctions.

Winning the metacompetition means first being aware of what it is. It's not so easy in a space like, say, social networking, where even some of the winners don't understand what game they're playing.

10. How to be a Stoic

Much of Epictetus’ advice is about not getting angry at slaves. At first, I thought I could skip those parts. But I soon realized that I had the same self-recriminatory and illogical thoughts in my interactions with small-business owners and service professionals. When a cabdriver lied about a route, or a shopkeeper shortchanged me, I felt that it was my fault, for speaking Turkish with an accent, or for being part of an élite. And, if I pretended not to notice these slights, wasn’t I proving that I really was a disengaged, privileged oppressor? Epictetus shook me from these thoughts with this simple exercise: “Starting with things of little value—a bit of spilled oil, a little stolen wine—repeat to yourself: ‘For such a small price, I buy tranquillity.’ ”
 
Born nearly two thousand years before Darwin and Freud, Epictetus seems to have anticipated a way out of their prisons. The sense of doom and delight that is programmed into the human body? It can be overridden by the mind. The eternal war between subconscious desires and the demands of civilization? It can be won. In the nineteen-fifties, the American psychotherapist Albert Ellis came up with an early form of cognitive-behavioral therapy, based largely on Epictetus’ claim that “it is not events that disturb people, it is their judgments concerning them.” If you practice Stoic philosophy long enough, Epictetus says, you stop being mistaken about what’s good even in your dreams.
 

The trendiness of stoicism has been around for quite some time now. I found this tab left over from 2016, and I'm sure Tim Ferriss was espousing it long before then, not to mention the enduring trend that is Buddhism. That meditation and stoicism are so popular in Silicon Valley may be a measure of the complacency of the region; they seem direct antidotes to the most first-world of problems. People everywhere complain of the stresses on their minds from the deluge of information they receive for free from apps on smartphones with processing power that would put previous supercomputers to shame.

Still, given that stoicism was in vogue in Roman times, it seems to have stood the test of time. Since social media seems to have increased the surface area of our social fabric and our exposure to said fabric, perhaps we could all use a bit more stoicism in our lives. I suspect one reason Curb Your Enthusiasm curdles in the mouth more than before is not just that Larry David's rich white man's complaints seem particularly ill-timed in the current environment but that he is out of touch with the real nature of most people's psychological stressors now. A guy of his age and wealth probably doesn't spend much time on social media, but if he did, he might realize his grievances no longer match those of the average person in either pettiness or peculiarity.

1 personal update and 10 browser tabs

I haven't spent much time on personal updates here the past several years, but on some topics it matters to know what my personal affiliations are, so I wanted to share that I've left Oculus as of mid-July. I'm not sure what's next yet, but I've enjoyed having some time to travel to see friends and family, catch up on reading, and get outside on my bike the past few weeks.

It's also been great to have the chance to connect with some of the smart people of the Bay Area, many of whom I'd never met before except online. Bouncing ideas around with some of the interesting thinkers here is something I wish I'd done more of sooner, and I'm trying to make up for lost time. It has certainly helped me to refine my thinking about many topics, including my next venture, whatever that turns out to be.

Please ping me if you'd like to grab a coffee.

***

One of my goals during this break is to clear out a lot of things I've accumulated over the years. I donated six massive boxes of books to the local library the other week, and I've been running to a Goodwill drop-off center every few days.

The other cruft I've accumulated is of a digital nature, mostly an embarrassing number of browser tabs, some of which have been open since before an orange president became the new black. Such digital cruft is no less a mental burden than its physical counterparts, so I'm going to start to zap them, ten at a time. I have to; my MacBook Pro can no longer handle the sheer volume of open tabs, and the fan is always at full throttle like a jet engine.

Here are the first ten to go.

1. The War Against Chinese Restaurants

Startlingly, however, there was once a national movement to eliminate Chinese restaurants, using innovative legal methods to drive them out. Chinese restaurants were objectionable for two reasons. First, they threatened white women, who were subject to seduction by Chinese men, through intrinsic female weakness, or employment of nefarious techniques such as opium addiction. In addition, Chinese restaurants competed with “American” restaurants, thus threatening the livelihoods of white owners, cooks and servers; unions were the driving force behind the movement. 


The effort was creative; Chicago used anti-Chinese zoning, Los Angeles restricted restaurant jobs to citizens, Boston authorities decreed Chinese restaurants would be denied licenses, the New York Police Department simply ordered whites out of Chinatown. Perhaps the most interesting technique was a law, endorsed by the American Federation of Labor for adoption in all jurisdictions, prohibiting white women from working in Asian restaurants. Most measures failed or were struck down. However, Asians still lost; the unions did not eliminate Chinese restaurants, but they achieved their more important goal, extending the federal policy of racial exclusion in immigration from Chinese to all Asians. The campaign is of more than historical interest. As current anti-immigration sentiments and efforts show, even today the idea that white Americans should have a privileged place in the economy, or that non-whites are culturally incongruous, persists among some.
 

The core of the story of America is its deep-seated struggle with race, which is not surprising for a country founded on the ideal of the equality of all even as it failed to live up to that ideal at its founding. That continual grasping at resolving that paradox and hypocrisy is at the heart of what makes the U.S. the most fascinating social experiment in the world, and one reason I struggle to imagine living elsewhere right now.

2. Two related pieces: The revolt of the public and the “age of post-truth” and In Defense of Hierarchy 

From the former:

A complex society can’t dispense with elites.  That is the hard reality of our condition, and it involves much more than a demand for scarce technical skills.  In all human history, across continents and cultures, the way to get things done has been command and control within a formal hierarchy.  The pyramid can be made flatter or steeper, and an informal network is invariably overlaid on it:  but the structural necessity holds.  Only a tiny minority can be bishops of the church.  This may seem trivially apparent when it comes to running a government or managing a corporation, but it applies with equal strength to the dispensation of truth.
 
So here is the heart of the matter.  The sociopolitical disorders that torment our moment in history, including the fragmentation of truth into “post-truth,” flow primarily from a failure of legitimacy, of the bond of trust between rulers and ruled.  Everything begins with the public’s conviction that elites have lost their authorizing magic.  Those at the top have forsaken their function yet cling, illicitly, to their privileged perches.  Only in this context do we come to questions of equality or democracy.
 
If my analysis is correct, the re-formation of the system, and the recovery of truth, must depend on the emergence of a legitimate elite class.
 

From the latter:

To protect against abuse by those with higher status, hierarchies should also be domain-specific: hierarchies become problematic when they become generalised, so that people who have power, authority or respect in one domain command it in others too. Most obviously, we see this when holders of political power wield disproportionate legal power, being if not completely above the law then at least subject to less legal accountability than ordinary citizens. Hence, we need to guard against what we might call hierarchical drift: the extension of power from a specific, legitimate domain to other, illegitimate ones. 
 
This hierarchical drift occurs not only in politics, but in other complex human arenas. It’s tempting to think that the best people to make decisions are experts. But the complexity of most real-world problems means that this would often be a mistake. With complicated issues, general-purpose competences such as open-mindedness and, especially, reasonableness are essential for successful deliberation.
 
Expertise can actually get in the way of these competences. Because there is a trade-off between width and depth of expertise, the greater the expert, the narrower the area of competence. Hence the best role for experts is often not as decision-makers, but as external resources to be consulted by a panel of non-specialist generalists selected for general-purpose competences. These generalists should interrogate the experts and integrate their answers from a range of specialised aspects into a coherent decision. So, for example, parole boards cannot defer to one type of expert but must draw on the expertise of psychologists, social workers, prison guards, those who know the community into which a specific prisoner might be released, and so on. This is a kind of collective, democratic decision-making that makes use of hierarchies of expertise without slavishly deferring to them.  
 

What would constitute a new legitimate elite class? It's a mystery, and a grave one. When truth is largely socially and politically constructed, it weighs nothing. The whole psychology replication crisis couldn't have hit at a worse time. With the internet, you can Google and find a study to back up just about any of your views, yet it's not clear which of the studies are actually sound.

At the same time, we can't all be expected to be experts on everything, even if, with the internet, everyone pretends to be.

3. Why Men Don't Live As Long As Women

Evidence points at testosterone, which is useful for mating but costly in many other ways. I maintain there is nothing more frightening in the world than a bunch of single young men full of testosterone.

This does not mean, however, that men cannot evolve other reproductive strategies. Despite their propensity to engage in risky behavior and exhibit expensive, life-shortening physical traits, men have evolved an alternative form of reproductive effort in the form of paternal investment—something very rare in primates (and mammals in general). For paternal investment to evolve, males have to make sure they are around to take care of their offspring. Risky behavior and expensive tissue have to take a backseat to investment that reflects better health and perhaps prolongs lifespan. Indeed, men can exhibit declines in testosterone and put on a bit of weight when they become fathers and engage in paternal care. Perhaps, then, fatherhood is good for health.
 

Perhaps we should be extolling the virtuous signal that is the dad bod to a much greater degree than we have. And, on the flip side, we should look with a skeptical eye at fathers with chiseled abs. How does one get a six-pack from attending imaginary tea parties with one's daughter for hours on end?

4. Increasing consumer well-being: risk as potential driver of happiness

We show that, even if, ex ante, consumers fear high risk and do not associate it to a high level of happiness, their ex post evaluation of well-being is generally higher when identical consequences result from a high-risk situation than from a low-risk situation. Control over risk-taking reinforces the gap between ex ante and ex post measures of happiness. Thus, our article provides empirical evidence about a positive relation between risk and individual well-being, suggesting that risky experiences have the potential to increase consumer well-being.
 

While I'm not certain what I'm going to do next, I would like to increase my risk profile. It seems a shame not to when I'm fortunate enough to live with very little downside risk.

5. “When the student is ready the teacher will appear. When the student is truly ready, the teacher will disappear.”  —  Lao Tzu

There is some debate over the provenance of this quote, but I had rarely encountered the second sentence; the first is the one that has had the more enduring life, and for good reason. It's a rhetorical gem.

The second half is underrated. The best coaches know when stepping aside and pushing the student to new challenges is the only path to greater heights. Rather than becoming some Girardian rival like Bill Murray in Rushmore, the best teachers disappear. In the case of Yoda in Return of the Jedi, he literally disappears, though not until giving Luke his next homework assignment: to face Darth Vader.

6. The third wave of globalisation may be the hardest

First we enabled the movement of goods across borders. Then the internet unleashed the movement of ideas. Free movement of people, though? Recent nationalist backlashes aren't a promising sign. Maybe it will happen in its fullest online, maybe in virtual reality.

I am pro-immigration; my life is in so many ways the result of my parents coming to America in college. For decades, the United States has had essentially first pick of the world's hungriest, most talented dreamers, like a sports team that gets to pick at the top of the draft year after year despite winning the championship the year before. Trust the process, as Sam Hinkie might say.

On the other hand, taking off my American goggles, the diversity in the world's cultures, political and social systems, and ideologies is a source of global health. It feels like everyone should be encouraged (and supported) to spend a year abroad before, during, or after college, just to understand how much socially acquired knowledge is path-dependent and essentially arbitrary.

7. Tyler Cowen's Reddit AMA

What is the most underrated city in the US? In the world?
TylerCowen
Los Angeles is my favorite city in the whole world, just love driving around it, seeing the scenery, eating there. I still miss living in the area.

I don't know if I have a favorite city in the world, but I'd agree Los Angeles is the most underrated city in the U.S. considering how many people spit on its very mention. Best dining destination of the major U.S. cities.

8. What are the hardest and easiest languages to learn?

Language Log offers a concise scale as a shorthand answer.

EASY
1. Mandarin (spoken)
2. Nepali
3. Russian
4. Japanese
5. Sanskrit
6. Chinese (written)
HARD
 

9. An under-mentioned side effect of global warming

Time was, the cold and remoteness of the far north kept its freezer door closed to a lot of contagion. Now the north is neither so cold nor so remote. About four million people live in the circumpolar north, sometimes in sizable cities (Murmansk and Norilsk, Russia; Tromso, Norway). Oil rigs drill. Tourist ships cruise the Northwest Passage. And as new animals and pathogens arrive and thrive in the warmer, more crowded north, some human sickness is on the rise, too. Sweden saw a record number of tick-borne encephalitis cases in 2011, and again in 2012, as roe deer expanded their range northward with ticks in tow. Researchers think the virus the ticks carry may increase its concentrations in warmer weather. The bacterium Francisella tularensis, which at its worst is so lethal that both the U.S. and the USSR weaponized it during the Cold War, is also on the increase in Sweden. Spread by mosquitoes there, the milder form can cause months of flu-like symptoms. Last summer in Russia’s far north, anthrax reportedly killed a grandmother and a boy after melting permafrost released spores from epidemic-killed deer that had been buried for decades in the once frozen ground.
 

Because we don't already have enough sobering news in the world.

10. Why do our musical tastes ossify in our twenties?

It’s simply not realistic to expect someone to respond to music with such life-defining fervour more than once. And it’s not realistic, either, to expect someone comfortable with his personality to be flailing about for new sensibilities to adopt. I’ve always been somewhat suspicious of those who truly do, as the overused phrase has it, listen to everything. Such schizophrenic tastes seem not so much a symptom of well-roundedness as of an unstable sense of self. Liking everything means loving nothing. If you’re so quick to adopt new sentiments and their expression, then how serious were you about the ones you pushed aside to accommodate them?
 
Oh yeah, and one more thing: music today fucking sucks.
 

I still pursue new music, despite being past my twenties, driven mostly, I suspect, by a hunger for novelty that still seems to be kicking. At some point, I can't really recall when, the signaling function of my musical tastes lost most of its value. Once most of your friends have kids, you can seem cultured merely by having seen a movie released in the last year.

Aposematism

From Wikipedia:

Aposematism (from Greek ἀπό apo away, σῆμα sema sign) was a new term coined by Edward Bagnall Poulton for Alfred Russel Wallace's concept of warning coloration. It describes a family of antipredator adaptations in which a warning signal is associated with the unprofitability of a prey item to potential predators. Aposematism always involves an advertising signal. The warning signal may take the form of conspicuous animal coloration, sounds, odours or other perceivable characteristics. Aposematic signals are beneficial for both the predator and prey, since both avoid potential harm.
 
Aposematism is exploited in Müllerian mimicry, where species with strong defences evolve to resemble one another. By mimicking similarly coloured species, the warning signal to predators is shared, causing them to learn more quickly at less of a cost to each of the species.
 
Warning signals do not necessarily require that a species actually possesses chemical or physical defences to deter predators. Mimics such as the nonvenomous California mountain kingsnake (Lampropeltis zonata), which has yellow, red, and black bands similar to those of highly venomous coral snake species, have essentially piggybacked on the successful aposematism of the model. The evolution of a warning signal by a mimicking species that resembles a species that possesses strong defences is known as Batesian mimicry.
 

Some wonderful terms in there, like aposematism, Müllerian mimicry, Batesian mimicry. I recently wrote about the value of using rhetoric to encode one's ideas. German is gloriously efficient in its compression, with single words like schadenfreude for phenomena that require many more words in English, but fields like science, philosophy, psychology, and sociology offer their own concise gems of language.

The rest of the entry is a good read on why aposematism might have come to be despite seeming to be an evolutionary paradox, and just what Müllerian and Batesian mimicry refer to.

What intrigues me about aposematism is how it functions as a really overt form of signaling. I find myself reaching for the framework of signaling theory often these days, perhaps because once you've lived a certain number of years, you've had a chance to see the relative effectiveness or futility of past signals, but you're not yet old enough to be at the point where your remaining days are so few or your status in the world so static that signals no longer matter.

Thermodynamic theory of evolution

The teleology and historical contingency of biology, said the evolutionary biologist Ernst Mayr, make it unique among the sciences. Both of these features stem from perhaps biology’s only general guiding principle: evolution. It depends on chance and randomness, but natural selection gives it the appearance of intention and purpose. Animals are drawn to water not by some magnetic attraction, but because of their instinct, their intention, to survive. Legs serve the purpose of, among other things, taking us to the water.
 
Mayr claimed that these features make biology exceptional — a law unto itself. But recent developments in nonequilibrium physics, complex systems science and information theory are challenging that view.
 
Once we regard living things as agents performing a computation — collecting and storing information about an unpredictable environment — capacities and considerations such as replication, adaptation, agency, purpose and meaning can be understood as arising not from evolutionary improvisation, but as inevitable corollaries of physical laws. In other words, there appears to be a kind of physics of things doing stuff, and evolving to do stuff. Meaning and intention — thought to be the defining characteristics of living systems — may then emerge naturally through the laws of thermodynamics and statistical mechanics.
 

One of the most fascinating reads of my past half year.

I recently linked to a short piece by Pinker on how an appreciation for the second law of thermodynamics might help one come to some peace with the entropy of the world. It's inevitable, so don't blame yourself.

And yet there is something beautiful about life in its ability to create pockets of order and information amidst the entropy and chaos.

A genome, then, is at least in part a record of the useful knowledge that has enabled an organism’s ancestors — right back to the distant past — to survive on our planet. According to David Wolpert, a mathematician and physicist at the Santa Fe Institute who convened the recent workshop, and his colleague Artemy Kolchinsky, the key point is that well-adapted organisms are correlated with that environment. If a bacterium swims dependably toward the left or the right when there is a food source in that direction, it is better adapted, and will flourish more, than one  that swims in random directions and so only finds the food by chance. A correlation between the state of the organism and that of its environment implies that they share information in common. Wolpert and Kolchinsky say that it’s this information that helps the organism stay out of equilibrium — because, like Maxwell’s demon, it can then tailor its behavior to extract work from fluctuations in its surroundings. If it did not acquire this information, the organism would gradually revert to equilibrium: It would die.
 
Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information. And life turns out to be extremely good at it. Landauer’s resolution of the conundrum of Maxwell’s demon set an absolute lower limit on the amount of energy a finite-memory computation requires: namely, the energetic cost of forgetting. The best computers today are far, far more wasteful of energy than that, typically consuming and dissipating more than a million times more. But according to Wolpert, “a very conservative estimate of the thermodynamic efficiency of the total computation done by a cell is that it is only 10 or so times more than the Landauer limit.”
 
The implication, he said, is that “natural selection has been hugely concerned with minimizing the thermodynamic cost of computation. It will do all it can to reduce the total amount of computation a cell must perform.” In other words, biology (possibly excepting ourselves) seems to take great care not to overthink the problem of survival. This issue of the costs and benefits of computing one’s way through life, he said, has been largely overlooked in biology so far.
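For reference, the Landauer limit invoked in that passage is the standard thermodynamic floor on erasing information: forgetting one bit at temperature $T$ dissipates at least

$$E_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\ \mathrm{J/K}) \times (300\ \mathrm{K}) \times 0.693 \approx 2.9 \times 10^{-21}\ \mathrm{J}$$

at room temperature. Read against that floor, the quoted claim is that a cell's total computation runs within roughly an order of magnitude of the minimum, while today's chips dissipate something like a million times more per operation.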
 

I don't know if that's true, but it is so elegant as to be breathtaking. What this all leads to is a theory of a new form of evolution, different from the Darwinian definition.

Adaptation here has a more specific meaning than the usual Darwinian picture of an organism well-equipped for survival. One difficulty with the Darwinian view is that there’s no way of defining a well-adapted organism except in retrospect. The “fittest” are those that turned out to be better at survival and replication, but you can’t predict what fitness entails. Whales and plankton are well-adapted to marine life, but in ways that bear little obvious relation to one another.
 
England’s definition of “adaptation” is closer to Schrödinger’s, and indeed to Maxwell’s: A well-adapted entity can absorb energy efficiently from an unpredictable, fluctuating environment. It is like the person who keeps her footing on a pitching ship while others fall over, because she's better at adjusting to the fluctuations of the deck. Using the concepts and methods of statistical mechanics in a nonequilibrium setting, England and his colleagues argue that these well-adapted systems are the ones that absorb and dissipate the energy of the environment, generating entropy in the process.
 

I'm tempted to see an analogous definition of successful corporate adaptation in this, though I'm inherently skeptical of analogy and metaphor. Still, reading a paragraph like this, one can't help but think of how critical it is for companies to remember the right lessons from the past, and not too many of the wrong ones.

There’s a thermodynamic cost to storing information about the past that has no predictive value for the future, Still and colleagues show. To be maximally efficient, a system has to be selective. If it indiscriminately remembers everything that happened, it incurs a large energy cost. On the other hand, if it doesn’t bother storing any information about its environment at all, it will be constantly struggling to cope with the unexpected. “A thermodynamically optimal machine must balance memory against prediction by minimizing its nostalgia — the useless information about the past,’’ said a co-author, David Sivak, now at Simon Fraser University in Burnaby, British Columbia. In short, it must become good at harvesting meaningful information — that which is likely to be useful for future survival.
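If I'm reading the Still et al. framework correctly (my paraphrase, not something stated in the excerpt), the "nostalgia" being minimized is the information a system's internal state carries about past inputs minus the part of it that actually predicts future inputs, roughly

$$\text{nostalgia} = I(\text{state};\ \text{past input}) - I(\text{state};\ \text{future input}),$$

and their result ties the work a system dissipates to this non-predictive information, which is what makes "minimizing nostalgia" a thermodynamic statement rather than just a metaphor.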
 

This theory even offers its own explanation for death.

It’s certainly not simply a matter of things wearing out. “Most of the soft material we are made of is renewed before it has the chance to age,” Meyer-Ortmanns said. But this renewal process isn’t perfect. The thermodynamics of information copying dictates that there must be a trade-off between precision and energy. An organism has a finite supply of energy, so errors necessarily accumulate over time. The organism then has to spend an increasingly large amount of energy to repair these errors. The renewal process eventually yields copies too flawed to function properly; death follows.
 
Empirical evidence seems to bear that out. It has been long known that cultured human cells seem able to replicate no more than 40 to 60 times (called the Hayflick limit) before they stop and become senescent. And recent observations of human longevity have suggested that there may be some fundamental reason why humans can’t survive much beyond age 100.
 

Again, it's tempting to look beyond humans and at corporations, and why companies die out over time. Is there a similar process at work, in which a company's replication of its knowledge introduces more and more errors over time until the company dies a natural death?

I suspect the mechanism at work in companies is different, but that's an article for another time. The parallel that does seem promising is the idea that successful new companies are more likely to emerge in fluctuating nonequilibrium environments, not from gentle and somewhat static ones.

Clustered regularly interspaced short palindromic repeats

For those wondering what the deal with CRISPR is, Michael Specter offers a riveting overview in the New Yorker.

The field has moved quickly. For scientists, ordering genes is almost Amazon-like in its convenience now.

Ordering the genetic parts required to tailor DNA isn’t as easy as buying a pair of shoes from Zappos, but it seems to be headed in that direction. Yan turned on the computer at his lab station and navigated to an order form for a company called Integrated DNA Technologies, which synthesizes biological parts. “It takes orders online, so if I want a particular sequence I can have it here in a day or two,” he said. That is not unusual. Researchers can now order online almost any biological component, including DNA, RNA, and the chemicals necessary to use them. One can buy the parts required to assemble a working version of the polio virus (it’s been done) or genes that, when put together properly, can make feces smell like wintergreen. In Cambridge, I.D.T. often makes same-day deliveries. Another organization, Addgene, was established, more than a decade ago, as a nonprofit repository that houses tens of thousands of ready-made sequences, including nearly every guide used to edit genes with CRISPR. When researchers at the Broad, and at many other institutions, create a new guide, they typically donate a copy to Addgene.


The field has achieved some level of efficiency with the creation of editable mice.

The vivarium at the Broad houses an entirely different kind of mouse, one that carries the protein Cas9 (which stands for CRISPR-associated nuclease) in every cell. Cas9, the part of the CRISPR system that acts like a genetic scalpel, is an enzyme. When scientists originally began editing DNA with CRISPR, they had to inject both the Cas9 enzyme and the probe required to guide it. A year ago, Randall Platt, another member of Zhang’s team, realized that it would be possible to cut the CRISPR system in two. He implanted the surgical enzyme into a mouse embryo, which made it a part of the animal’s permanent genome. Every time a cell divided, the Cas9 enzyme would go with it. In other words, he and his colleagues created a mouse that was easy to edit. Last year, they published a study explaining their methodology, and since then Platt has shared the technique with more than a thousand laboratories around the world.

The “Cas9 mouse” has become the first essential tool in the emerging CRISPR arsenal. With the enzyme that acts as molecular scissors already present in every cell, scientists no longer have to fit it onto an RNA guide. They can dispatch many probes at once and simply make mutations in the genes they want to study.


This:

He stood up and walked across the office toward his desk, then pointed at the wall and described his vision for the future of cancer treatment. “There will be an enormous chart,” he said. “Well, it will be electronic, and it will contain the therapeutic road map of every trick that cancer cells have—how they form, all the ways you can defeat them, and all the ways they can escape and defeat a treatment. And when we have that we win. Because every cancer cell starts naïve. It doesn’t know what we have waiting in the freezer for it. Infectious diseases are a different story; they share their knowledge as they spread. They learn from us as they move from person to person. But every person’s cancer starts naïve. And this is why we will beat it.”


It's a story with all the usual trappings of a technology race. Patent battles and intellectual property lawsuits. Stunning breakthroughs. And of course, the dystopian nightmares that seem to accompany genetics more than any other form of science.

Doudna is a highly regarded biochemist, but she told me that not long ago she considered attending medical school or perhaps going into business. She said that she wanted to have an effect on the world and had begun to fear that the impact of her laboratory research might be limited. The promise of her work on CRISPR, however, has persuaded her to remain in the lab. She told me that she was constantly amazed by its potential, but when I asked if she had ever wondered whether the powerful new tool might do more harm than good she looked uncomfortable. “I lie in bed almost every night and ask myself that question,” she said. “When I’m ninety, will I look back and be glad about what we have accomplished with this technology? Or will I wish I’d never discovered how it works?”

Her eyes narrowed, and she lowered her voice almost to a whisper. “I have never said this in public, but it will show you where my psyche is,” she said. “I had a dream recently, and in my dream”—she mentioned the name of a leading scientific researcher—“had come to see me and said, ‘I have somebody very powerful with me who I want you to meet, and I want you to explain to him how this technology functions.’ So I said, Sure, who is it? It was Adolf Hitler. I was really horrified, but I went into a room and there was Hitler. He had a pig face and I could only see him from behind and he was taking notes and he said, ‘I want to understand the uses and implications of this amazing technology.’ I woke up in a cold sweat. And that dream has haunted me from that day. Because suppose somebody like Hitler had access to this—we can only imagine the kind of horrible uses he could put it to.”