Thermodynamic theory of evolution

The teleology and historical contingency of biology, said the evolutionary biologist Ernst Mayr, make it unique among the sciences. Both of these features stem from perhaps biology’s only general guiding principle: evolution. It depends on chance and randomness, but natural selection gives it the appearance of intention and purpose. Animals are drawn to water not by some magnetic attraction, but because of their instinct, their intention, to survive. Legs serve the purpose of, among other things, taking us to the water.
 
Mayr claimed that these features make biology exceptional — a law unto itself. But recent developments in nonequilibrium physics, complex systems science and information theory are challenging that view.
 
Once we regard living things as agents performing a computation — collecting and storing information about an unpredictable environment — capacities and considerations such as replication, adaptation, agency, purpose and meaning can be understood as arising not from evolutionary improvisation, but as inevitable corollaries of physical laws. In other words, there appears to be a kind of physics of things doing stuff, and evolving to do stuff. Meaning and intention — thought to be the defining characteristics of living systems — may then emerge naturally through the laws of thermodynamics and statistical mechanics.
 

One of the most fascinating reads of my past half year.

I recently linked to a short piece by Pinker on how an appreciation for the second law of thermodynamics might help one come to some peace with the entropy of the world. It's inevitable, so don't blame yourself.

And yet there is something beautiful about life in its ability to create pockets of order and information amidst the entropy and chaos.

A genome, then, is at least in part a record of the useful knowledge that has enabled an organism’s ancestors — right back to the distant past — to survive on our planet. According to David Wolpert, a mathematician and physicist at the Santa Fe Institute who convened the recent workshop, and his colleague Artemy Kolchinsky, the key point is that well-adapted organisms are correlated with that environment. If a bacterium swims dependably toward the left or the right when there is a food source in that direction, it is better adapted, and will flourish more, than one that swims in random directions and so only finds the food by chance. A correlation between the state of the organism and that of its environment implies that they share information in common. Wolpert and Kolchinsky say that it’s this information that helps the organism stay out of equilibrium — because, like Maxwell’s demon, it can then tailor its behavior to extract work from fluctuations in its surroundings. If it did not acquire this information, the organism would gradually revert to equilibrium: It would die.
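The "shared information" Wolpert and Kolchinsky describe is what information theory calls mutual information between the organism's state and the environment's. A minimal Python sketch, with toy probabilities of my own invention (not numbers from the article): a bacterium whose swim direction tracks the food source shares bits with its environment, while a random swimmer shares none.

```python
import math

def mutual_information(joint):
    """I(state; environment) in bits from a joint table {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# (swim direction, food location) probabilities, made up for illustration.
# Adapted bacterium: swims toward the food 90% of the time.
adapted = {("left", "left"): 0.45, ("right", "left"): 0.05,
           ("left", "right"): 0.05, ("right", "right"): 0.45}
# Random swimmer: direction is independent of where the food is.
random_swimmer = {(d, f): 0.25 for d in ("left", "right")
                  for f in ("left", "right")}

print(mutual_information(adapted))         # ≈ 0.53 bits shared with environment
print(mutual_information(random_swimmer))  # 0.0 bits, no correlation
```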
 
Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information. And life turns out to be extremely good at it. Landauer’s resolution of the conundrum of Maxwell’s demon set an absolute lower limit on the amount of energy a finite-memory computation requires: namely, the energetic cost of forgetting. The best computers today are far, far more wasteful of energy than that, typically consuming and dissipating more than a million times more. But according to Wolpert, “a very conservative estimate of the thermodynamic efficiency of the total computation done by a cell is that it is only 10 or so times more than the Landauer limit.”
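To put rough numbers on those comparisons: Landauer's bound is k_B · T · ln 2 joules per erased bit. A quick back-of-the-envelope calculation (the temperature choice is mine; the 10x and million-fold factors are restated from the article, not derived here):

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K (exact SI value)
T = 300.0                 # roughly room temperature, K (my assumption)

# Landauer's bound: erasing one bit must dissipate at least k_B * T * ln 2.
landauer_per_bit = k_B * T * math.log(2)
print(f"Landauer limit: {landauer_per_bit:.2e} J per bit erased")  # ≈ 2.87e-21 J

# Restating the article's comparisons (the 10x and million-fold factors
# come from the article, not from a calculation here):
print(f"cell, ~10x the limit:          {10 * landauer_per_bit:.1e} J/bit")
print(f"computer, >1,000,000x limit:   {1e6 * landauer_per_bit:.1e} J/bit")
```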
 
The implication, he said, is that “natural selection has been hugely concerned with minimizing the thermodynamic cost of computation. It will do all it can to reduce the total amount of computation a cell must perform.” In other words, biology (possibly excepting ourselves) seems to take great care not to overthink the problem of survival. This issue of the costs and benefits of computing one’s way through life, he said, has been largely overlooked in biology so far.
 

I don't know if that's true, but it is so elegant as to be breathtaking. What this all leads to is a new definition of adaptation, different from the Darwinian one.

Adaptation here has a more specific meaning than the usual Darwinian picture of an organism well-equipped for survival. One difficulty with the Darwinian view is that there’s no way of defining a well-adapted organism except in retrospect. The “fittest” are those that turned out to be better at survival and replication, but you can’t predict what fitness entails. Whales and plankton are well-adapted to marine life, but in ways that bear little obvious relation to one another.
 
England’s definition of “adaptation” is closer to Schrödinger’s, and indeed to Maxwell’s: A well-adapted entity can absorb energy efficiently from an unpredictable, fluctuating environment. It is like the person who keeps her footing on a pitching ship while others fall over because she’s better at adjusting to the fluctuations of the deck. Using the concepts and methods of statistical mechanics in a nonequilibrium setting, England and his colleagues argue that these well-adapted systems are the ones that absorb and dissipate the energy of the environment, generating entropy in the process.
 

I'm tempted to see an analogous definition of successful corporate adaptation in this, though I'm inherently skeptical of analogy and metaphor. Still, reading a paragraph like this, one can't help thinking how critical it is for companies to remember the right lessons from the past, and not too many of the wrong ones.

There’s a thermodynamic cost to storing information about the past that has no predictive value for the future, Susanne Still and colleagues show. To be maximally efficient, a system has to be selective. If it indiscriminately remembers everything that happened, it incurs a large energy cost. On the other hand, if it doesn’t bother storing any information about its environment at all, it will be constantly struggling to cope with the unexpected. “A thermodynamically optimal machine must balance memory against prediction by minimizing its nostalgia — the useless information about the past,” said a co-author, David Sivak, now at Simon Fraser University in Burnaby, British Columbia. In short, it must become good at harvesting meaningful information — that which is likely to be useful for future survival.
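"Nostalgia" can be made concrete with mutual-information bookkeeping. A toy sketch (the weather/coin setup and its numbers are my illustration, not the model in the actual paper): a memory that stores a persistent environmental signal is mostly predictive, while a memory that stores unrelated noise is pure nostalgia.

```python
import math

def mi(joint):
    """Mutual information I(X;Y) in bits from a joint table {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Environment: tomorrow's weather repeats today's 90% of the time.
# Memory A stores today's weather, so it partly predicts tomorrow:
weather_mem = {("sun", "sun"): 0.45, ("sun", "rain"): 0.05,
               ("rain", "sun"): 0.05, ("rain", "rain"): 0.45}
# Memory B stores an unrelated fair coin flip, which predicts nothing:
coin_mem = {("heads", "sun"): 0.25, ("heads", "rain"): 0.25,
            ("tails", "sun"): 0.25, ("tails", "rain"): 0.25}

for name, joint in [("weather", weather_mem), ("coin", coin_mem)]:
    stored = 1.0             # each memory holds one full bit about the past
    predictive = mi(joint)   # how much of that bit forecasts the future
    print(name, "nostalgia (bits):", round(stored - predictive, 3))
```

The weather memory wastes under half its stored bit; the coin memory is all waste, exactly the kind of record an optimal machine should decline to pay for.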
 

This theory even offers its own explanation for death.

It’s certainly not simply a matter of things wearing out. “Most of the soft material we are made of is renewed before it has the chance to age,” Meyer-Ortmanns said. But this renewal process isn’t perfect. The thermodynamics of information copying dictates that there must be a trade-off between precision and energy. An organism has a finite supply of energy, so errors necessarily accumulate over time. The organism then has to spend an increasingly large amount of energy to repair these errors. The renewal process eventually yields copies too flawed to function properly; death follows.
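A cartoonishly simple way to see why errors must eventually win (illustrative arithmetic only, not Meyer-Ortmanns's actual model): if each renewal cycle miscopies with probability eps, the fraction of information surviving n cycles untouched decays exponentially, and the precision-energy trade-off means eps can never be driven to zero on a finite energy budget.

```python
# Probability that a given piece of information survives n renewal
# cycles uncorrupted, given a per-copy error probability eps.
def intact_fraction(eps, n):
    return (1 - eps) ** n

# Tighter copying (smaller eps) costs more energy under the trade-off,
# yet any finite eps still decays toward zero given enough cycles.
for eps in (0.001, 0.01):
    print(eps, [round(intact_fraction(eps, n), 3) for n in (10, 100, 1000)])
```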
 
Empirical evidence seems to bear that out. It has long been known that cultured human cells seem able to replicate no more than 40 to 60 times (the Hayflick limit) before they stop and become senescent. And recent observations of human longevity have suggested that there may be some fundamental reason why humans can’t survive much beyond age 100.
 

Again, it's tempting to look beyond humans and at corporations, and why companies die out over time. Is there a similar process at work, in which a company's replication of its knowledge introduces more and more errors over time until the company dies a natural death?

I suspect the mechanism at work in companies is different, but that's an article for another time. The parallel that does seem promising is the idea that successful new companies are more likely to emerge in fluctuating nonequilibrium environments than in gentle, relatively static ones.

Why are apes skinny and humans fat?

Scientists studied dead humans and bonobos in an effort to understand why humans became the fat primate. What happened when chimps and humans diverged? It's not clear, but the results millions of years later are.

...humans got fat. Chimps and bonobos are 13 percent skin, and we're only 6 percent skin, but we compensate for that by being up to 36 percent body fat on the high end of average, while bonobos average 4 percent. That's a wildly disproportionate fatness differential.
 

From an interview of one of the authors of the paper.

So what happened on the path from common ancestor to Homo sapiens?
One of the things is, you've gotta shift the body around and change the muscle from the forelimbs if you're a quadrupedal ape. Our ancestors—and most apes—can venture into open areas, but they live in forests. They're really tied to having tree cover available, because they get hot.
 
So we developed fat so we could get away from forests?
Compared to the apes, we have less muscle, which is an energy savings, because it's such an expensive tissue. Two important things about the way we store fat: We store it around our buttocks and thighs, but you want to make sure that you're storing fat so it doesn't interfere with locomotion. You don't want it on your feet, for instance. So you concentrate it around the center of gravity. And you also don't want it to interfere with being able to get rid of heat.
 
What was the benefit of having fat down low and weak arms?
If you're moving away from the forest and tree cover, you want to be able to exploit food in a more mosaic habitat that has areas of bush and a few forests around rivers. You want to be able to move into a lot of different areas. So you've gotta get rid of your hair, and really ramp up those sweat glands. Our skin has really been reorganized for a lot of different functions.
 
Do chimps and bonobos not have sweat glands? 
They have sweat glands. They're not really functioning. All primates have eccrine sweat glands in their hands and feet. Monkeys have them on their chests. [But] they're not stimulated by heat.

Game theory of life

In what appears to be the first study of its kind, computer scientists report that an algorithm discovered more than 50 years ago in game theory and now widely used in machine learning is mathematically identical to the equations used to describe the distribution of genes within a population of organisms. Researchers may be able to use the algorithm, which is surprisingly simple and powerful, to better understand how natural selection works and how populations maintain their genetic diversity.

By viewing evolution as a repeated game, in which individual players, in this case genes, try to find a strategy that creates the fittest population, researchers found that evolution values both diversity and fitness.

Some biologists say that the findings are too new and theoretical to be of use; researchers don’t yet know how to test the ideas in living organisms. Others say the surprising connection, published Monday in the advance online version of the Proceedings of the National Academy of Sciences, may help scientists understand a puzzling feature of natural selection: The fittest organisms don’t always wipe out their weaker competition. Indeed, as evidenced by the menagerie of life on Earth, genetic diversity reigns.
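The correspondence is easy to sketch: one step of the multiplicative weights update multiplies each strategy's weight by its fitness and renormalizes, which is the same algebraic form as the discrete replicator equation of population genetics. A toy Python illustration (the two-allele setup and alternating payoffs are mine, not the paper's) of how a fluctuating environment keeps both "alleles" in the population:

```python
def multiplicative_weights(freqs, payoffs, eps=0.1):
    """One multiplicative-weights step. Algebraically the discrete replicator
    update x_i' = x_i * f_i / (mean fitness), with fitness f_i = 1 + eps * payoff_i."""
    weighted = [x * (1 + eps * p) for x, p in zip(freqs, payoffs)]
    total = sum(weighted)
    return [w / total for w in weighted]

# Two "alleles" whose payoffs alternate as the environment fluctuates.
freqs = [0.5, 0.5]
for t in range(40):
    payoffs = [1, 0] if t % 2 == 0 else [0, 1]
    freqs = multiplicative_weights(freqs, payoffs)

print(freqs)  # both alleles persist; neither sweeps to fixation
```

With a fixed payoff ordering the favored allele would steadily approach fixation; the alternation is what preserves diversity, echoing the article's point that the fittest don't always wipe out their competition.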

Fascinating. It's tempting to try to imagine where the value of both fitness and diversity might extend outside of genetics. Clearly it has value in portfolio theory in finance; perhaps it matters in organizations, too? Personal ideology? Friend selection? Team construction?