Thermodynamic theory of evolution

The teleology and historical contingency of biology, said the evolutionary biologist Ernst Mayr, make it unique among the sciences. Both of these features stem from perhaps biology’s only general guiding principle: evolution. It depends on chance and randomness, but natural selection gives it the appearance of intention and purpose. Animals are drawn to water not by some magnetic attraction, but because of their instinct, their intention, to survive. Legs serve the purpose of, among other things, taking us to the water.
 
Mayr claimed that these features make biology exceptional — a law unto itself. But recent developments in nonequilibrium physics, complex systems science and information theory are challenging that view.
 
Once we regard living things as agents performing a computation — collecting and storing information about an unpredictable environment — capacities and considerations such as replication, adaptation, agency, purpose and meaning can be understood as arising not from evolutionary improvisation, but as inevitable corollaries of physical laws. In other words, there appears to be a kind of physics of things doing stuff, and evolving to do stuff. Meaning and intention — thought to be the defining characteristics of living systems — may then emerge naturally through the laws of thermodynamics and statistical mechanics.
 

One of the most fascinating reads of my past half year.

I recently linked to a short piece by Pinker on how an appreciation for the second law of thermodynamics might help one make some peace with the entropy of the world. It's inevitable, so don't blame yourself.

And yet there is something beautiful about life in its ability to create pockets of order and information amidst the entropy and chaos.

A genome, then, is at least in part a record of the useful knowledge that has enabled an organism’s ancestors — right back to the distant past — to survive on our planet. According to David Wolpert, a mathematician and physicist at the Santa Fe Institute who convened the recent workshop, and his colleague Artemy Kolchinsky, the key point is that well-adapted organisms are correlated with their environment. If a bacterium swims dependably toward the left or the right when there is a food source in that direction, it is better adapted, and will flourish more, than one that swims in random directions and so only finds the food by chance. A correlation between the state of the organism and that of its environment implies that they share information in common. Wolpert and Kolchinsky say that it’s this information that helps the organism stay out of equilibrium — because, like Maxwell’s demon, it can then tailor its behavior to extract work from fluctuations in its surroundings. If it did not acquire this information, the organism would gradually revert to equilibrium: It would die.
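
It's worth pausing to make that "shared information" point concrete. Here is a minimal sketch (the 90% accuracy and the 50/50 food placement are numbers I invented for illustration; none of this comes from Wolpert and Kolchinsky's actual models), computing how many bits of mutual information a bacterium's heading shares with the location of food:

```python
from math import log2

def mutual_information(joint):
    """Mutual information (in bits) of a dict {(env, org): probability}."""
    p_env, p_org = {}, {}
    for (e, o), p in joint.items():
        p_env[e] = p_env.get(e, 0.0) + p
        p_org[o] = p_org.get(o, 0.0) + p
    return sum(p * log2(p / (p_env[e] * p_org[o]))
               for (e, o), p in joint.items() if p > 0)

# Food is equally likely to be on the left or the right.
# The adapted bug swims toward the food 90% of the time;
# the random bug ignores the food entirely.
adapted = {("L", "L"): 0.45, ("L", "R"): 0.05,
           ("R", "L"): 0.05, ("R", "R"): 0.45}
random_bug = {("L", "L"): 0.25, ("L", "R"): 0.25,
              ("R", "L"): 0.25, ("R", "R"): 0.25}

print(f"adapted bacterium shares {mutual_information(adapted):.2f} bits with its environment")
print(f"random bacterium shares  {mutual_information(random_bug):.2f} bits")
```

The adapted bug comes out at roughly half a bit; the random swimmer at zero. That shared half bit is, in this framing, the resource that lets the organism behave like Maxwell's demon.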
 
Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information. And life turns out to be extremely good at it. Landauer’s resolution of the conundrum of Maxwell’s demon set an absolute lower limit on the amount of energy a finite-memory computation requires: namely, the energetic cost of forgetting. The best computers today are far, far more wasteful of energy than that, typically consuming and dissipating more than a million times more. But according to Wolpert, “a very conservative estimate of the thermodynamic efficiency of the total computation done by a cell is that it is only 10 or so times more than the Landauer limit.”
 
The implication, he said, is that “natural selection has been hugely concerned with minimizing the thermodynamic cost of computation. It will do all it can to reduce the total amount of computation a cell must perform.” In other words, biology (possibly excepting ourselves) seems to take great care not to overthink the problem of survival. This issue of the costs and benefits of computing one’s way through life, he said, has been largely overlooked in biology so far.
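
Just to put rough numbers on the comparison above: Landauer's bound works out to k_B T ln 2 of dissipated energy per erased bit, only a few zeptojoules at body temperature. The sketch below simply replays the article's multipliers (roughly 10x for a cell, more than a million-x for our best computers) against that bound; the body-temperature figure is my own choice for scale.

```python
from math import log

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_bound(temp_kelvin):
    """Minimum energy (joules) dissipated to erase one bit at temperature T."""
    return K_B * temp_kelvin * log(2)

T_BODY = 310.0  # roughly body temperature, in kelvin (my choice, for scale)
bound = landauer_bound(T_BODY)

print(f"Landauer bound at {T_BODY:.0f} K: {bound:.2e} J per bit erased")
# The multipliers below just restate the article's claims against that bound.
print(f"a cell at ~10x the bound:        {10 * bound:.2e} J per bit")
print(f"a chip at ~1,000,000x the bound: {1e6 * bound:.2e} J per bit")
```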
 

I don't know if that's true, but it is so elegant as to be breathtaking. What all of this leads to is a different way of defining adaptation and evolution, one that departs from the familiar Darwinian picture.

Adaptation here has a more specific meaning than the usual Darwinian picture of an organism well-equipped for survival. One difficulty with the Darwinian view is that there’s no way of defining a well-adapted organism except in retrospect. The “fittest” are those that turned out to be better at survival and replication, but you can’t predict what fitness entails. Whales and plankton are well-adapted to marine life, but in ways that bear little obvious relation to one another.
 
England’s definition of “adaptation” is closer to Schrödinger’s, and indeed to Maxwell’s: A well-adapted entity can absorb energy efficiently from an unpredictable, fluctuating environment. It is like the person who keeps her footing on a pitching ship while others fall over because she’s better at adjusting to the fluctuations of the deck. Using the concepts and methods of statistical mechanics in a nonequilibrium setting, England and his colleagues argue that these well-adapted systems are the ones that absorb and dissipate the energy of the environment, generating entropy in the process.
 

I'm tempted to see an analogous definition of successful corporate adaptation in this, though I'm inherently skeptical of analogy and metaphor. Still, reading a paragraph like the one below, one can't help but think of how critical it is for companies to remember the right lessons from the past, and not too many of the wrong ones.

There’s a thermodynamic cost to storing information about the past that has no predictive value for the future, Still and colleagues show. To be maximally efficient, a system has to be selective. If it indiscriminately remembers everything that happened, it incurs a large energy cost. On the other hand, if it doesn’t bother storing any information about its environment at all, it will be constantly struggling to cope with the unexpected. “A thermodynamically optimal machine must balance memory against prediction by minimizing its nostalgia — the useless information about the past,” said a co-author, David Sivak, now at Simon Fraser University in Burnaby, British Columbia. In short, it must become good at harvesting meaningful information — that which is likely to be useful for future survival.
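
As I read Still and her colleagues, "nostalgia" has a precise form: the information a machine's state stores about the current signal, minus the portion of it that remains informative about the next signal. Here is a toy version (entirely invented: the environment is a persistent "trend" bit plus a fresh "noise" coin flip each step) showing that a machine which indiscriminately stores the noise pays a full extra bit of memory and gains nothing in prediction:

```python
from itertools import product
from math import log2

def mutual_information(joint):
    """I(A;B) in bits for a dict {(a, b): probability}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

STICK = 0.9  # probability the persistent "trend" bit carries over to the next step

def joints_for(machine):
    """Joint distributions of (stored state, current signal) and (stored state, next signal).

    Each signal is (trend, noise): trend persists with probability STICK,
    noise is a fresh fair coin flip that predicts nothing about the future.
    """
    now, nxt = {}, {}
    for trend, noise, trend2, noise2 in product((0, 1), repeat=4):
        p = 0.25 * (STICK if trend2 == trend else 1 - STICK) * 0.5
        s = machine(trend, noise)
        now[(s, (trend, noise))] = now.get((s, (trend, noise)), 0.0) + p
        nxt[(s, (trend2, noise2))] = nxt.get((s, (trend2, noise2)), 0.0) + p
    return now, nxt

machines = [("selective (stores only the trend)", lambda t, n: t),
            ("indiscriminate (stores everything)", lambda t, n: (t, n))]

for name, machine in machines:
    now, nxt = joints_for(machine)
    memory, predictive = mutual_information(now), mutual_information(nxt)
    print(f"{name}: memory={memory:.2f} bits, predictive={predictive:.2f} bits, "
          f"nostalgia={memory - predictive:.2f} bits")
```

Both machines predict the future equally well; only one of them is carrying useless baggage about the past, and that baggage is what, in this framework, has a thermodynamic price.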
 

This theory even offers its own explanation for death.

It’s certainly not simply a matter of things wearing out. “Most of the soft material we are made of is renewed before it has the chance to age,” Meyer-Ortmanns said. But this renewal process isn’t perfect. The thermodynamics of information copying dictates that there must be a trade-off between precision and energy. An organism has a finite supply of energy, so errors necessarily accumulate over time. The organism then has to spend an increasingly large amount of energy to repair these errors. The renewal process eventually yields copies too flawed to function properly; death follows.
 
Empirical evidence seems to bear that out. It has long been known that cultured human cells seem able to replicate no more than 40 to 60 times (called the Hayflick limit) before they stop and become senescent. And recent observations of human longevity have suggested that there may be some fundamental reason why humans can’t survive much beyond age 100.
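
To convince myself the death argument works even as crude arithmetic, here is a deliberately toy simulation: a fixed repair budget that can undo only so much damage per renewal cycle, while damaged templates add slightly more new damage each time they are copied. All the constants are invented, so the cycle count it prints is meaningless in itself; the interesting part is the shape, a slow creep followed by runaway failure.

```python
# Every constant here is invented for illustration; this is not a model
# of real biology or a derivation of the Hayflick limit.

COPY_ERROR   = 0.02   # damage added per renewal cycle from an undamaged template
REPAIR_POWER = 0.015  # damage the fixed energy budget can undo each cycle
FAILURE      = 0.5    # damage fraction beyond which copies no longer function

damage, cycle = 0.0, 0
while damage < FAILURE and cycle < 1000:
    cycle += 1
    damage += COPY_ERROR * (1.0 + damage)     # damaged templates copy worse
    damage = max(damage - REPAIR_POWER, 0.0)  # repair, limited by the budget

if damage >= FAILURE:
    print(f"copies become non-functional after ~{cycle} renewal cycles")
else:
    print("still functional after 1000 cycles")
```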
 

Again, it's tempting to look beyond humans to corporations, and to ask why companies die out over time. Is there a similar process at work, in which a company's replication of its knowledge introduces more and more errors over time until the company dies a natural death?

I suspect the mechanism at work in companies is different, but that's an article for another time. The parallel that does seem promising is the idea that successful new companies are more likely to emerge from fluctuating, nonequilibrium environments than from gentle, relatively static ones.