When you come to the 2^100 forks in the road...

In some simple games, it is easy to spot Nash equilibria. For example, if I prefer Chinese food and you prefer Italian, but our strongest preference is to dine together, two obvious equilibria are for both of us to go to the Chinese restaurant or both of us to go to the Italian restaurant. Even if we start out knowing only our own preferences and we can’t communicate our strategies before the game, it won’t take too many rounds of missed connections and solitary dinners before we thoroughly understand each other’s preferences and, hopefully, find our way to one or the other equilibrium.
 
But imagine if the dinner plans involved 100 people, each of whom has decided preferences about which others he would like to dine with, and none of whom knows anyone else’s preferences. Nash proved in 1950 that even large, complicated games like this one do always have an equilibrium (at least, if the concept of a strategy is broadened to allow random choices, such as you choosing the Chinese restaurant with 60 percent probability). But Nash — who died in a car crash in 2015 — gave no recipe for how to calculate such an equilibrium.
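To make the two-restaurant game concrete, here is a minimal Python sketch with payoff numbers I've made up (the piece specifies none): I prefer Chinese, you prefer Italian, and eating alone is worth nothing to either of us. Brute force recovers the two obvious pure equilibria, and the indifference condition for the mixed equilibrium lands on exactly that 60 percent Chinese strategy.

```python
# Two-diner coordination game; the payoffs below are assumptions, not from the article.
import itertools

CHINESE, ITALIAN = 0, 1
# payoffs[(my_choice, your_choice)] = (my_payoff, your_payoff)
payoffs = {
    (CHINESE, CHINESE): (3, 2),   # I'm happier, and we're together
    (ITALIAN, ITALIAN): (2, 3),   # you're happier, and we're together
    (CHINESE, ITALIAN): (0, 0),   # solitary dinners
    (ITALIAN, CHINESE): (0, 0),
}

def is_pure_nash(mine, yours):
    """Neither of us can do better by unilaterally switching restaurants."""
    my_pay, your_pay = payoffs[(mine, yours)]
    my_best = max(payoffs[(alt, yours)][0] for alt in (CHINESE, ITALIAN))
    your_best = max(payoffs[(mine, alt)][1] for alt in (CHINESE, ITALIAN))
    return my_pay >= my_best and your_pay >= your_best

print([p for p in itertools.product((CHINESE, ITALIAN), repeat=2) if is_pure_nash(*p)])
# [(0, 0), (1, 1)] -- both Chinese, or both Italian

# Mixed equilibrium: I choose Chinese with probability q set so that you are
# indifferent between restaurants: 2*q == 3*(1 - q), i.e. q = 0.6 -- the
# "60 percent" strategy mentioned above (given these assumed payoffs).
print(3 / (2 + 3))  # 0.6
```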
 
By diving into the nitty-gritty of Nash’s proof, Babichenko and Rubinstein were able to show that in general, there’s no guaranteed method for players to find even an approximate Nash equilibrium unless they tell each other virtually everything about their respective preferences. And as the number of players in a game grows, the amount of time required for all this communication quickly becomes prohibitive.
 
For example, in the 100-player restaurant game, there are 2^100 ways the game could play out, and hence 2^100 preferences each player has to share. By comparison, the number of seconds that have elapsed since the Big Bang is only about 2^59.
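The arithmetic holds up, assuming a round 13.8-billion-year age for the universe:

```python
# Scale check for the comparison above; the universe's age is an assumed round figure.
from math import log2

outcomes = 2 ** 100  # ways the 100-player game could play out
seconds_since_big_bang = 13.8e9 * 365.25 * 24 * 3600

print(f"{outcomes:.2e}")                    # ~1.27e+30 outcomes
print(round(log2(seconds_since_big_bang)))  # ~59, i.e. roughly 2^59 seconds
```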
 

Interesting summary of a paper published last year that finds that for many games, there is no clear path to even an approximate Nash equilibrium. I don't know whether this is depressing or appropriate to the state of the world right now; it's probably both. Also, it's great to have mathematical confirmation of the impossibility of choosing where to eat with a large group.

Regret is a fascinating emotion. Jeff Bezos' story of leaving D.E. Shaw to start Amazon based on a regret minimization framework is now an iconic entrepreneurial myth, and in most contexts people frame regret the same way, as something to be minimized. That is, regret as a negative.

In the Bezos example, anticipated regret was a valuable input that helped him reach an optimal decision at a critical fork in his life. Is this its primary evolutionary purpose? Is regret only valuable because we feel its suffocating grip on the human heart and learn to avoid it in the future, as a decision-making feedback mechanism?

I commonly hear that people regret the things they didn't do more than the things they did. Is that true? Even in this day and age, when one indiscretion can ruin a person for life?

In storytelling, regret serves two common narrative functions. One is as the corrosive element that reduces a character, over a lifetime of exposure, to an embittered, cynical drag on those around them. The other is as the catalyst for the protagonist's critical life change; the Bezos decision is a win-win instance of that second variety.

I've seen regret in both guises, and while we valorize regret as life-changing, I suspect the volume of regret that chips away at people's souls outweighs the instances where it changes their lives for the better, though I have no way of quantifying that. Regardless, I have no contrarian take on minimizing regret for those who suffer from it.

In that sense, this finding on the near impossibility of achieving a Nash equilibrium in complex scenarios offers some comfort. What is life, or, perhaps more accurately, our perception of our own lives, but a series of decisions compounded across time?

We do a great job of coming up with analogies for how complex and varied the decision tree ahead of us is. The number of permutations of how a game of chess or Go might be played is greater than the number of atoms in the universe, we tell people. But we should do a better job of turning that same analogy backwards in time. Factor in the impact of other people at all those forks in the road, across a lifetime, and the decision tree behind us is just as dense as the one ahead. At any point in time, we are at a node on a tree with so many branches behind it that it exceeds our mind's grasp. Few of those branches are thick enough to deserve the heavy burden of regret.

One last tidbit from the piece that I wanted to highlight.

But the two fields have very different mindsets, which can hamper interdisciplinary communication: Economists tend to look for simple models that capture the essence of a complex interaction, while theoretical computer scientists are often more interested in understanding what happens as the models grow increasingly complex. “I wish my colleagues in economics were more aware, more interested in what computer science is doing,” McLennan said.