Mathematics of why hipsters all dress the same

Here comes the crucial twist. In all of the examples so far, we assumed that everyone had instant knowledge of what everyone else was wearing. People knew exactly what the mainstream trend was. But in reality, there are always delays. It takes time for a signal to propagate across a brain; likewise it takes time for hipsters to read Complex or Pitchfork or whatever in order to figure out how to be contrarian.

So Touboul introduced a delay into the model. People would base their decisions not on the current state of affairs, but on the state of affairs some number of turns prior.

What Touboul noticed is that if you increase the delay factor past a certain point, something amazing happens. Out of what appears to be random noise, a pattern emerges. All of the hipsters start to synchronize, and they start to oscillate in unison.
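The effect is easy to see in a toy simulation. This is a much-simplified binary sketch, not Touboul's actual equations: the agent rule, update fraction, and noise level below are illustrative choices of mine. Each "hipster" reacts contrarily to the majority style as it stood some number of turns ago.

```python
import random

def hipster_dynamics(n=500, steps=300, delay=20, noise=0.1,
                     update_frac=0.1, seed=1):
    """Toy anti-conformist dynamics with a decision delay.

    Each agent wears one of two styles (+1 or -1). Each turn, a random
    fraction of agents reconsiders: with probability `noise` they pick
    a style at random; otherwise they adopt the opposite of the
    majority style as it stood `delay` turns ago. Returns the mean
    style at each turn.
    """
    rng = random.Random(seed)
    states = [rng.choice([-1, 1]) for _ in range(n)]
    means = [sum(states) / n]
    for _ in range(steps):
        # the (stale) majority signal everyone reacts to
        seen = means[max(0, len(means) - 1 - delay)]
        majority = 1 if seen >= 0 else -1
        for i in rng.sample(range(n), int(n * update_frac)):
            if rng.random() < noise:
                states[i] = rng.choice([-1, 1])
            else:
                states[i] = -majority  # contrarian to the stale consensus
        means.append(sum(states) / n)
    return means
```

With `delay=0` the contrarians cancel each other out and the mean style hovers near zero; crank the delay up and everyone overreacts to stale information in lockstep, and the population swings between the two styles in large synchronized oscillations.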

This mathematician must be an early frontrunner for the Nobel Prize.

In all seriousness, though, the model has a certain explanatory elegance, akin to Schelling's Segregation Model.

Human-level AI always 25 years away

Rodney Brooks says recent fears about malevolent, superintelligent AI are not warranted.

I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.  Recent advances in deep machine learning let us teach our machines things like how to distinguish classes of inputs and to fit curves to time data.  This lets our machines “know” whether an image is that of a cat or not, or to “know” what is about to fail as the temperature increases in a particular sensor inside a jet engine.  But this is only part of being intelligent, and Moore’s Law applied to this very real technical advance will not by itself bring about human level or super human level intelligence.  While deep learning may come up with a category of things appearing in videos that correlates with cats, it doesn’t help very much at all in “knowing” what catness is, as distinct from dogness, nor that those concepts are much more similar to each other than to salamanderness.  And deep learning does not help in giving a machine “intent”, or any overarching goals or “wants”.  And it doesn’t help a machine explain how it is that it “knows” something, or what the implications of the knowledge are, or when that knowledge might be applicable, or counterfactually what would be the consequences of that knowledge being false.  Malevolent AI would need all these capabilities, and then some.  Both an intent to do something and an understanding of human goals, motivations, and behaviors would be keys to being evil towards humans.

...

And, there is a further category error that we may be making here.  That is the intellectual shortcut that says computation and brains are the same thing.  Maybe, but perhaps not.

In the 1930s Turing was inspired by how “human computers”, the people who did computations for physicists and ballistics experts alike, followed simple sets of rules while calculating to produce the first models of abstract computation. In the 1940s McCulloch and Pitts at MIT used what was known about neurons and their axons and dendrites to come up with models of how computation could be implemented in hardware, with very, very abstract models of those neurons.  Brains were the metaphors used to figure out how to do computation.  Over the last 65 years those models have now gotten flipped around and people use computers as the metaphor for brains.  So much so that enormous resources are being devoted to “whole brain simulations”.  I say show me a simulation of the brain of a simple worm that produces all its behaviors, and then I might start to believe that jumping to the big kahuna of simulating the cerebral cortex of a human has any chance at all of being successful in the next 50 years.  And then only if we are extremely lucky.

In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, it would need to have the dexterous hands and/or other tools that could out manipulate people, and to have a deep understanding of humans in order to outwit them. Each of these requires much harder innovations than a winged vehicle landing on a tree branch.  It is going to take a lot of deep thought and hard work from thousands of scientists and engineers.  And, most likely, centuries.

As Brooks notes, a study of 95 predictions made since 1950 about when human-level AI will be achieved found that nearly all of them place its arrival 15-25 years out. A quarter century seems to be the default human time frame for predicting technological advances that are plausible but not imminent.

Salute to the Cold War game theorists

The new field of game theory had already provoked several re-thinks about nuclear policy in the 1950s and 1960s, and that’s what saved us. In the 1950s, game theorist John von Neumann understood that nuclear weapons imposed an existential crisis on humanity, and required a completely different attitude towards conflict. He developed the doctrine of Mutually Assured Destruction (MAD), based on concepts of rational deterrence and the Nash equilibrium. By the time he died in 1957, he’d taught the Pentagon policy-makers that if the U.S. and the Soviet Union could utterly destroy each other even after one launched a nuclear first strike, there could be no rational incentive for nuclear war. This was the most important insight in applied psychology, ever. You don’t have to feel emotional sympathy with the enemy; you only have to expect that he will act in accordance with his perceived costs, benefits, and risks. The more certain he is that a nuclear assault will result in his own extermination, the less likely he is to launch one.

...

So, even before I was born, the game theorists had tamed the existential threat of nuclear weapons through some simple psychological insights. We must respect the enemy’s rationality even if we cannot sympathize with his ideals. We must understand his costs and benefits even if we don’t share his fears and hopes. We must behave rationally enough that we don’t attack first (which would be suicidal) – yet we must act crazy and vengeful enough that the enemy thinks we’ll retaliate if we’re attacked, even after it would be futile and spiteful (Robert Frank nicely explained this deterrence logic in his classic book Passions within Reason.) And we must understand that if both players do this, both will be safe.

It's not Memorial Day, it's Veterans Day, but this piece on appreciating the Cold War game theorists still felt timely.
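The deterrence logic can be sketched as a tiny payoff game. The payoff numbers here are purely illustrative assumptions of mine, but they capture the structure: with a credible second-strike capability, any first strike ends in mutual destruction, so "hold" is each side's best response and mutual restraint is the Nash equilibrium.

```python
MOVES = ("hold", "strike")

# (our_move, their_move) -> (our_payoff, their_payoff); numbers illustrative
PAYOFFS = {
    ("hold", "hold"):     (0, 0),        # tense but survivable peace
    ("strike", "hold"):   (-100, -100),  # retaliation: both destroyed
    ("hold", "strike"):   (-100, -100),
    ("strike", "strike"): (-100, -100),
}

def best_response(their_move):
    """Our payoff-maximizing move, given what the other side does."""
    return max(MOVES, key=lambda m: PAYOFFS[(m, their_move)][0])

def is_nash(us, them):
    """True if neither side gains by unilaterally deviating."""
    us_ok = all(PAYOFFS[(us, them)][0] >= PAYOFFS[(m, them)][0]
                for m in MOVES)
    them_ok = all(PAYOFFS[(us, them)][1] >= PAYOFFS[(us, m)][1]
                  for m in MOVES)
    return us_ok and them_ok
```

The key property is exactly the one von Neumann drilled into the Pentagon: the more certain each side is that striking first guarantees its own destruction, the more dominant "hold" becomes, with no sympathy for the enemy required.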

Alcohol vs guns

Tyler Cowen:

I would gladly see a cultural shift toward the view that gun ownership is dangerous and undesirable, much as the cultural attitudes toward smoking have shifted since the 1960s.

I am, however, consistent.  I also think we should have a cultural shift toward the view that alcohol — and yes I mean all alcohol — is at least as dangerous and undesirable.  I favor a kind of voluntary prohibition on alcohol.  It is obvious to me that alcohol is one of the great social evils and when I read the writings of the prohibitionists, while I don’t agree with their legal remedies, their arguments make sense to me.  It remains one of the great undervalued social movements.  For mostly cultural reasons, it is now a largely forgotten remnant of progressivism and it probably will stay that way, given that “the educated left” mostly joined with America’s shift to being “a wine nation” in the 1970s.

Guns, like alcohol, have many legitimate uses, and they are enjoyed by many people in a responsible manner.  In both cases, there is an elite which has absolutely no problems handling the institution in question, but still there is the question of whether the nation really can have such bifurcated social norms, namely one set of standards for the elite and another set for everybody else.

In part our guns problem is an alcohol problem.  According to Mark Kleiman, half the people in prison were drinking when they did whatever they did.  (Admittedly the direction of causality is murky but theory points in some rather obvious directions.)  Our car crash problem – which kills many thousands of Americans each year — is also in significant part an alcohol problem.  There are connections between alcohol and wife-beating and numerous other social ills, including health issues of course.

It worries me when people focus on “guns” and do not accord an equivalent or indeed greater status to “alcohol” as a social problem.  Many of those people drink lots of alcohol, and would not hesitate to do so in front of their children, although they might regard owning an AK-47, or showing a pistol to the kids, as repugnant.  I believe they are a mix of hypocritical and unaware, even though many of these same individuals have very high IQs and are well schooled in the social sciences.  Perhaps they do not want to see the parallels.

That's Cowen at his best, and the whole thing is well worth a read.

My younger sister was driving to pick me up at the airport in Chicago years ago and a drunk driver swerved into her Prius and spun her out on the highway in the middle of traffic. Luckily no other car was approaching fast enough to hit her after her car spun out, but the thought of what might have happened is so terrible as to be unthinkable for our entire family. The drunk driver flipped his car over but also survived.

On the one hand, I'm glad ride-hailing services like Uber and Lyft exist now, as they probably decrease the likelihood that people drive drunk when it's inconvenient to hail a cab (I've lived in many major cities in the US, and hailing a cab was only something I ever counted on in Manhattan).

On the other hand, maybe that is offset to some degree by increased drinking overall. The cultural glorification (or at least forgiveness) of drinking to excess is troubling. We tend to attach a heroic narrative to feats of overconsumption of alcohol, recounting the attendant ridiculous behavior as comedy, when perhaps we should be amplifying the perpetrator's sense of shame to reduce the likelihood of recurrence.