The Uncertainty of Risk

The startup economy is an example of an antifragile system rooted in optionality. Venture capitalists power the tech scene by making investments in nascent firms. These upfront costs are relatively small and capped. VC firms cannot lose more than they put in. But since there is no upper limit to success, the investment’s upside is potentially unbounded. Of course, venture capitalists need to make smart bets, but the business model doesn’t require them to be so good at predicting winners as to pick one every time. The payoffs from their few wildly successful investments more than make up for the capital lost to failures. While each startup is individually fragile, the startup economy as a whole is highly antifragile, and predictive accuracy is less important. Since the losses are finite and the gains are practically limitless, the antifragile startup economy benefits overall from greater variability in the success of new firms.

From a book review of Nate Silver's The Signal and the Noise, Nassim Nicholas Taleb's Antifragile, and James Owen Weatherall's The Physics of Wall Street at n+1.
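As a rough illustration of the payoff asymmetry described above, here is a minimal Monte Carlo sketch: each bet can lose at most its stake, while a rare winner pays off a large multiple of it. The hit rate and win multiple are made-up numbers chosen only for illustration; they are not figures from the review or the books.

```python
import random

random.seed(0)

def simulate_portfolio(n_startups=100, stake=1.0, hit_rate=0.05,
                       win_multiple=50.0, n_trials=10_000):
    """Average portfolio profit when the downside of each investment is
    capped at the stake and the rare winners return a large multiple of it.
    All parameter values are hypothetical."""
    totals = []
    for _ in range(n_trials):
        payoff = 0.0
        for _ in range(n_startups):
            if random.random() < hit_rate:
                payoff += stake * win_multiple   # rare, outsized win
            # otherwise the stake is simply lost -- losses are capped
        totals.append(payoff - n_startups * stake)
    return sum(totals) / len(totals)

# A fund that is "wrong" about 95% of its picks still comes out well ahead,
# because the few big wins dwarf the many capped losses.
print(round(simulate_portfolio(), 2))
```

The point of the sketch is that with this payoff shape, predictive accuracy matters far less than the asymmetry between bounded losses and open-ended gains.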

As the financial crises of the past three decades have painfully demonstrated, the global banking system is dangerously fragile. Financial institutions are so highly leveraged and so opaquely intertwined that the contagion from a wrong prediction (e.g. that housing prices will continue to rise) can quickly foment a systemic crisis as debt payments balloon and asset values shrivel. When the credit markets lock up and vaunted banks are suddenly insolvent, the authorities’ solution has been to shore up underwater balance sheets with cheap government loans. While allowing a few Too Big To Fail banks to use their “toxic assets” as collateral for taxpayer-guaranteed loans makes their individual financial positions more robust, all this new debt leaves the market as a whole more fragile, since the financial system is more heavily leveraged and fire-sale mergers consolidate capital and risk into even fewer institutions. These “solutions” to past crises transferred fragility from the individual banks to the overall financial system, creating the conditions for future collapse.
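The fragility here comes largely from leverage. A small sketch with hypothetical numbers (not taken from the review) shows how little asset values need to fall before a highly leveraged balance sheet is wiped out:

```python
def equity_after_drop(assets=100.0, leverage=30.0, asset_drop=0.04):
    """Remaining equity after assets fall by `asset_drop`, for a balance
    sheet financed at `leverage`-to-1 with debt held fixed.
    A negative result means the bank is insolvent. Numbers are illustrative."""
    equity = assets / leverage    # e.g. 30x leverage -> ~3.3 of equity per 100 of assets
    loss = assets * asset_drop    # e.g. a 4% fall in asset values
    return equity - loss

for lev in (10, 20, 30):
    remaining = equity_after_drop(leverage=lev)
    status = "solvent" if remaining > 0 else "insolvent"
    print(f"{lev}x leverage: equity after a 4% asset drop = {remaining:+.2f} ({status})")
```

At 10x leverage a 4% decline in asset values is painful but survivable; at 30x the same decline leaves the bank insolvent, which is the sense in which leverage converts small prediction errors into systemic events.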

Too Big To Fail is an implicit taxpayer guarantee for banks that privatizes profits and socializes losses. Markets have internalized this guarantee. The judgment that Too Big To Fail banks are, perversely, less risky is reflected in the lower interest rates that creditors demand on loans and deposits. Recent studies estimate that this government protection translates into an $83 billion annual subsidy to the ten largest American banks. This moral hazard rewards irresponsible risk-taking, which management will rationalize ex post facto by claiming that no model could have predicted whatever crash just happened. Being Too Big To Fail means that predictors have no “skin in the game.” In making large bets, they get to keep the upside when their models work, but taxpayer bailouts protect them from market discipline when losses balloon and their possible failure puts the overall economy at risk. To promote an antifragile economic system, bankers must be liable for the complex products they produce, each financial institution must be small enough to fail safely, and the amount of debt-financed leverage in the system overall must be reduced. These are the most urgent stakes obscured by the difficult mathematics of financial risk. Markets will never spontaneously adopt these reforms; only political pressure can force them.

The law and self-driving cars

Robocar accidents (and AI and robotics in general) demand a whole new way of looking at the law. Generally, the law exists to deter and punish bad acts by humans who typically knew the law and knew they were doing something unsafe or nefarious. It is meaningless to punish robots, and punishing the people and companies who make them is awkward too, since they will likely have done everything they could to stay within the law and held no ill will.

If a robocar (or its occupant or maker) ever gets a valid ticket for violating the vehicle code or some other law, it will be a huge event for the team that made it. They'll be surprised, and they'll immediately work to fix whatever flaw caused it. While software updates will not be instantaneous, that fix will soon be downloaded to every vehicle. Competitors will check their own systems to make sure they haven't made the same mistake, and fix them if they have.

As a result, robocars as a group will never get that ticket again.

This is very much unlike the way humans work. When the first human got a ticket for an unsafe lane change, this didn't stop all the other people from making unsafe lane changes. At best, hearing about how expensive the ticket was will put a slight damper on things. Laws exist because humans can't be trusted not to break them, and there must be a means to stop them.

This suggests an entirely different way of looking at the law. Most of the vehicle code exists because humans can't be trusted to follow the general principles behind it -- to stay safe, to not unfairly impede the path of others, and to keep traffic flowing. There are hundreds of millions of drivers, each with their own personality, motives, and regard or disregard for those principles and the law.

In the robocar world, there will probably not be more than a couple of dozen distinct "drivers" in a whole nation. You could literally get the designers of all these systems together in a room, and work out any issues of how to uphold the principles of the road.

Much more here on the legal complexities surrounding self-driving cars. I find the topic more interesting than, say, the social implications of Google Glass. More important, too.

Morality without religion

In his new book, The Bonobo and the Atheist: In Search of Humanism Among the Primates, de Waal challenges the theory that morality comes from religion, arguing that human morality is older than religion and is, in fact, an innate quality. In other words, religion did not give us morality. Religion built onto a pre-existing moral system that governed how our species behaved.

De Waal's argument, which he has been making for years, is strengthened by recent research that is starting to paint a clearer picture of the kind of cognitive processing empathy requires. It turns out that empathy is not as complex as we had imagined, which is why other animals are capable of it as well as humans.

So if being moral is so easy, can we dispense with religion altogether?

From Big Think.

But then anyone who has seen the great The Tree of Life already knew that empathy predated humans. Remember the dinosaur that spared the other dinosaur? Malick knew it before you did.

The great stagnation of parenting

We’ve come a long way, as a species. And we’re better at many things than we ever were before – not just slightly better, but unimaginably, ridiculously better. We’re better at transporting people and objects, we’re better at killing, we’re better at preventing infectious diseases, we’re better at industrial production, agricultural and economic output, we’re better at communications and sharing of information.

But in some areas, we haven’t made such dramatic improvements. And one of those areas is parenting. We’re certainly better parents than our own great-great-grandparents, if we measure by outcomes, but the difference is of degree, not kind. Why is that?

The post includes a couple of theories as to why the labor productivity of parenting has not increased.

If you accept the premise that parenting is difficult to do well no matter how hard you try, it's worth reading the arguments put forth by Bryan Caplan in his book Selfish Reasons to Have More Kids: Why Being a Great Parent is Less Work and More Fun Than You Think, namely that you should chill out a bit and burn yourself out less trying to be a super-parent. You'll be happier and less stressed, and your child will probably turn out the same.

(h/t to Tyler Cowen)