Lobbying: a great (the greatest?) investment

In a striking infographic, United Republic shows why lobbying is so pervasive: it's an unbelievably effective form of spending (for now I'm linking to the NYTimes-hosted version of the infographic, as most of United Republic's pages, including the infographic itself, are 404'ing on me).

As the NYTimes article notes, the ROI on lobbying dwarfs that of any investment an ordinary citizen might hope to capture.

According to statistics United Republic assembled, the prescription drug industry spent $116 million lobbying for legislation to prevent Medicare from bargaining down drug prices — legislation that enabled drug companies to make an additional $90 billion annually. That amounts to an extraordinary 77,500 percent return on investment. Oil companies, in turn, had a return on investment of 5,900 percent, and multinational companies, 22,000 percent.

You're not feeling as hot about those shares of Apple or Google you've held for a few years now, are you?
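The arithmetic behind that headline number is simple enough to check. A quick back-of-the-envelope in Python, using the figures from the quote above and treating the $90 billion as a one-year return on the $116 million spent:

```python
def roi_percent(spent: float, gained: float) -> float:
    """Simple ROI: net gain over cost, expressed as a percentage."""
    return (gained - spent) / spent * 100

drug_lobby_spend = 116_000_000         # $116 million spent lobbying
extra_annual_revenue = 90_000_000_000  # $90 billion in added annual revenue

print(f"{roi_percent(drug_lobby_spend, extra_annual_revenue):,.0f}%")
# -> 77,486%, which rounds to the ~77,500% cited above
```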

In fact, the ROI on lobbying is so astronomically high that Tyler Cowen wonders why politicians don't demand larger bribes from lobbyists, or why companies don't spend more on lobbying. This disparity between the cost of lobbying and its returns is known as the Tullock paradox. It's ironic, isn't it, to ask why government isn't more corrupt than it already is?

In Government's End, Jonathan Rauch predicted this would be the logical wall any democratic government would run into: demosclerosis, or the inability of a democratic government to deal with our deepest problems because motivated lobbyists spend billions fighting to preserve the status quo. In such a situation, only marginal, incremental change is possible.

If you've ever worked in a large corporation, you may also recognize the inertia that comes from entrenched groups defending their turf.

It would be wonderful if we could simplify our tax code, but the prevalence of lobbying makes it unlikely. So many of the odd tax loopholes and shelters and rules are there specifically because some narrow interest group fought to get them into the tax code.

In fact, my variant of the Tullock paradox is why corporations like Apple still have to shelter foreign income from domestic taxes at all. You'd think they'd have lobbied their way to a means of getting that income back home without the IRS laying its hands on much of it at all.

A barbell option strategy for your craft

Taleb employed a “barbell strategy”—that is, two risk extremes with no medium level. He put a big majority of his money in the safest assets he could find, such as treasury bills or cash. The rest he put into what are called way out of the money options—put options that are massively below the current market price of the stock, or call options that are massively above, and are priced as being extremely improbable events. The strategy is to make sure you could lose all of your money each time without getting wiped out, because you only need to be right once. And indeed, in the 1987 crash Taleb made tens of millions of dollars, and in the 2008 crash he did it again.

There is a similar strategy available to those who would devote themselves to a craft. The heart of Taleb’s philosophy is that you should minimize the downside risk to yourself, while maximizing the potential upside. When it comes to a craft, the best way to accomplish this is to prepare yourself for the possibility that all you will get out of it is the enjoyment of doing something well. Meanwhile, you should be putting your work out on the public web in order to make it possible for it to get a lot of attention—but again, only if you can emotionally prepare yourself for the fact that it probably won’t.

By Adam Gurri over at The Umlaut. Lots of good advice within; the economic lens is a fresh one on the old truism to follow one's passion.
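Gurri's point is easier to feel with a toy simulation of one barbell "year." Everything here is an illustrative assumption — the 90/10 split, the 2 percent safe yield, the 5 percent odds of a crash, the 40x option payoff — a sketch of the shape of the strategy, not Taleb's actual trades:

```python
import random

def barbell_year(capital: float, crash_prob: float = 0.05) -> float:
    """One toy year of a barbell: 90% in safe assets, 10% in long-shot options.

    The yields, odds, and payoff multiple are illustrative assumptions.
    """
    safe = capital * 0.90 * 1.02          # safe sleeve earns ~2%
    options = capital * 0.10
    if random.random() < crash_prob:      # rare event: the options pay off hugely
        options *= 40
    else:                                 # usual case: the options expire worthless
        options = 0
    return safe + options

random.seed(0)
results = [barbell_year(100_000) for _ in range(10_000)]
print(f"worst year: ${min(results):,.0f}")   # loss capped at the options sleeve
print(f"best year:  ${max(results):,.0f}")   # upside dominated by the rare payoff
print(f"mean year:  ${sum(results) / len(results):,.0f}")
```

The worst case is a capped, survivable loss; the best case is dominated by the rare payoff. That asymmetry is the whole game.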

A corollary is not living so far beyond one's means that you can't even purchase some options. I'm not sure I agree with Gurri that "modern levels of affluence allows people to work a job the market will pay them for and still have time left over to devote to something they genuinely love," at least not as a rule rather than a privileged exception.

When I left my job at Amazon to go to film school, I was starting over from the bottom rung, but I had the luxury of some savings from having worked a while, which allowed me to focus on filmmaking without having to take out exorbitant loans or wait tables on the side.

But many of my classmates graduated with a crippling debt overhang, and that comes with a real cost, both physical and mental. To be, as Taleb put it, antifragile, you need, as a budding filmmaker, to be able to pay your rent and buy enough ramen to keep yourself somewhat healthy; you need health insurance; you need those things that shield you from catastrophic downside while allowing you some freedom to work on your craft, to purchase those options which might, though the odds are long, cash in.

It sounds so sensible: limit your downside, give yourself a chance at massive upside. And yet we romanticize the story of the long shot who puts it all on the line, has everything to lose, and against the longest of odds achieves massive prosperity. This is, depending on how you look at it, a good thing, giving hope to those who face the longest of odds, or dangerous mythology.

Peer effects and social policy

When illness in one person is treated or prevented, others to whom that person is connected also benefit.

...

This leads to a problem. Taking network effects seriously means that we should value socially connected people more. From a policy perspective—if not a moral perspective—the connected should get more healthcare attention.

More from Nicholas Christakis here (PDF). As Christakis notes, replacing the current healthcare system, which grants implicit privileges to the wealthy, with one that favors the well-networked might be more just, but the notion makes him uncomfortable.

Without even debating the ethics of such a system, I don't think we can measure a person's peer-effect multiplier with anywhere near enough precision today. I have nightmarish visions of an angry mob of people waving their Klout scores at the ER waiting room attendant.
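For what it's worth, the mechanical part of such a triage policy would be trivial; it's the measurement that's nowhere near ready. A toy sketch, using raw degree centrality as a crude stand-in for a peer-effect multiplier (the contact graph is invented, and this is not Christakis's method):

```python
# Toy triage-by-connectedness: rank patients by degree centrality.
contacts = {
    "ana": ["ben", "cho", "dev", "eli"],
    "ben": ["ana", "cho"],
    "cho": ["ana", "ben", "dev"],
    "dev": ["ana", "cho"],
    "eli": ["ana"],
}

# More contacts -> treating this person benefits more bystanders.
by_reach = sorted(contacts, key=lambda p: len(contacts[p]), reverse=True)
for person in by_reach:
    print(person, len(contacts[person]))
# ana goes first; the policy question is whether she should.
```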

The Uncertainty of Risk

The startup economy is an example of an antifragile system rooted in optionality. Venture capitalists power the tech scene by making investments in nascent firms. These upfront costs are relatively small and capped. VC firms cannot lose more than they put in. But since there is no upper limit to success, the investment’s upside is potentially unbounded. Of course, venture capitalists need to make smart bets, but the business model doesn’t require them to be so good at predicting winners as to pick one every time. The payoffs from their few wildly successful investments more than make up for the capital lost to failures. While each startup is individually fragile, the startup economy as a whole is highly antifragile, and predictive accuracy is less important. Since the losses are finite and the gains are practically limitless, the antifragile startup economy benefits overall from greater variability in the success of new firms.

From a book review of Nate Silver's The Signal and the Noise, Nassim Nicholas Taleb's Antifragile, and James Owen Weatherall's The Physics of Wall Street at n+1.
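The optionality argument in that passage is easy to see in a toy model of a fund: capped losses per check, occasional outsized winners. The 10 percent hit rate and 30x payoff below are illustrative assumptions, not industry figures:

```python
import random

def fund_multiple(n_startups: int = 20, check: float = 1.0) -> float:
    """Toy VC fund: each check can go to zero, but a rare hit returns many
    multiples. The 10% hit rate and 30x payoff are illustrative assumptions."""
    total = 0.0
    for _ in range(n_startups):
        if random.random() < 0.10:   # rare winner
            total += check * 30      # big upside; the loss was capped at the check
        # else: the check is lost entirely (adds 0)
    return total / (n_startups * check)  # multiple on invested capital

random.seed(1)
multiples = [fund_multiple() for _ in range(10_000)]
print(f"mean fund multiple: {sum(multiples) / len(multiples):.2f}x")
# ~3x on average: a few 30x hits more than cover the many zeros
```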

As the financial crises of the past three decades have painfully demonstrated, the global banking system is dangerously fragile. Financial institutions are so highly leveraged and so opaquely intertwined that the contagion from a wrong prediction (e.g. that housing prices will continue to rise) can quickly foment systemic crisis as debt payments balloon and asset values shrivel. When the credit markets lock up and vaunted banks are suddenly insolvent, the authorities’ solution has been to shore up underwater balance sheets with cheap government loans. While allowing a few Too Big To Fail banks to use their “toxic assets” as collateral for taxpayer-guaranteed loans makes their individual fiscal positions more robust, all this new debt leaves the market as a whole more fragile, since the financial system is more heavily leveraged and fire-sale mergers consolidate capital and risk into even fewer institutions. These “solutions” to past crises transferred fragility from the individual banks to the overall financial system, creating the conditions for future collapse.

Too Big To Fail is an implicit taxpayer guarantee for banks that privatizes profits and socializes losses. Markets have internalized this guarantee. The judgment that Too Big To Fail banks are, perversely, less risky is reflected in the lower interest rates that creditors demand on loans and deposits. Recent studies estimate that this government protection translates into an $83 billion annual subsidy to the ten largest American banks. This moral hazard rewards irresponsible risk taking, which management will rationalize ex post facto by claiming that no model could have predicted whatever crash just happened. Being Too Big To Fail means that predictors have no “skin in the game.” In making large bets, they get to keep the upside when their models work, but taxpayer bailouts protect them from market discipline when losses balloon and their possible failure put the overall economy at risk. To promote an antifragile economic system, bankers must be liable for the complex products they produce, each financial institution must be small enough to safely fail, and the amount of debt-financed leverage in the system overall must be reduced. These are the most urgent stakes obscured by the difficult mathematics of financial risk. Markets will never spontaneously adopt these reforms; only political pressure can force them.
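That $83 billion estimate is, at bottom, one multiplication. A back-of-the-envelope version — the 0.8-percentage-point funding advantage and the roughly $10 trillion in combined liabilities are my guesses at the inputs behind the studies' figure, not numbers from the review:

```python
# Back-of-the-envelope on the Too Big To Fail subsidy. Both inputs are
# illustrative assumptions about what could produce the quoted $83B figure.
funding_advantage = 0.008      # 0.8 percentage points cheaper borrowing
big_ten_liabilities = 10.4e12  # combined liabilities of the ten largest banks

subsidy = funding_advantage * big_ten_liabilities
print(f"${subsidy / 1e9:,.0f}B per year")  # -> ~$83B
```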

The law and self-driving cars

Robocar accidents (and AI and robotics in general) bring a whole new way of looking at the law. Generally, the law exists to deter and punish bad activity done by humans who typically knew the law, and knew they were doing something unsafe or nefarious. It is meaningless to punish robots, but in punishing the people and companies who make them, it will likely be the case that they did everything they could to stay within the law and held no ill will.

If a robocar (or its occupant or maker) ever gets a correct ticket for violating the vehicle code or other law, this will be a huge event for the team who made it. They'll be surprised, and they'll immediately work to fix whatever flaw caused that to happen. While software updates will not be instantaneous, soon that fix will be downloaded to all vehicles. All competitors will check their own systems to make sure they haven't made the same mistake, and they will also fix things if they need to.

As such, all robocars, as a group, will never get the same ticket again.

This is very much unlike the way humans work. When the first human got a ticket for an unsafe lane change, this didn't stop all the other people from making unsafe lane changes. At best, hearing about how expensive the ticket was will put a slight damper on things. Laws exist because humans can't be trusted not to break them, and there must be a means to stop them.

This suggests an entirely different way of looking at the law. Most of the vehicle code is there because humans can't be trusted to follow the general principles behind the code -- to stay safe, and to not unfairly impede the path of others and keep traffic flowing. There are hundreds of millions of drivers, each with their own personalities, motives and regard or disregard for those principles and the law.

In the robocar world, there will probably not be more than a couple of dozen distinct "drivers" in a whole nation. You could literally get the designers of all these systems together in a room, and work out any issues of how to uphold the principles of the road.

Much more here on the legal complexities surrounding self-driving cars. I find the topic more interesting than, say, the social implications of Google Glass. More important, too.