So I wrote down the simplest model I could think of — a model too simple to give useful numerical cutoffs, but still a starting point — and I learned something surprising. Namely (at least in this very simple model), the harsher the prospective punishment, the laxer you should be about reasonable doubt. Or to say this another way: When the penalty is a year in jail, you should vote to convict only when the evidence is very strong. When the penalty is 50 years, you should vote to convict even when it’s pretty weak.
(The standard here for what you “should” do is this: When you lower your standards, you increase the chance that Mr. or Ms. Average will be convicted of a crime, and lower the chance that the same Mr. or Ms. Average will become a crime victim. The right standard is the one that balances those risks in the way that Mr. or Ms. Average finds the least distasteful.)
Here (I think) is what’s going on: A weak penalty has very little deterrent effect — so little that it’s not worth convicting an innocent person over. But a strong penalty can have such a large deterrent effect that it’s worth tolerating a lot of false convictions to get a few true ones.
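Landsburg's trade-off can be made concrete with a toy model. Everything below is my own illustrative guess, not Landsburg's actual equations: the functional forms, the numbers, and especially the assumption that an innocent person's exposure to wrongful conviction scales with the crime rate (more crime means more trials, and more trials mean more jeopardy for the innocent).

```python
import math

def expected_cost(threshold, penalty, victim_cost=5.0):
    """Mr. or Ms. Average's expected harm at a given evidence threshold.

    Toy functional forms, invented for illustration; not Landsburg's
    actual model.
    """
    # A guilty defendant clears a higher evidence bar less often.
    p_convict_guilty = 1.0 - 0.9 * threshold
    # Deterrence: crime falls exponentially in the expected punishment.
    crime_rate = math.exp(-0.05 * p_convict_guilty * penalty)
    # An innocent suspect's conviction risk per trial; exposure to
    # trials (and thus to wrongful conviction) scales with crime_rate.
    p_wrongful = 0.5 * (1.0 - threshold)
    return crime_rate * (p_wrongful * penalty + victim_cost)

def best_threshold(penalty):
    """Grid-search the evidence standard that minimizes expected harm."""
    grid = [i / 100 for i in range(101)]
    return min(grid, key=lambda t: expected_cost(t, penalty))
```

In this toy model the cost-minimizing standard drops as the penalty grows: `best_threshold(1)` lands at the strictest evidence bar, while `best_threshold(50)` lands at the laxest, reproducing the counter-intuitive direction of Landsburg's result.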
Steven Landsburg lands on a counter-intuitive conclusion: the harsher the punishment, the lower your standard for conviction should be.
It seems as if Landsburg's model argues for convicting anyone who surpasses some lowered threshold of evidence for a crime: if several people each seem like they could have committed it, convict all of them, even though only one of them can actually be guilty. Perhaps I'm misunderstanding the implications; others can help verify Landsburg's model.
Also, how often are there N people who all seem equally guilty of a crime? I'm at a disadvantage here in not having seen Making a Murderer, but perhaps Landsburg's model applies as well to that case as it does to Serial Season 1.
Let's broaden the conversation and bring in Alex Tabarrok, discussing one area in which fellow economist Gary Becker may have been wrong.
Becker isn’t here to defend himself on the particulars of that evening but you can see the idea in his great paper, Crime and Punishment: An Economic Approach. In a famous section he argues that an optimal punishment system would combine a low probability of being punished with a high level of punishment if caught:
If the supply of offenses depended only on pf — offenders were risk neutral — a reduction in p “compensated” by an equal percentage increase in f would leave unchanged pf…
…an increased probability of conviction obviously absorbs public and private resources in the form of more policemen, judges, juries, and so forth. Consequently, a “compensated” reduction in this probability obviously reduces expenditures on combating crime, and, since the expected punishment is unchanged, there is no “obvious” offsetting increase in either the amount of damages or the cost of punishments. The result can easily be continuous political pressure to keep police and other expenditures relatively low and to compensate by meting out strong punishments to those convicted.
We have now tried that experiment and it didn’t work. Beginning in the 1980s we dramatically increased the punishment for crime in the United States but we did so more by increasing sentence length than by increasing the probability of being punished. In theory, this should have reduced crime, reduced the costs of crime control and led to fewer people in prison. In practice, crime rose and then fell mostly for reasons other than imprisonment. Most spectacularly, the experiment with greater punishment led to more spending on crime control and many more people in prison.
Why did the experiment fail? Longer sentences didn’t reduce crime as much as expected because criminals aren’t good at thinking about the future; criminal types have problems forecasting and they have difficulty regulating their emotions and controlling their impulses. In the heat of the moment, the threat of future punishment vanishes from the calculus of decision. Thus, rather than deterring (much) crime, longer sentences simply filled the prisons. As if that weren’t bad enough, by exposing more people to criminal peers and by making it increasingly difficult for felons to reintegrate into civil society, longer sentences increased recidivism.
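Becker's compensated-reduction claim, and Tabarrok's objection that impulsive offenders heavily discount far-off prison years, can both be sketched in a few lines. The discounting model below is my own illustrative assumption, not a formula from either economist:

```python
def expected_punishment(p, f):
    """Becker's risk-neutral offender cares only about the product p*f."""
    return p * f

def perceived_punishment(p, sentence_years, delta=0.7):
    """An impulsive offender who discounts each successive year in
    prison by a factor of delta (illustrative assumption)."""
    discounted_years = sum(delta ** t for t in range(sentence_years))
    return p * discounted_years

# Compensated reduction: halve the catch probability, double the sentence.
becker_before = expected_punishment(0.5, 10)   # 5.0
becker_after = expected_punishment(0.25, 20)   # 5.0 -- unchanged, per Becker

# For a present-biased offender the same swap weakens deterrence,
# because years 11 through 20 are worth little to them today.
felt_before = perceived_punishment(0.5, 10)
felt_after = perceived_punishment(0.25, 20)
```

With delta below 1, the extra decade adds almost nothing to the sentence's felt weight, so `felt_after` comes out well below `felt_before`: perceived deterrence leans on p, not f, which is Tabarrok's point.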
It's a great post by Tabarrok. He does give Becker, one of my economics idols, credit.
Let’s give Becker and rational choice theory their due. When Becker first wrote, many criminologists were flat out denying that punishment deterred. As late as 1994, for example, the noted criminologist David Bayley could write:
The police do not prevent crime. This is one of the best kept secrets of modern life. Experts know it, the police know it, but the public does not know it. Yet the police pretend that they are society’s best defense against crime. This is a myth.
Inspired by Becker, a large, credible, empirical literature–including my own work on police (and prisons)–has demonstrated that this is no myth, the police deter. Score one for rational choice theory. It’s a far cry, however, from “police deter” to “twenty years in prison deters twice as much as ten years in prison.” The rational choice theory was pushed beyond its limits, and in so doing not only was punishment pushed too far, we also lost sight of alternative policies that could reduce crime without the social disruption and injustice caused by mass incarceration.
The problem with annual reviews in companies is not necessarily the annual review process itself but the lack of immediate feedback between those reviews. The most useful thing I learned from the 10,000-hour rule wasn't that you need 10,000 hours to become an expert; it was that people improve with deliberate practice when feedback on their work is immediate.
For effective parenting and coaching, shorten the time between performance and feedback, and be consistent.