The persistence of poverty

A long-standing economics puzzle is why people who are less well off engage in behaviors that cement them in that condition: dropping out of school, doing drugs, committing crimes, having children in their teen years. By the law of diminishing marginal utility, the benefits of going to college, for example, should be worth far more to someone in poverty than to someone who is well off.

Charles Karelis believes it's because our economic models of poverty are incorrect.

When we're poor, Karelis argues, our economic worldview is shaped by deprivation, and we see the world around us not in terms of goods to be consumed but as problems to be alleviated. This is where the bee stings come in: A person with one bee sting is highly motivated to get it treated. But a person with multiple bee stings does not have much incentive to get one sting treated, because the others will still throb. The more of a painful or undesirable thing one has (i.e. the poorer one is) the less likely one is to do anything about any one problem. Poverty is less a matter of having few goods than having lots of problems.
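Karelis' bee-sting logic can be sketched numerically. The toy pain function below is my own construction, not his: I assume total pain is concave in the number of stings, so each additional sting hurts, but less than the one before.

```python
import math

def total_pain(stings: int) -> float:
    """Toy concave pain function: each extra sting adds pain,
    but less than the one before (diminishing marginal pain)."""
    return math.sqrt(stings)

def relief_from_treating_one(stings: int) -> float:
    """Pain reduction from treating exactly one sting."""
    return total_pain(stings) - total_pain(stings - 1)

# With one sting, treating it removes all the pain.
print(relief_from_treating_one(1))   # 1.0

# With ten stings, treating one barely helps, so the incentive
# to act on any single problem is much weaker.
print(relief_from_treating_one(10))  # ~0.162
```

Under this assumption, the person with one problem gets over six times more relief from fixing it than the person with ten problems gets from fixing any one of theirs — which is the incentive reversal Karelis describes.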

Poverty and wealth, by this logic, don't just fall along a continuum the way hot and cold or short and tall do. They are instead fundamentally different experiences, each working on the human psyche in its own way. At some point between the two, people stop thinking in terms of goods and start thinking in terms of problems, and that shift has enormous consequences. Perhaps, he suggests, because economists are by and large well off, they've failed to see the shift at all.

If Karelis is right, antipoverty initiatives championed all along the ideological spectrum are unlikely to work - from work requirements, time-limited benefits, and marriage and drug counseling to overhauling inner-city education and replacing ghettos with commercially vibrant mixed-income neighborhoods. It also means, Karelis argues, that at one level economists and poverty experts will have to reconsider scarcity, one of the most basic ideas in economics.

Karelis' theory has interesting implications for welfare programs. Rather than robbing the poor of their motivation to work (the primary concern of many welfare critics), welfare programs can shrink the list of problems the poor face, creating a greater incentive to work.

Many of Karelis' ideas are based on intuition rather than data, so his work has drawn its share of criticism. His ideas are covered in depth in his book The Persistence of Poverty: Why the Economics of the Well-Off Can't Help the Poor.

Does the marginal utility curve slope the other way in poverty? The idea is an interesting one. I'm reminded of something I once heard that has always felt true: being rich doesn't necessarily make you happy, but being poor can make you unhappy. Karelis' idea deserves more empirical stress testing.

ADDENDUM: Professor Karelis wrote me a note after reading this post. I'll tack it on here at the bottom. I agree that the idea that those in painful situations might become more risk-loving rather than risk-averse feels very intuitive. I don't think you need to have lived in poverty to understand the impulse, either. Anyone who has taken a few bad beats at the poker or blackjack table and then started pressing has hoisted themselves off the optimal risk-reward curve in a fit of emotion.

Thanks for blogging my book on poverty. I couldn't figure out how to comment on your post so am trying this route. There has been empirical work supporting my theory. Here is one reference, from October 2010 journal Frontiers in Neuroscience. Experimental subjects were (as I predicted, without knowing about the lab work) risk loving when they started in pain and were confronted with the choice of remaining in their original state and taking a bet that would either alleviate their pain by a certain amount or make it worse by that amount. I have to say I consider that pretty obvious and unsurprising, and as I argued in my book it has only escaped economists because the accidents of intellectual history caused them to pose the question in a misleading way. 
Regards, Charles

The 130 million pixel camera

We all have them. Forget Apple's; the original retina display is still the best: the human eye.

The article is fascinating throughout. For example, the focal length of the lens that best approximates human vision is not 50mm, as is commonly supposed, but 43mm, with an aperture of roughly f/3.2 to f/3.5. Because the human retina is curved, it is sharper in the corners than a camera sensor, which is flat, leaving its corners farther from the lens than its center. And of the human eye's roughly 130 million pixels, only 6 million see color.
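The 43mm figure lines up with a standard photography rule of thumb: a "normal" lens has a focal length roughly equal to the sensor's diagonal. A quick check, assuming a full-frame 36mm x 24mm sensor:

```python
import math

# A "normal" lens has a focal length about equal to the sensor diagonal.
# For a full-frame 36mm x 24mm sensor:
width, height = 36.0, 24.0
diagonal = math.hypot(width, height)
print(f"sensor diagonal: {diagonal:.1f} mm")  # ~43.3 mm

# Diagonal field of view for a 43mm lens on that sensor:
focal = 43.0
fov = 2 * math.degrees(math.atan(diagonal / (2 * focal)))
print(f"diagonal field of view at {focal:.0f}mm: {fov:.1f} degrees")
```

So 43mm is almost exactly the frame's 43.3mm diagonal, which is why it reads as the most natural perspective.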

We are still waiting for some new type of connector or bus that will allow us to use retina displays larger than those on MacBook Pros today. The amount of data to transmit exceeds what existing Thunderbolt connectors can carry.
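A rough back-of-the-envelope shows the gap. The 24-bit color depth and 60 Hz refresh rate below are my assumptions for an uncompressed stream, not figures from the article:

```python
# Rough bandwidth needed to drive an eye-resolution 130-megapixel display,
# assuming 24 bits per pixel and a 60 Hz refresh rate, uncompressed.
pixels = 130e6
bits_per_pixel = 24
refresh_hz = 60

gbps = pixels * bits_per_pixel * refresh_hz / 1e9
print(f"{gbps:.0f} Gbit/s needed")  # ~187 Gbit/s

# An original Thunderbolt channel carries 10 Gbit/s, so an uncompressed
# stream would need nearly 19 of them.
print(f"{gbps / 10:.1f}x a 10 Gbit/s Thunderbolt channel")
```

Even allowing for display stream compression, that's more than an order of magnitude beyond the ports of the day.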

So how does your brain deal with 130 million pixels of information being thrown at it in a constant stream? The answer is it doesn't.

The subconscious brain also rejects a lot of the incoming bandwidth, sending only a small fraction of its data on to the conscious brain. You can control this to some extent: for example, right now your conscious brain is telling the lateral geniculate nucleus “send me information from the central vision only, focus on those typed words in the center of the field of vision, move from left to right so I can read them”. Stop reading for a second and without moving your eyes try to see what’s in your peripheral field of view. A second ago you didn’t “see” that object to the right or left of the computer monitor because the peripheral vision wasn’t getting passed on to the conscious brain.

If you concentrate, even without moving your eyes, you can at least tell the object is there. If you want to see it clearly, though, you’ll have to send another brain signal to the eye, shifting the cone of visual attention over to that object. Notice also that you can’t both read the text and see the peripheral objects — the brain can’t process that much data.

The brain isn’t done when the image has reached the conscious part (called the visual cortex). This area connects strongly with the memory portions of the brain, allowing you to ‘recognize’ objects in the image. We’ve all experienced that moment when we see something, but don’t recognize what it is for a second or two. After we’ve recognized it, we wonder why in the world it wasn’t obvious immediately. It’s because it took the brain a split second to access the memory files for image recognition. (If you haven’t experienced this yet, just wait a few years. You will.)

ADDENDUM: The way human vision works, always putting the center of your vision in focus and blurring the edges so as to avoid overwhelming your brain with data, is somewhat replicated in form by these hyperphotos. That is, you are presented a photo with some baseline of resolution, but as you drill in on particular sections, the photo zooms and increases the resolution.

Acquerello carnaroli rice

If you are making risotto, accept no substitutes for Acquerello carnaroli rice. Many use arborio rice, but carnaroli holds its shape better, and Acquerello ages its rice for a year and then seals it in a can, away from moisture, until you are ready to use it.

In true risotto, you should taste the integrity of each rice grain. Stirring too vigorously shatters grains and produces porridge. Not bad, but not risotto.

I learned this and some other useful tips in a cooking class with Chef Thomas McNaughton at Flour + Water on Monday night. I made the risotto again tonight, and it came out great. It's an easy dinner-party centerpiece, as the preparation is not strenuous, and you and your guests can sip some wine while you stir.

Baumol's Cost Disease

It may not seem like an honor to have a term like "cost disease" named after you, but William Baumol's new book The Cost Disease is one of the more concise, enlightening economics books I've read recently.

Baumol's thesis is that certain service sectors, most notably healthcare and education, are doomed to see their costs outpace inflation because they are so dependent on labor.
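The mechanism can be sketched with a toy two-sector model. The numbers here are mine, for illustration only: manufacturing productivity grows 2% a year, service productivity stays flat, and wages in both sectors rise with manufacturing productivity because workers can move between sectors.

```python
# Toy two-sector sketch of Baumol's cost disease.
years = 30
wage = 1.0          # common wage across sectors, indexed to 1.0
mfg_output = 1.0    # units per worker-hour in manufacturing
svc_output = 1.0    # units per worker-hour in services (static)

for year in range(years):
    mfg_output *= 1.02  # manufacturing productivity grows 2%/year
    wage *= 1.02        # wages track the growing sector

mfg_unit_cost = wage / mfg_output  # stays at 1.0
svc_unit_cost = wage / svc_output  # rises with the wage

print(f"after {years} years:")
print(f"  manufacturing unit cost: {mfg_unit_cost:.2f}")  # 1.00
print(f"  service unit cost:       {svc_unit_cost:.2f}")  # ~1.81
```

Nothing in the service sector got worse; its costs rose roughly 80% purely because productivity elsewhere pulled wages up.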

Is there hope for education costs coming down? Will Harvard and other universities with massive endowments decide to subsidize higher education? Unlikely. But disruption tends not to come from incumbents, so we wouldn't look there anyway.

Perhaps education cost disruption comes from something like online education. As Alex Tabarrok writes, online education gives teachers massive leverage.

In 2009, I gave a TED talk on the economics of growth. Since then my 15 minute talk has been watched nearly 700,000 times. That is far fewer views than the most-watched TED talk, Ken Robinson’s 2006 talk on how schools kill creativity, which has been watched some 26 million times. Nonetheless, the 15 minutes of teaching I did at TED dominates my entire teaching career: 700,000 views at 15 minutes each is equivalent to 175,000 student-hours of teaching, more than I have taught in my entire offline career.[1] Moreover, the ratio is likely to grow because my online views are increasing at a faster rate than my offline students.
Teaching students 30 at a time is expensive and becoming relatively more expensive. Teaching is becoming relatively more expensive for the same reason that butlers have become relatively more expensive–butler productivity increased more slowly than productivity in other fields, so wages for butlers rose even as their output stagnated; as a result, the opportunity cost of butlers increased. The productivity of teaching, measured in, say, kilobytes transmitted from teacher to student per unit of time, hasn’t increased much. As a result, the opportunity cost of teaching has increased, an example of what's known as Baumol’s cost disease. Teaching has remained economic only because the value of each kilobyte transmitted has increased due to discoveries in (some) other fields. Online education, however, dramatically increases the productivity of teaching. As my experience with TED indicates, it’s now possible for a single professor to teach more students in an afternoon than was previously possible in a lifetime.
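Tabarrok's back-of-the-envelope arithmetic checks out:

```python
# Tabarrok's calculation: views x talk length = student-hours of teaching.
views = 700_000
talk_minutes = 15

student_hours = views * talk_minutes / 60
print(f"{student_hours:,.0f} student-hours")  # 175,000

# For scale: delivering the same student-hours in one-hour lectures
# to 30 students at a time would take this many lectures.
lectures = student_hours / 30
print(f"{lectures:,.0f} one-hour lectures to 30 students")
```

That's over 5,800 conventional lectures packed into one fifteen-minute recording, which is the productivity leverage he's describing.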

I don't think online universities will ever adequately replace attending a university in person (socialization, live human feedback, and the signaling value of a degree from an actual university all have tangible value), but I've taken several online courses, and for certain subject matters they are more than adequate at transmitting knowledge.

Baumol argues that we shouldn't panic as much about rising costs in healthcare and education because we're saving a lot of money in areas that aren't as dependent on labor, but that doesn't mean we shouldn't take a hard look at how to keep costs down in both.

In healthcare, consumers are so removed from the actual cost side of the equation that it doesn't function much like an efficient marketplace at all. I see the doctor and pay my $15 copay, and then when the bill comes I have no idea whether I was given a good deal or not; I just hope my insurance covers as much of it as possible.

As for the value of what I receive from the healthcare industry, it's extremely difficult to gauge. Early in life, what I want tends to be very binary: cure my sinus infection, fix my broken leg, reconstruct my ACL. At the end of my life, the value equation shifts dramatically; still difficult to value, but in a completely different way. How do you quantify the value of an additional month of life? An additional year? Three years? And can you assign the proper amount of credit to the physician for that extra time?

One reason Baumol's cost disease is more prevalent than it would otherwise be is that wages tend to be sticky. This was hammered home recently in the story of Hostess, which, in the face of declining sales, asked its workers' unions to accept pay cuts, which the unions refused. That the executives had awarded themselves pay raises, or that the root of the issue is really that no one eats Ding Dongs and Twinkies anymore, doesn't negate the point: wages rarely go down in the real world, as they might in a truly efficient marketplace. I've never been at a company where employees were asked to take a pay cut. Generally companies just freeze the salaries of low performers, and that's enough of a signal that folks move on.

I find new-fangled labor marketplaces like TaskRabbit and Zaarly intriguing mostly as economic experiments in true wage elasticity. Companies don't generally ask people to work for lower wages; that's a bad signal in the recruiting marketplace. Instead, they try to squeeze more work out of people at the same salary, which is a more subtle approach.

Low-end disruption in the tech labor marketplace can happen, though it's most likely when initiated by the laborers themselves. In practice, we call these people interns.