Respecting the preferences of the poor

One feature, in particular, stands out. The life of the rural poor is extremely boring, with repetitive back-breaking tasks interrupted by periods of enforced idleness; it is far removed from Marie-Antoinettish idylls of Arcadia. As the authors remark, villages do not have movie theatres, concert halls, places to sit and watch interesting strangers go by and frequently not even a lot of work. This may sound rather demeaning to the poor, like Marx's comment about “the idiocy of rural life”.
 
But it is important to understand because, as the authors remark, “things that make life less boring are a priority for the poor”. They tell the story of meeting a Moroccan farmer, Oucha Mbarbk. They ask him what he would do if he had a bit more money. Buy some more food, came the reply. What would he do if he had even more money? Buy better, tastier food. “We were starting to feel very bad for him and his family when we noticed a television, a parabolic antenna and a DVD player.” Why had he bought all this if he didn't have enough money for food? “He laughed and said ‘Oh, but television is more important than food.'”
 
Nutritionists and aid donors often forget this. To them, it is hard to imagine anything being more important than food. And the poorer you are, surely, the more important food must be. So if people do not have enough, it cannot be because they have chosen to spend the little they have on something else, such as a television, a party, or a wedding. Rather it must be because they have nothing and need help. Yet well-intentioned programmes often break down on the indifference of the beneficiaries. People don't eat the nutritious foods they are offered, or take their vitamin supplements. They stick with what makes life more bearable, even if it is sweet tea and DVDs.
 

From a piece at the Economist kicking off a discussion of the book Poor Economics by Abhijit Banerjee and Esther Duflo. When people throw around the phrase “first world problem” the presumption is that the poor are so many rungs down on Maslow's Hierarchy of Needs that they couldn't possibly have the mindshare to contemplate such a frivolous dilemma. In fact, though, the marginal value of something we consider frivolous may be greater for the poor than for the wealthy. This has been one of the greatest breakthroughs in my understanding of the poor and how they think about where to allocate their next dollar.

I've written about this previously in The Psychological Poverty Trap and The Persistence of Poverty. The latter looked at the work of Charles Karelis, who believes our economic models of the poor are broken.

When we're poor, Karelis argues, our economic worldview is shaped by deprivation, and we see the world around us not in terms of goods to be consumed but as problems to be alleviated. This is where the bee stings come in: A person with one bee sting is highly motivated to get it treated. But a person with multiple bee stings does not have much incentive to get one sting treated, because the others will still throb. The more of a painful or undesirable thing one has (i.e. the poorer one is) the less likely one is to do anything about any one problem. Poverty is less a matter of having few goods than having lots of problems.
 
Poverty and wealth, by this logic, don't just fall along a continuum the way hot and cold or short and tall do. They are instead fundamentally different experiences, each working on the human psyche in its own way. At some point between the two, people stop thinking in terms of goods and start thinking in terms of problems, and that shift has enormous consequences. Perhaps because economists, by and large, are well-off, he suggests, they've failed to see the shift at all.
 
If Karelis is right, antipoverty initiatives championed all along the ideological spectrum are unlikely to work - from work requirements, time-limited benefits, and marriage and drug counseling to overhauling inner-city education and replacing ghettos with commercially vibrant mixed-income neighborhoods. It also means, Karelis argues, that at one level economists and poverty experts will have to reconsider scarcity, one of the most basic ideas in economics.
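To make the bee-sting logic concrete, here is a toy sketch of my own, not Karelis' actual model: assume total discomfort grows with the number of problems, but concavely, so that once you're already miserable, one more problem barely registers. The square-root curve below is an arbitrary choice with that shape.

```python
# Toy illustration (my own formalization, not Karelis' model): total discomfort
# grows concavely with the number of problems, so the relief from fixing exactly
# one problem shrinks as problems pile up.
import math

def discomfort(problems: int) -> float:
    # Arbitrary concave curve: each additional problem adds less extra misery.
    return math.sqrt(problems)

def relief_from_fixing_one(problems: int) -> float:
    # How much better life gets if you solve exactly one of your problems.
    return discomfort(problems) - discomfort(problems - 1)

for p in (1, 2, 6, 20):
    print(f"{p:>2} problems -> relief from fixing one: {relief_from_fixing_one(p):.2f}")

# 1 problem   -> 1.00  (fixing it makes you whole: huge incentive)
# 2 problems  -> 0.41
# 6 problems  -> 0.21
# 20 problems -> 0.11  (fixing one barely moves the needle)
```

Any curve with that shape gives the same qualitative result: the more problems you have, the less any single fix is worth to you, which is exactly the quoted point about poverty being a matter of having lots of problems rather than few goods.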
 

Karelis' thinking is summarized in his book The Persistence of Poverty: Why the Economics of the Well-Off Can't Help the Poor.

Karelis' ideas are one possible explanation for the effectiveness of Sam Tsemberis' approach to solving chronic homelessness. Pathways to Housing, the organization Tsemberis founded, believes in giving the homeless housing first, no strings attached, rather than forcing them to jump through a series of hoops before they qualify.

Housing First was developed to serve the chronically homeless who suffer from serious psychiatric disabilities and addictions. Traditionally, the chronically homeless live in a cycle of surviving on the street, being admitted to hospitals, shelters, or jails and then going back to the street. The stress of surviving each day in this cycle puts a tremendous amount of pressure on the individual’s psychiatric and physical health. “Living in the street,” one Pathways to Housing client said, “it makes you crazy.”
 
The traditional structures in place to “help” the homeless population often make things worse, particularly for those who suffer from mental illness. Shelters and transitional living programs often require people to pass sobriety tests and other hurdles before they can be considered for housing programs. Housing is considered a reward for good behavior instead of a tool to help stabilize a homeless person’s mental health. This attitude cuts out the people who need the support the most, effectively punishing them for their conditions.
 

Respecting the preferences of the poor means understanding that the logic behind many of their purchase decisions may be very rational under a happiness-maximization framework. That we judge them to be otherwise is more a failure of empathy than anything else.

Science is hard

Taken together, headlines like these might suggest that science is a shady enterprise that spits out a bunch of dressed-up nonsense. But I’ve spent months investigating the problems hounding science, and I’ve learned that the headline-grabbing cases of misconduct and fraud are mere distractions. The state of our science is strong, but it’s plagued by a universal problem: Science is hard — really fucking hard.
 
If we’re going to rely on science as a means for reaching the truth — and it’s still the best tool we have — it’s important that we understand and respect just how difficult it is to get a rigorous result. I could pontificate about all the reasons why science is arduous, but instead I’m going to let you experience one of them for yourself. Welcome to the wild world of p-hacking.
 

A very important piece at 538.com on p-values and the likely prevalence of p-hacking.

The p-value reveals almost nothing about the strength of the evidence, yet a p-value of 0.05 has become the ticket to get into many journals. “The dominant method used [to evaluate evidence] is the p-value,” said Michael Evans, a statistician at the University of Toronto, “and the p-value is well known not to work very well.”
 
...
 
But that doesn’t mean researchers are a bunch of hucksters, a la LaCour. What it means is that they’re human. P-hacking and similar types of manipulations often arise from human biases. “You can do it in unconscious ways — I’ve done it in unconscious ways,” Simonsohn said. “You really believe your hypothesis and you get the data and there’s ambiguity about how to analyze it.” When the first analysis you try doesn’t spit out the result you want, you keep trying until you find one that does. (And if that doesn’t work, you can always fall back on HARKing — hypothesizing after the results are known.)
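To see how easily that ambiguity turns into false findings, here is a minimal simulation sketch of my own (the 538 piece has its own interactive widget): the treatment has no real effect, but an analyst who tests five candidate outcomes and reports the most flattering p-value clears the 0.05 bar far more often than 5% of the time. The sample size and the number of outcomes are arbitrary choices for illustration.

```python
# Minimal p-hacking sketch (illustrative only): the treatment has NO real effect,
# but the analyst tests five candidate outcomes and keeps the smallest p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def hacked_study(n=100, n_outcomes=5):
    # Both groups are drawn from the same distribution, so any "effect" is noise.
    treatment = rng.normal(size=(n, n_outcomes))
    control = rng.normal(size=(n, n_outcomes))
    pvals = [stats.ttest_ind(treatment[:, k], control[:, k]).pvalue
             for k in range(n_outcomes)]
    return min(pvals)  # report only the most flattering analysis

trials = 2000
false_positive_rate = sum(hacked_study() < 0.05 for _ in range(trials)) / trials
print(f"'Significant' findings from pure noise: {false_positive_rate:.0%}")
# Expect roughly 1 - 0.95**5, about 23%, not the nominal 5%.
```

Nobody in that sketch fabricates anything; they just keep looking until the data cooperates, which is the unconscious version Simonsohn describes.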
 

The larger lessons apply not just to science. Journalism is hard, especially investigative journalism. You can spend months reporting a piece only to find no real striking narrative, no clear conclusions of note. And yet, if you have to fill a certain number of pages every day...

In tech, really successful and/or counterintuitive A/B test results are passed around like koans. However, anyone who has done enough A/B testing in the tech world knows that most experiments show no statistically significant results. To design a test that won't show the obvious and that will reveal some hidden truth is not easy.
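Here is a back-of-the-envelope example, with made-up numbers, of why so many honestly run tests come back inconclusive: suppose variant B genuinely lifts conversion from 5% to 5.25% (a 5% relative lift, a good outcome in practice) and each arm gets 10,000 visitors. A standard two-proportion z-test can't distinguish that real lift from noise at this scale.

```python
# Hypothetical A/B test: variant B truly converts better (5.25% vs 5.00%),
# but with 10,000 visitors per arm the difference doesn't come close to p < 0.05.
import numpy as np
from scipy.stats import norm

conversions = np.array([525, 500])      # variant B, variant A
visitors = np.array([10_000, 10_000])

rates = conversions / visitors
pooled = conversions.sum() / visitors.sum()
se = np.sqrt(pooled * (1 - pooled) * (1 / visitors[0] + 1 / visitors[1]))
z = (rates[0] - rates[1]) / se
p_value = 2 * norm.sf(abs(z))           # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.2f}")  # roughly z ≈ 0.80, p ≈ 0.42
```

Detecting a lift that small with decent power takes on the order of a hundred thousand visitors per arm, which is part of why most experiments honestly read as “no significant difference.”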

All data suggests most of us should hold our unproven beliefs more loosely than we're inclined to. I don't know who first came up with the saying “Strong opinions, weakly held” (sometimes “loosely” is substituted), but most of us are good at the first half and not so good at the second, a dangerous combination when it turns out that truth is low yield.

Some things might help. One is a reference of sorts: a collection of links to all the studies that have tried to answer a particular question, along with a summary of the current state of thinking. For example, does drinking a glass of red wine a day improve your health? Why are Americans obese? Does eating a multivitamin every day really do anything for your health? What's the best exercise to improve core strength? And so on. Imagine something like the genetic offspring of Vox, Wikipedia, and Richard Feynman.

Another is something like Github but for research data from all these studies. 538's small experiment widget in this piece was a simplified example of the type of tool that might enable more people to get experience and a deeper understanding of the craft of designing studies and the slippery nature of truth. Also, the more people that can analyze a data set, the greater the likelihood that biases of different types balance each other out and that mistakes are caught. Strong hypotheses can often lead one to control for the very variable that explains a result.

The web is so sprawling, information so infinite now, we need more structured ways to traverse it intelligibly. It's no coincidence one of the words that's entered our vocabulary this past year is “explainer” (here is an explainer on the term explainer). We have so much flow, we need more stock.

Damning with painterly praise

That’s the kind of widget The Man From U.N.C.L.E. is: so good it’s practically defective.
 
This wasn’t something I wanted to see. The posters promise Armie Hammer and Henry Cavill, two actors who are like day-old bread. You practically have to give them away. They look like they’ve been attacked by a stylist from the fall issue of any men’s magazine. Down at the bottom of the poster, standing in front of an Aston Martin and looking like a flight attendant vacationing in a Paul Bowles novel, is Alicia Vikander, a Swede who’ll be shoved in our faces until we love her. Also, and not for nothing: This is a remake of a spy show that ran for four seasons on NBC near the height of the Cold War, a film version of which has been failing to launch for decades. Exactly no one was asking for this.
 
So it’s a surprise to discover that the bar for this movie is low enough to conga under. Ritchie has gotten everyone to agree not to take any of this seriously, including the person responsible for keeping an eye on Hammer’s Russian accent.
 

Wesley Morris on The Man from U.N.C.L.E. Here he is in the same article on a movie I've never heard of, Cop Car.

From the car emerges Kevin Bacon, with a graying, rusty mustache and nary a line of dialogue, looking every bit the hypothetical adult outcome of Sam Elliott’s decision to do in vitro with himself.
 

When Grantland first arrived on the scene, I could read every article published on the site. Today I can barely keep up with a fraction of what's there, but it's still a wellspring of great writing.

Unfortunately, I have a sinking feeling that despite its popularity, new media economics put Grantland in no man's land: not targeted enough to be a one-man niche, not large enough to collect enough tax revenue to survive as an independent country. In barbell economics, the one in the middle is left holding a lot of weight.

I take solace in the fact that most of the talent there will always find work.

The paradox of writing

Rebecca Mead with a beautiful piece on the movie The End of the Tour, based on the book Although Of Course You End Up Becoming Yourself, about a road trip writer David Lipsky took with David Foster Wallace when Lipsky was working on a profile for Rolling Stone. Emphasis mine; that last sentence is so gorgeous I can't stop reading it over and over.

The movie ends before the article can appear in Rolling Stone, so the relationship between Wallace and Lipsky that it represents is all preamble, no aftermath. And, in fact, the proposed article didn’t ever appear in Rolling Stone: according to Lipsky, in the afterword of his book, Wenner changed his mind about wanting it before it was even written. It was not until after Wallace’s suicide, in 2008, that Lipsky wrote up his notes into a long, award-winning article about the author; his book, which consists mostly of transcripts of their conversations, followed. (A meta-narrative of betrayal has, nonetheless, unfolded: David Foster Wallace’s widow and his estate have strenuously objected to the film, insisting that Wallace would never have wished the magazine interviews to be used this way.)
 
In “Although Of Course You End Up Becoming Yourself,” Lipsky writes that he was relieved by Wenner’s fiat that he shouldn’t write the piece, rather than experiencing it as a loss somewhere on the scale between devastating and irritating—the usual range of feelings available to a journalist upon having a piece killed. “I tried to write it, and kept imagining David reading it, and seeing through it, through me, and spotting some questionable stuff on the X-ray,” he writes. Lipsky was too lingeringly attached to the period of intimacy—of having momentarily befriended Wallace—to attain the necessary detachment to reshape that experience into a story. Given that, it’s probably just as well he didn’t have to write it; it wouldn’t have been a success. Any reporter may fleetingly fall in love with his or her subject during the process of researching a magazine profile—the singular dance chronicled by “The End of the Tour.” But for the work to be any good, the writer’s greatest libidinal pleasure must be discovered afterward: when the back-and-forth is over, and the recorder has stopped recording, and one is alone at the keyboard at last.
 

It's such a delicate balance. You need a very real and true interest in the subject, and yet when you finally go to write the piece, you need the professionalism to retreat from your biases, desires, ego, your very self, in order to do the subject justice.

It's such a tricky dance, and it's a balance I failed to find in the NYTimes piece on Amazon this past Saturday and in many of the responses from current and former employees. I was all ready to contribute my thoughts on the controversy here (I have hundreds of words in draft form), but I decided to put them on ice for a few days, to see if I might achieve some zen-like distance from which to edit myself.

I've recently taken a few baby steps into meditation, and on a flight today I reread Marcus Aurelius' Meditations. It's a coincidence that both share the word meditation, but both have much to offer in finding a path to that productive and clear-headed place from which to write well. It's love that starts you in the right direction, but it doesn't get you all the way there.

Tablets are mostly for consumption

While there is nothing inherently wrong with a long upgrade cycle, as seen with the Mac, which continues to report solid sales momentum, the reasoning behind holding on to tablets for years is much more troubling. There are currently approximately 3 million units of the original iPad still in use, or 20% of the devices Apple sold. For the iPad 2, it is possible that close to 60% of the units Apple sold are still being used. These two devices are not superior tablets. The initial iPad lacks a camera, while the iPad 2 has a mediocre camera. When compared to the latest iPads, these first two iPads are simply inferior tablets with slow processors, heavy form factors, and inferior screens. But none of that matters with owners. This is problematic and quite concerning, suggesting that many of these tablets are just being used for basic consumption tasks like video and web surfing and not for the productivity and content creation tools that Apple has been marketing. 
 
There are signs that Apple believes there may be some kind of iPad revival around the corner. Since the average iPad upgrade cycle is three years and counting, does this mean that Apple may benefit from some sort of upgrade cycle? I'm skeptical.  Why would someone upgrade an iPad that is just being used to watch video?
 

From Neil Cybart at Above Avalon. iPad defenders used to push back anytime anyone said that the device was just for consumption, highlighting people who used iPads to write music, sketch, or edit. I was always skeptical that such use was common since I was using the iPad primarily to read email and books, browse the web, skim social network feeds, and watch video. Lacking a physical keyboard, it was cumbersome to type on, and typing is my primary creative activity. It was also too large to carry with me everywhere, and as the iPhone grew larger, the phone offered most of what I needed from my iPad.

It turns out that's mostly what most people do on their iPads. I dig my iPad, don't get me wrong, but it is more of a “niche” product than the iPhone or even the MacBooks and MacBook Pros (I put niche in quotes because Apple sold 84 million of them in the first two years of its existence; that was a good-sized niche that Apple filled quickly). For me, the iPad is a bit of a luxury: it's lighter than my laptop and has superior battery life, and it has a larger screen than my iPhone but can still be held with one hand most of the time. So it ends up being a great device for lying in bed reading books or watching video. If I had to give up one of my three devices, though, no doubt the iPad would be the lowest on that totem pole.

I agree with Cybart that the rumored iPad Pro at least offers a possibility of differentiation. The larger screen size alone may open up some use cases. I just don't know what those might be or whether they'll be compelling. Perhaps if it's called Pro it's intended from the start for a niche audience, offering them a reason to upgrade. Cybart suggests a haptic keyboard could open up typing as a more popular mode of creation on iPads, but as I've never tried a haptic keyboard, I'm skeptical I'd enjoy it as much as a physical one. I'd love to be proven wrong, but I have a whole life's worth of typing on physical keyboards bearing witness against the defendant.

Another rumor has a Force Touch-compatible stylus shipping with the iPad Pro. That, along with the larger screen size, could open up some more illustration use cases, stealing share from Wacom tablets. Rumored split screen options open up multi-app interaction as a use case. Still, none of those sounds like a mass market, especially given what will likely be a higher price point for the Pro.

Ultimately, I'm not sure it matters if the iPad is a mass market device. If the new MacBook continues to evolve and steal share from iPads on one end while the iPhone steals share in the smaller screen size quadrant, Apple still captures the sale and the customer. I'm sure they wouldn't love to lose share to cheap Android tablets, but if that's where some of the market goes, I'm not sure Apple would care to follow with any sense of urgency.