Impact of landscape on the time horizon of your thinking

To reach that conclusion, Dr van Vugt and his team randomly assigned 47 participants to look either at three city photographs or at three country photographs, for two minutes each. Participants were then asked to pick between €100 ($135) now or a larger sum, which grew in €10 increments up to €170, in 90 days' time. Those beholding natural landscapes made the switch to deferred gratification at a sum, known as the indifference point, that was 10% below that of those who scanned cityscapes. The same was true when another 43 volunteers were asked to walk either in an actual forest outside Amsterdam or in Zuidas, the city's commercial district.

It turns out that our environment, the landscape we're in, may affect the time horizon of our decision-making. It's still not clear why.
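
To make the numbers concrete, here's a minimal sketch of the indifference-point arithmetic. The specific switch points are hypothetical; the study reports only that nature viewers' indifference points averaged about 10% below city viewers'.

```python
# Hypothetical indifference points illustrating the study's design:
# choose €100 now, or a larger sum in 90 days' time.

def implied_90_day_premium(now_amount, indifference_point):
    """The premium (as a fraction of the immediate sum) a participant
    demands before agreeing to wait 90 days."""
    return (indifference_point - now_amount) / now_amount

city_switch = 150.0                # hypothetical: a city viewer holds out until €150
nature_switch = city_switch * 0.9  # nature viewers switched about 10% lower: €135

print(implied_90_day_premium(100, city_switch))    # 0.5  -> demands a 50% premium to wait
print(implied_90_day_premium(100, nature_switch))  # 0.35 -> settles for a 35% premium
```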

What, then, is it about brooks and meadows that propels thoughts of the beyond? Dr van Vugt speculates that competition—for jobs, attractive partners and large bank accounts—is concentrated within cities, rendering them unpredictable. Unpredictability may in turn shunt people into the fast lane. He admits, however, that the study does not determine whether cities spur impulsive behaviour, or whether the countryside inspires patience. Or, indeed, whether the effect holds for different types of non-urban locale. Sublime deserts or the Arctic tundra may be awe-inspiring (as Coleridge himself would be the first to aver). But their inhospitability makes them possibly more unpredictable for their human inhabitants even than bustling Amsterdam.

Whatever the reason, for long-term planning it may indeed pay to get away from it all and escape into nature.

Fundamental attribution error and tech mercenaries

In social psychology, the fundamental attribution error (also known as correspondence bias or attribution effect) describes the tendency to overestimate the effect of disposition or personality and underestimate the effect of the situation in explaining social behavior. The fundamental attribution error is most visible when people explain the behavior of others. It does not explain interpretations of one's own behavior—where situational factors are more easily recognized and can thus be taken into consideration. This discrepancy between attributions for one's own behavior and for that of others is known as the actor–observer bias.

From Wikipedia. I've always heard fundamental attribution error used to describe people. But more and more, I believe it's applicable to companies as well.

That's not to say it isn't also still a problem when applied to people in technology. For example, a common belief is that tech workers in Silicon Valley are more mercenary than in other tech markets like Seattle or New York.

When I moved up to the Bay Area from Los Angeles in mid-2011, I was curious to experience this supposedly venal culture firsthand.

Having worked in all three of those markets over many years now (and having spent three and a half years at Hulu in Los Angeles), I'm not so sure the people are more mercenary in Silicon Valley. Many of the short tenures in the Bay Area may be explained by the environment rather than by some intrinsic ruthlessness on the part of the people living there.

For one thing, the number of tech companies in Silicon Valley just dwarfs that of any other tech market. Sure, fewer people left Amazon than I would have expected, but if you wanted to stay in Seattle, where else would you go? To Microsoft? RealNetworks?

The number of startups in the Bay Area also exceeds that of any other tech market by a huge margin. Since startups have such a high failure rate, they inevitably drag down average job tenure.

If you control for those two factors, would the Bay Area still rate as having a more mercenary culture? Someone with access to more data would seem to be able to answer this question quite easily (maybe LinkedIn has enough data to run such an analysis). My hypothesis, just based on my personal experience, is that the Bay Area's supposedly mercenary culture is just a technology version of the fundamental attribution error.
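
As a sketch of what that analysis might look like, assuming a LinkedIn-style dataset with one row per job stint (the data file and every column name here are hypothetical):

```python
# A hedged sketch of the proposed analysis: does the Bay Area tenure gap
# survive controls for market size and startup share? The file and columns
# are assumptions; LinkedIn's actual data model is unknown to me.
import pandas as pd
import statsmodels.formula.api as smf

stints = pd.read_csv("job_stints.csv")  # hypothetical: one row per job stint

# bay_area: indicator for stints at Bay Area employers.
# log_num_employers: how many alternative tech employers the market offers.
# startup_share: fraction of the market's tech jobs at startups, whose high
#                failure rate mechanically shortens tenures.
model = smf.ols(
    "tenure_years ~ bay_area + log_num_employers + startup_share",
    data=stints,
).fit()

# If the bay_area coefficient shrinks toward zero once the controls are in,
# the "mercenary culture" story looks more like attribution error.
print(model.params)
```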

If you're operating outside of the Bay Area and feeling fairly secure with your workforce, though, beware. The world is changing, and the fight is coming to your doorstep. For one thing, LinkedIn and other such services have made it easier and easier for companies everywhere to reach your employees with enticing offers. If you don't think every one of your employees is receiving multiple offers a week, if not a day, from recruiters, headhunters, and LinkedIn, you may be living in the '90s.

Think you're safe because your employees don't want to relocate? More and more companies are going to where the people are, opening satellite offices in any market with a good base of talent. Shoppers in many states may not be the only ones lamenting the fact that they now must pay sales tax on Amazon purchases. Competitors are also feeling the hit, as Amazon now has free rein to open offices in those states and staff them with abandon. There are three major Amazon offices in the Bay Area already, and they're recruiting aggressively. But the same thing is happening in Seattle, Amazon's home court; Facebook and Google, among others, have opened offices there.

This is not to mention the fact that some of the best employees can choose to work from wherever they want. It's rare, but not as rare as it once was. Almost every company I've worked at in my career now has some employees who work by themselves out of some random place. They're good enough, and demand for their skills so far outstrips supply, that they can spend most of their work year in a remote destination of their choice, maybe a home office in their hometown in North Dakota.

All of this is good news for employees, who now have more options as the liquidity and efficiency of the labor market surges. For employers, it's difficult to say whether it's a positive or negative thing. If you treat it as a zero-sum game versus employees, it must by definition be a negative if employees are gaining.

Within the tech sector, though, one might hypothesize that companies that can offer more cash benefit from being able to compete for employees anywhere. Startups, which must compete on the uncertain upside of stock options more than on cash, now have to contend with the tech giants even if they locate in more remote markets. However, this might just filter out those who are risk-averse and who'd flee at the first sign of adversity.

I began this post as an examination of the myth of the Bay Area's mercenary culture, but the larger theme may really be the reduction of labor arbitrage in the tech industry. By no means has it disappeared entirely; I am not advising you to start your next startup in Kansas City. If Jeff Bezos were starting Amazon.com in today's environment, though, it's not a slam dunk that he'd still choose Seattle.

Playing games on yourself

At a meeting last Tuesday, I told my colleagues that I would finish this column — which is about deadlines — by noon on Thursday. I spent part of Tuesday afternoon searching the word “deadlines” on Google, but didn’t make much progress. By late afternoon, I felt a tiny knot of fear in my stomach. What if I let my co-workers down? So I wrote something silly just to get started. This paragraph.

On the motivating power of deadlines, however artificial they might be.

People respond well to deadlines because meeting them provides a distinct feeling of having achieved something within a time frame.

Amazing how you can fool yourself, even though you know you're doing so. This might seem irrational, but many self-help books seem to posit a notion of multiple selves, often adhering to that popular dichotomy that places an angel on one shoulder, a devil on the other. Succeeding in life, then, becomes a game in which you outwit the devil, the bad self.

The psychological poverty trap

Shafir and Mullainathan tested the intelligence of sugarcane growers in India during two different periods: after selling the harvest, when they enjoy relative prosperity, and before the harvest, when times are tightest. The farmers had better IQ results during the season of plenty. Before the harvest they had problems making fateful decisions because of stress. The study concluded that poverty generates a psychology of its own.

Most of us judge poor people, viewing them at worst as lazy, at best as suffering from deficient financial behavior. We've gotten used to thinking that being poor is their fault: If they were smarter or more industrious they surely would have overcome their poverty.

Shafir, however, claims that the real culprit isn't lack of ability but problems created by poverty. "These problems are distracting and cause mistakes," he told Markerweek in an interview.

"When you're poor you're surrounded by bad decisions of people around you," he says. "You're so concerned about the present that you can't begin thinking about the future, and that's the big irony: People with the greatest need to think about the future don't have the leisure or emotional capacity to do so. The very essence of poverty complicates decisions and makes immediate needs so urgent that you start making wrong choices. These mistakes aren't any different from anyone else's, but they occur more frequently due to the element of stress, and their implications are much greater."

More of these insights into the psychological impact of poverty, all of them interesting, from poverty expert Eldar Shafir.

Clearly, the fundamental attribution error is dangerous when it comes to the poor, especially when crafting policy to combat poverty. I hope a better understanding of the psychological and decision-making impact of poverty will lead to greater empathy on the part of those more fortunate.

Studies like this also point to some of the potential advantages of behavioral economics over classical economics, which is built around the concept of rational actors. Put someone in a situation of comfort and wealth, and they'll tend to behave more rationally than someone in poverty with a staggering array of challenges weighing on them.

Previously posted here, also related: the persistence of poverty.

Microblogging as therapy

We all have those friends who post too frequently to Facebook, in ways that feel like overly transparent grasps at affirmation.

Perhaps we should be showing them more empathy. A new paper from researchers at Wharton (PDF) argues that less emotionally stable people use more emotional status updates and tweets to help regulate their emotional well-being.

The current research investigates both the causes and consequences of online social network use. Low emotionally stable individuals experience emotions more intensely and have difficulty regulating their emotions on their own. Consequently, we suggest that they use the microblogging feature on online social networks (e.g., Tweets or Facebook status updates) to help regulate their emotions. Accordingly, we find that less emotionally stable individuals microblog more frequently and share their emotions more when doing so, a tendency that is not observed offline. Further, such sharing, paired with the potential to receive social support, helps boost their well-being.

So the next time a friend posts a status update that feels like a plea for help, maybe that's exactly what it is, and maybe your Like is a cheap form of therapy. Free consumer surplus! Maybe all of Anil Dash's favoriting should be seen as a social service.

Happiness hacks

Happiness hacks are appealing because they're usually simple ways to wring more happiness out of life without really losing out in other ways. Dan Ariely addresses two common situations in this column in the WSJ:

  • Should you pay to park in a garage or spend time driving around looking for a street spot? 
  • How should you split dinner bills?

In Chinese culture, it's common to fight other diners to pick up the tab for dinner, and Ariely gives some psychological grounding for the logic of doing so.

The third approach, my favorite, is to have one person pay for everyone and to alternate the designated payer with each meal. If you go out to eat with a group relatively regularly, it winds up being a much better solution. Why? (A) Getting a free meal is a special feeling. (B) The person paying for everyone does not suffer as much as his or her friends would if they paid individually. And (C) the person buying may even benefit from the joy of giving.

Even before reading this Ariely column, we'd implemented something like this at work, primarily to minimize the psychic pain of transactional hassles like calculating bills, signing credit card slips, making change, etc. When we were working out of a house in Menlo Park, we'd all go out to lunch together each day. Instead of splitting every bill, one person would always pick up the tab, and Nick, one of our developers, would snap a photo of the receipt and keep a running tally of who owed whom. This made meals more pleasant for all of us. An ancillary benefit is that picking up bills accelerates the forming of tighter bonds between the people sharing the meal. Small financial commitments are a simple gateway drug to higher-level covenants.
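
For what it's worth, the tally itself is trivial to keep. Here's a minimal sketch of the bookkeeping, assuming equal shares (the names besides Nick's are made up):

```python
# Running tally for a rotating-payer lunch group: one person fronts the whole
# bill, and everyone's net position is tracked until it's their turn to pay.
from collections import defaultdict

balances = defaultdict(float)  # positive = the group owes you; negative = you owe

def record_meal(payer, diners, total):
    """One person picks up the whole tab; every diner accrues an equal share."""
    share = total / len(diners)
    for diner in diners:
        balances[diner] -= share  # everyone consumed their share
    balances[payer] += total      # the payer fronted the full amount

record_meal("Nick", ["Nick", "Ana", "Ben"], 45.00)
record_meal("Ana", ["Nick", "Ana", "Ben"], 60.00)
print(dict(balances))  # {'Nick': 10.0, 'Ana': 25.0, 'Ben': -35.0}
```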

Another simple hack that some restaurants have put into place is pre-paying for meals. High-end restaurants like Next Restaurant (and now Grant Achatz's other restaurant, Alinea) charge you for the meal when you score your reservation, often months in advance of the meal itself. This is beneficial for the restaurant, since a single cancellation can kill a high-end restaurant's margin for the night. But it's also good for the diner. The most unpleasant part of a fine dining meal is having a staggering bill dropped in your lap while you're still trying to digest dessert. By pulling that pain so far forward, the meal can end more pleasantly. You get up with whatever they've given you as a takeaway gift, and often you can't even remember what you paid for the meal in the first place. The sacrifice for the diner is a bit of free choice on the food and beverages, but most fine dining restaurants have a fixed tasting menu anyhow, and choosing the wine is more taxing than empowering for most diners.

Riding with Uber offers a bit of this benefit, since they have your credit card on file and you don't have to pay or calculate a tip when you get out of the vehicle. During the journey, there is no visible meter running, so you can't stress over the ever-increasing bill ticking upward in bright red numbers. The downside is that soon after your ride concludes, you get an email with the bill, which is often your last memory of the ride. For all but the ultra wealthy, that's not the ideal way to end the transaction.

I would not be surprised to see Uber implement some type of discount for a pre-pay account, where consumers might deposit $50 or some other amount at the start of the month and draw it down as they use the service. Offering riders a discount for choosing this option makes sense. For one thing, pre-paying probably makes you more likely to choose Uber over a taxi, since you want to use up your stored balance, especially if unused balances roll forward each month. More habitual usage then provides a greater volume of usage data to help Uber predict demand and position drivers ahead of time. Lastly, prepaid funds provide some short-term float to Uber.
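
A toy sketch of those mechanics, with an assumed 10% deposit bonus standing in for whatever discount might make sense (none of this is a real Uber feature):

```python
# Hypothetical prepaid ride account: deposits earn bonus credit, unused
# balance rolls forward, and overages fall back to the card on file.
class PrepaidAccount:
    def __init__(self, bonus_rate=0.10):  # assumed 10% incentive
        self.balance = 0.0
        self.bonus_rate = bonus_rate

    def deposit(self, amount):
        # The discount is delivered as bonus credit on each deposit.
        self.balance += amount * (1 + self.bonus_rate)

    def charge(self, fare):
        # Draw down stored credit first; bill any overage normally.
        covered = min(fare, self.balance)
        self.balance -= covered
        return fare - covered  # amount still charged to the card on file

account = PrepaidAccount()
account.deposit(50.00)        # $50 becomes $55 of ride credit
print(account.charge(18.00))  # 0.0 -> fully covered by stored balance
print(account.balance)        # 37.0 -> rolls forward to next month
```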

Companies in cities where Uber operates could be signed up for a corporate perks program in which the company deposits a monthly stipend into each employee's Uber account. That would be a great way for Uber to introduce itself to lots of new users and acquire them en masse, in addition to being a great perk in a city like San Francisco, where I can never seem to find a cab when I really need one.

Pronoun usage: a psychological tell

Psychologist James Pennebaker has unearthed a hidden code in the way we use pronouns, and he shares some of the more intriguing findings in this interview.

Basically, we discovered that in any interaction, the person with the higher status uses I-words less (yes, less) than people who are low in status. The effects were quite robust and, naturally, I wanted to test this on myself. I always assumed that I was a warm, egalitarian kind of guy who treated people pretty much the same.

I was the same as everyone else. When undergraduates wrote me, their emails were littered with I, me, and my. My response, although quite friendly, was remarkably detached -- hardly an I-word graced the page. And then I analyzed my emails to the dean of my college. My emails looked like an I-word salad; his emails back to me were practically I-word free.

More:

One of the most fascinating effects I’ve seen in quite awhile is that we can predict people’s college performance reasonably well by simply analyzing their college admissions essays. Across four years, we analyzed the admissions essays of 25,000 students and then tracked their grade point averages (GPAs). Higher GPAs were associated with admission essays that used high rates of nouns and low rates of verbs and pronouns. The effects were surprisingly strong and lasted across all years of college, no matter what the students’ major.

To me, the use of nouns -- especially concrete nouns -- reflects people’s attempts to categorize and name objects, events, and ideas in their worlds. The use of verbs and pronouns typically occur when people tell stories. Universities clearly reward categorizers rather than story tellers. If true, can we train young students to categorize more? Alternatively, are we relying too much on categorization strategies in American education?
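
The counting behind findings like these is easy to approximate. Here's a rough sketch of an I-word rate; Pennebaker's actual tool, LIWC, uses far richer word categories, and the example emails are invented, in the spirit of the student/dean exchanges above.

```python
# Crude approximation of Pennebaker-style pronoun counting: the fraction of
# words in a text that are first-person-singular pronouns.
import re

I_WORDS = {"i", "me", "my", "mine", "myself"}

def i_word_rate(text):
    """Fraction of words that are first-person-singular pronouns."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in I_WORDS for w in words) / max(len(words), 1)

student_email = "I was hoping you could look at my draft when I get it to you."
professor_reply = "The draft looks solid. Tighten the second section and resubmit."

print(i_word_rate(student_email))    # 0.2 -> I-words litter the request
print(i_word_rate(professor_reply))  # 0.0 -> the higher-status reply avoids them
```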

Ben Zimmer reviews Pennebaker's book, The Secret Life of Pronouns, and offers one cautionary tale of note. Many political pundits excoriate Obama for using I/me/my too often in his speeches, but using statistical analysis, Pennebaker determines that Obama uses those words less frequently than any modern U.S. president (post-Truman).

But Pennebaker warns that this doesn't mean Obama is less confident than past Presidents. In fact, it's a sign of his self-confidence that he uses first-person pronouns with such low frequency.