The 130 million pixel camera

We all have them. Forget Apple's; the original retina display is still the best: the human eye.

The article is fascinating throughout. For example, the focal length of the lens that best approximates human vision is not 50mm, as is commonly supposed, but 43mm. Its aperture is roughly f/3.2 to f/3.5. Because the human retina is curved, it is sharper in the corners than a camera sensor, which is flat, leaving the corners of the sensor farther from the lens than its center. Of the human eye's roughly 130 million pixels, only 6 million see color.

We are still waiting for some new type of connector or bus that will allow retina displays larger than those on today's MacBook Pros. The amount of data to transmit is beyond what the existing Thunderbolt connectors can carry.

So how does your brain deal with 130 million pixels of information being thrown at it in a constant stream? The answer is it doesn't.

The subconscious brain also rejects a lot of the incoming bandwidth, sending only a small fraction of its data on to the conscious brain. You can control this to some extent: for example, right now your conscious brain is telling the lateral geniculate nucleus “send me information from the central vision only, focus on those typed words in the center of the field of vision, move from left to right so I can read them”. Stop reading for a second and without moving your eyes try to see what’s in your peripheral field of view. A second ago you didn’t “see” that object to the right or left of the computer monitor because the peripheral vision wasn’t getting passed on to the conscious brain.

If you concentrate, even without moving your eyes, you can at least tell the object is there. If you want to see it clearly, though, you’ll have to send another brain signal to the eye, shifting the cone of visual attention over to that object. Notice also that you can’t both read the text and see the peripheral objects — the brain can’t process that much data.

The brain isn’t done when the image has reached the conscious part (called the visual cortex). This area connects strongly with the memory portions of the brain, allowing you to ‘recognize’ objects in the image. We’ve all experienced that moment when we see something, but don’t recognize what it is for a second or two. After we’ve recognized it, we wonder why in the world it wasn’t obvious immediately. It’s because it took the brain a split second to access the memory files for image recognition. (If you haven’t experienced this yet, just wait a few years. You will.)

ADDENDUM: The way human vision works, always putting the center of your vision in focus and blurring the edges so as to avoid overwhelming your brain with data, is somewhat replicated in form by these hyperphotos. That is, you are presented a photo with some baseline of resolution, but as you drill in on particular sections, the photo zooms and increases the resolution.

Acquerello carnaroli rice

If you are making risotto, accept no substitutes for Acquerello carnaroli rice. Many use arborio rice, but carnaroli has an even shorter grain, and Acquerello ages its rice for a year and then seals it in a can, away from moisture, until you are ready to use it.

In true risotto, you should taste the integrity of each rice grain. Stirring too vigorously shatters grains and produces porridge. Not bad, but not risotto.

I learned this and some other useful tips in a cooking class with Chef Thomas McNaughton at Flour + Water on Monday night. I remade the risotto again tonight, and it came out great. An easy dinner party centerpiece as the preparation is not strenuous, and you and your guests can sip some wine while you stir.

Baumol's Cost Disease

It may not seem like an honor to have a term like "cost disease" named after you, but William Baumol's new book The Cost Disease is one of the more concise, enlightening economics books I've read recently.

Baumol's thesis is that certain service sectors, most notably healthcare and education, are doomed to outpace inflation because they are so dependent on labor.

Is there hope for education costs coming down? Will Harvard and other universities with massive endowments decide to subsidize higher education? Unlikely. But disruption tends not to come from incumbents, so we wouldn't look there anyway.

Perhaps education cost disruption comes from something like online education. As Alex Tabarrok writes, online education gives teachers massive leverage.

In 2009, I gave a TED talk on the economics of growth. Since then my 15 minute talk has been watched nearly 700,000 times. That is far fewer views than the most-watched TED talk, Ken Robinson’s 2006 talk on how schools kill creativity, which has been watched some 26 million times. Nonetheless, the 15 minutes of teaching I did at TED dominates my entire teaching career: 700,000 views at 15 minutes each is equivalent to 175,000 student-hours of teaching, more than I have taught in my entire offline career.[1] Moreover, the ratio is likely to grow because my online views are increasing at a faster rate than my offline students.
Teaching students 30 at a time is expensive and becoming relatively more expensive. Teaching is becoming relatively more expensive for the same reason that butlers have become relatively more expensive–butler productivity increased more slowly than productivity in other fields, so wages for butlers rose even as their output stagnated; as a result, the opportunity cost of butlers increased. The productivity of teaching, measured in, say, kilobytes transmitted from teacher to student per unit of time, hasn’t increased much. As a result, the opportunity cost of teaching has increased, an example of what's known as Baumol’s cost disease. Teaching has remained economic only because the value of each kilobyte transmitted has increased due to discoveries in (some) other fields. Online education, however, dramatically increases the productivity of teaching. As my experience with TED indicates, it’s now possible for a single professor to teach more students in an afternoon than was previously possible in a lifetime.
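Tabarrok's back-of-the-envelope figure is easy to verify. A quick sketch in Python (the view count and talk length come from the quote above; the offline-career numbers are purely illustrative assumptions, not his):

```python
# Back-of-the-envelope check of Tabarrok's online-vs-offline teaching reach.
# The TED figures come from the quote; the offline-career assumptions
# (class size, courses per year, contact hours, career length) are
# hypothetical, chosen only to show the order of magnitude.

views = 700_000      # TED talk views (from the quote)
talk_minutes = 15    # length of the talk

online_student_hours = views * talk_minutes / 60
print(online_student_hours)  # 175000.0 -- matches the quote's figure

# Hypothetical offline career: 30 students per class, 2 courses a year,
# 40 contact hours per course, over a 30-year career.
offline_student_hours = 30 * 2 * 40 * 30
print(offline_student_hours)  # 72000 -- less than one afternoon at TED
```

Under even generous offline assumptions, a single recorded talk outweighs decades in the classroom, which is the leverage Tabarrok is pointing at.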

I don't think online universities will ever adequately replace attending a university in person for certain things (socialization, live human feedback, and the signaling value of a degree from an actual university have tangible value), but I've taken several online courses and for certain subject matters they are more than adequate at transmitting knowledge.

Baumol argues that we shouldn't panic so much about rising costs in healthcare and education because we're saving a lot of money in areas that aren't as dependent on labor, but that doesn't mean we shouldn't take a hard look at how to keep costs in both areas down.

In healthcare, consumers are so removed from the actual cost side of the equation that it doesn't function much like an efficient marketplace at all. I see the doctor, I pay my copay of $15, and then when the bill comes I have no idea whether I was given a good deal or not; I just hope my insurance covers as much of it as possible.

As for the value of what I receive from the healthcare industry, it's extremely difficult to gauge. Early in life, what I want tends to be very binary. Cure my sinus infection. Fix my broken leg. Reconstruct my ACL. At the end of my life, the value equation shifts dramatically; still difficult to value, but in a completely different way. How do you quantify the value of an additional month of life? An additional year? Three years? And how much of the credit for that extra time can you properly assign to the physician?

One reason Baumol's Cost Disease is more prevalent than it would otherwise be is that wages tend to be sticky. This was hammered home recently in the story of Hostess, which, in the face of declining sales, asked its workers' unions to accept pay cuts, which the unions refused. That the executives had awarded themselves pay raises, or that the root of the issue is really that no one eats Ding Dongs and Twinkies anymore, doesn't negate the point: wages rarely go down in the real world, as they might in a truly efficient marketplace. I've actually never been at a company where any employees were asked to take a pay cut. Generally companies just freeze the salary of low performers, and that's enough of a signal that folks move on.

I find new-fangled labor marketplaces like TaskRabbit and Zaarly intriguing mostly as economic experiments in true wage elasticity. Companies don't generally ask people to work for lower wages; that's not a good signal in the recruiting marketplace. Rather, they'll approach it by trying to squeeze more work out of people at the same salary, which is a more subtle approach.

Low-end disruption in the tech labor marketplace can happen, though it's most likely if initiated by the laborers themselves. In practice we call these people interns.

Regressive taxes

I doubt banks rank highly on any list of companies consumers love, but one reason I really dislike banks is their use of all sorts of insidious hidden charges to turn a profit on consumers. What's worse, most of these penalties are regressive taxes.

As this article notes, overdraft fees on checking accounts can quickly add up. Banks don't work hard to keep you from the mistakes that trigger these fees, but they will often waive them if you keep a lot of money with them. Thus these taxes tend to be regressive in nature, hitting lower-income people the hardest.
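To see why a flat fee is regressive, look at it as a fraction of the money actually involved. A toy sketch (the $35 fee is a commonly cited figure for overdrafts; the overdraft amounts are hypothetical):

```python
# Toy illustration of why a flat overdraft fee acts like a regressive tax:
# the same dollar fee is a far larger share of a small overdraft than of
# a large one. The $35 fee is a commonly cited figure; the overdraft
# amounts below are hypothetical.

fee = 35.0

for overdraft in (20.0, 100.0, 500.0):
    # Fee expressed as a fraction of the shortfall it penalizes
    effective_rate = fee / overdraft
    print(f"${overdraft:.0f} overdraft -> fee is {effective_rate:.0%} of it")
```

A customer who slips $20 into the red effectively pays a 175% charge, while someone overdrawing $500 pays 7%; the smaller your balance, the harsher the penalty.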

I dislike state lotteries for the same reason: the people who tend to play are poorer and less educated. Say what you will about paternalism, but I don't respect companies that build their business on regressive taxes.

Say what you will about Twitter's recent moves, but if the tax they're going to impose on their users is advertising, at least it's one that seems to cost the users who can afford it the most. Look at the composition of those who ponied up $50 for an App.net account or $20 for a license for Tweetbot for Mac. They're largely early adopter tech geeks with enough disposable income to choose to opt out of Twitter's taxes on their user experience. Most other people are happy to pay with their attention and subject themselves to a more constrained set of options to use Twitter rather than pay with cash.

Personally, I hate ads, and much of advertising cynically preys on human weaknesses in ways that are detrimental to consumers' well-being. But tech geeks can be overly hard on ad-supported businesses in a way that shows a callous disregard for the consumer surplus so many of them bring to massive user bases. By their very nature, ad-supported businesses generally help much larger groups of users than paid businesses do.

Some of the Zynga games, like Farmville, seem to skirt dangerously close to being regressive taxes. Free to play, they attract many users who wouldn't pay for a game up front. After the players are trapped, the game starts offering up enticements that cost money. And even if you don't pay up for those, you have to pay with hours of your attention. Enjoyable hours, perhaps, but when I hear about people waking up in the middle of the night to milk virtual cows, or whatever it is people do in those games, alarm bells go off over who is paying what price.

[Somewhat related: in sports, many have made very reasonable arguments for legalizing the safest performance-enhancing substances. After all, hunting down retired athletes like Bonds, Clemens, and Armstrong cost over a hundred million dollars of taxpayer money, and the social benefit was arguable. But one argument for continuing to keep those substances out of sports is their very high cost. If they are essential to winning, then given the lottery-like payouts inherent to many sports, allowing them just shuts lower-income folks out of competing at the highest levels. Being a professional athlete is expensive enough as it is.]

One very simple way we tech geeks can do some social good is by assessing the social impact of the products and services we use. You generally don't see that dimension considered in any of the traditional reviews of tech products from the Mossbergs and Pogues and everyone else (myself included), because we're so accustomed to measuring products on dimensions like speed, resolution, and usability.

The desire to do good and the profit imperative are uneasy bedfellows.