Learning curves sloping up and down

One of the great inefficiencies of humanity as a species is the need to re-educate every successive generation. I think of this when playing with my nieces and nephews and my friends' children. Adults have to spend so much time teaching infants and children things we've already learned, and the process of knowledge transfer is so lossy. The entire education system can be seen as a giant institution for transferring knowledge from one generation to the next, like some crude disk drive, and these days it's rising in price despite not measurably improving.

Artificial intelligences need not go through this because they don't die abruptly like humans; they can evolve continuously without hard resets. This is one of their chief advantages over human intelligence. To take a modern example, self-driving cars should only improve from here on out, and each new one we build can be as smart as the smartest self-driving car as soon as it's assembled. Every node on the network has access to the intelligence of the network.

All this human intelligence cut short by mortality is a curse, but given human nature, it is also critical to forward progress. People's views calcify, so death is a way of wiping the slate clean to make way for ideological progress. Part of why racism and sexism, to take two social ills, decline over time is simply that the racists and sexists die out.

This plays out at a corporate level, too. Companies can have both too long and too short a memory. New employees have to be taught the culture and catch up to what others before them learned so they can be as productive as possible. On the other hand, institutions can become set in their ways, less adaptive as their environments evolve. New blood can bring fresh eyes.

One form of this is institutional trauma. A company tries to enter a space, fails, and never ventures into that space again, even if the timing for entry later becomes more favorable. I look at a product like Google Wave and think that if Google had stuck with it, they might have built something like Slack.

Why do companies slow down as they grow larger? One reason is that in a hierarchical organizational structure, the more people and more levels you pile in, the more chances someone somewhere will say no to any idea. Bureaucracy is just institutionalized veto power growing linearly with organizational size.
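The arithmetic behind that intuition is worth making explicit. As a toy model (the 10% veto probability is an illustrative assumption, not a figure from any study), suppose each of n independent reviewers vetoes a given idea with some fixed probability; the odds an idea survives collapse quickly as layers are added:

```python
# Toy model: probability an idea survives n independent reviewers,
# each of whom vetoes it with probability p_veto.
def survival_probability(n_reviewers: int, p_veto: float) -> float:
    return (1 - p_veto) ** n_reviewers

# Even a mild 10% veto rate per reviewer compounds fast:
for n in (1, 5, 10, 20):
    print(n, round(survival_probability(n, 0.10), 3))
# -> 1 0.9, 5 0.59, 10 0.349, 20 0.122
```

With twenty sign-offs required, nearly nine out of ten ideas die somewhere in the chain, which is one way to read "institutionalized veto power growing linearly with organizational size."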

One theory for why evolution gives us just enough of a lifespan to bear offspring but not stay around too long is that it reduces competition for our offspring's resources. Old-timers who rise in an organization can likewise compete for resources with new employees, but without the disadvantages of biological old age. Most who survive at a company have risen to the level where they have disproportionate institutional power. It's often deserved, but it's also dangerous. True disruption is difficult for a company to counter because it attacks the strongest part of the business, and that division or unit tends to be the one with the most power in the organization.

Companies try to counter this by dividing themselves into smaller units even as they grow in the aggregate. Jeff Bezos tried localizing decision-making power at Amazon in what he called two-pizza teams (the size of the team being one that could be fed by two pizzas). Facebook acquires companies like Instagram and WhatsApp but lets them run largely independently. Google's new Alphabet org structure breaks itself into a looser coalition of entities where each division has more degrees of freedom strategically. All are attempts to keep the weight of bureaucratic middle management off of the creatives, to preserve greater dimensionality and optionality throughout the organization.

Amazon is one company that often wins just by being more patient than its competitors, playing games on a much longer time scale than most. It also tends to be less susceptible to institutional trauma. Of course, part of this is the result of the unique ownership structure that companies like Amazon, Facebook, and Google have managed to pull off: ultimate decision-making power rests in the hands of the founders even as they leverage the benefits of the public market.

However, it's more than that. When Bezos was asked at Recode last year how he decided when to give up on a project, he said something striking: we give up on something when the last high-judgment person in the room gives up on it.

What a brilliant heuristic. Simple and memorable. Of course, deciding who is high judgment is its own challenge, but this concept reverses the usual problem of bureaucracy, which is that it takes only one person saying no to kill something. Jeff inverts that; he wants the company to be as smart on any topic as its single smartest person.

At some point in life, it probably is rational to be that old dog who eschews new tricks. If you're going to die soon anyhow, you're more likely to just suffer the discomfort of having to adjust and then die before you can reap any rewards. The corporate version of this is the concept that most executives should just squeeze the maximum profit out of their existing thinking and not bother trying to stave off disruption. It might be more energy and resource efficient to just have some of the stalwarts die off rather than shift the thinking of tens of thousands of employees, or change a culture which has evolved over decades.

How to allocate subsidies most effectively

Sometimes you hear something that sounds so much like common sense that you end up missing how it overturns everything you were actually thinking, and points in a far more interesting and disturbing direction. That’s how I’m feeling about the coverage of a recent paper on student loans and college tuition coming out of the New York Federal Reserve, “Credit Supply and the Rise in College Tuition: Evidence from the Expansion in Federal Student Aid Programs,” by David Lucca, Taylor Nadauld, and Karen Shen.
They find that “institutions more exposed to changes in the subsidized federal loan program increased their tuition,” or for every dollar in increased student loan availability colleges increased the sticker price of their tuition 65 cents. Crucially, they find that the effect is stronger for subsidized student loans than for Pell Grants. When they go further and control for additional variables, Pell Grants lose their significance in the study, while student loans become more important.
There’s been a lot of debate over this research, with Libby Nelson at Vox providing a strong summary. I want to talk about the theory of the paper. People have been covering this as a normal debate about whether subsidizing college leads to higher tuition, but this is a far different story. It actually overturns a lot of what we believe about higher education funding, and means that the conservative solution to higher education costs, going back to Milton Friedman, will send tuition skyrocketing. And it ends up providing more evidence of the importance of free higher education.

Thus begins this piece by Mike Konczal, fascinating throughout. This is a true mystery: why does tuition rise when more student loans are available, and why doesn't it rise just as much when funding comes in the form of Pell Grants? Konczal explains why this is strange:

David Boaz at the Cato Institute has a snarky post in response to the study, saying that “[u]nderstanding basic economics” would have predicted it. This is false, because economics 101 would have predicted the opposite. Economists fight a lot about this [1], but the simple economics story is clear. According to actual economics 101, letting students borrow against future earnings should have no effect on prices.
This derives from something called the Modigliani-Miller Theorem (MM), the frustrating staple of corporate finance 101 courses. A quick way of understanding MM is that how much you value an asset or investment, be it a factory or higher education, should be independent of how you finance it. Whether you pay cash, a loan, your future equity, a complicated financial product, or some other means that doesn’t even exist yet, you ultimately value the asset by how profitable and productive it is. In this story, which requires abstract and complete markets, expanding credit supply won’t drive tuition higher.
Now what would change your valuation, according to this theorem, is getting subsidies, say in the form of Pell Grants. This would make you willing to buy more and pay a higher price. This is one of the reasons why so much of the economics research focuses on Pell Grants instead of student loans: the story about what is happening is clearer. But, again, extensions of the credit supply, not subsidies, are doing the work here.

Conservatives position an increase in the student credit supply as enabling students to borrow against future earnings. I even read somewhere last year about a company that wanted to allow actors or other celebrities to sell ownership of their future earnings. You could become a shareholder of, say, Jennifer Lawrence by fronting her some cash now in exchange for her take from future Hunger Games and X-Men movies and whatever else she does.

In the case of education, this entire proposal doesn't work if the increase in credit supply is met with an equal increase in tuition. Why does tuition increase in lockstep with credit supply? Konczal isn't sure, and it's the central mystery.

Note that it isn’t clear why students borrowing more against their future is driving increases in tuition they’ll pay. It could be “rational” under arcane definitions of that word. It could be that in a winner-take-all economy, in which those at the top do fantastically and those who don’t make it do not make it at all, leveraging up and swinging for the fences is a smart play. It could be that liquidity and credit are important determinants of the economy as a whole rather than a neutral veil over real resources. It could be as simple as the fact that 18-year-olds aren’t highly calculating supercomputers solving thousands of Euler equations of their future earnings into an infinite future, but instead a bunch of kids jacked up on hormones doing the best they can with the world adults provide them.

This article on public options dives deeper into the topic.

So far, so familiar. The interesting question is what happens when we generalize this logic to other areas, like higher education. Imagine a state that's considering a choice between spending, let's say, $1 million either subsidizing its public university system, enabling it to keep tuition down, or as grants to college students to help them pay tuition. On the face of it, you might think there's no first-order difference in the effect on access to higher ed -- students will spend $1 million less on tuition either way. The choice then comes down to the grants giving students more choice, fostering competition among schools, and being more easily targeted to lower-income households; versus whatever nebulous value one places on the idea of public institutions as such. Not surprisingly, the grant approach tends to win out, with an increasing share of public support for higher education going to students rather than institutions.

But what happens when you bring price effects in? Suppose that higher education is supplied inelastically, or in other words that there are rents that go to incumbent institutions. Then some fraction of the grant goes to raise tuition for existing college spots, rather than to increase the total number of spots. (Note that this must be true to at least some extent, since it's precisely the increased tuition that induces colleges to increase capacity.) In the extreme case -- which may be nearly reached at the elite end -- where enrollment is fixed, the entire net subsidy ends up as increased tuition; whatever benefit those getting the grants get, is at the expense of other students who didn't get them.

Conversely, when public funds are used to reduce tuition at a public university, they don't just lower costs for students at that particular university. They also lower costs at unsubsidized universities by forcing them to hold down tuition to compete. So while each dollar spent on grants to students reduces final tuition costs less than one for one, each dollar spent on subsidies to public institutions reduces tuition costs by more.
The same logic applies to public subsidies for any good or service where producers enjoy significant monopoly power: Direct provision of public goods has market forces on its side, while subsidies for private purchases work against the market. Call it progressive supply-side policy. Call it the general case for public options. The fundamental point is that, in the presence of inelastic supply curves, demand-side subsidies face a headwind of adverse price effects, while direct public provision gets a tail wind of favorable price effects. And these effects can be quite large.
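The inelastic-supply argument in the passage above can be sketched with a toy linear supply-and-demand model (the slopes below are illustrative, not estimates from the paper): the less elastic the supply of seats, the more of each grant dollar is absorbed into a higher sticker price rather than more capacity.

```python
# Linear toy model: demand Q = a - b*(P - s), supply Q = c + d*P,
# where s is a per-student grant. Solving for the equilibrium price:
#   P = (a + b*s - c) / (b + d),  so  dP/ds = b / (b + d).
def price_pass_through(b_demand_slope: float, d_supply_slope: float) -> float:
    """Fraction of each grant dollar absorbed into a higher price."""
    return b_demand_slope / (b_demand_slope + d_supply_slope)

# As supply becomes more inelastic (d -> 0), pass-through approaches 1,
# the extreme fixed-enrollment case described above:
for d in (10.0, 1.0, 0.1, 0.0):
    print(d, round(price_pass_through(1.0, d), 3))
# -> 10.0 0.091, 1.0 0.5, 0.1 0.909, 0.0 1.0
```

Read against the study's 65-cent estimate, this framing suggests demand-side subsidies to higher education operate well toward the inelastic end of that range.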

Education is not the same as schooling

There has arisen a kind of parallel network – a lot of it is on the Internet, a lot of it is free – where people teach themselves things, often very effectively. But there is a kind of elitist bias: people who are good at using this content are people who are already self-motivated. 
The better technology gets, the more human imperfections matter. Think about medicine: the better pharmaceuticals get, the more it matters which people neglect to actually take them in the right doses. Education is entering the same kind of world. There’s so much out there, on the Internet and elsewhere. It’s great; but that means that human imperfections, like just not giving a damn, will matter more and more.
What concrete changes would I make in schools? The idea that you need to take a whole class to learn some topic is absurd. Whatever you’ve learned is probably going to be obsolete. A class is to spur your interest, to expose you to a new role model, a new professor, to a new set of students. We should have way more classes which are way shorter. It should be much more about learning, more about variety, give up the myth that you’re teaching people how to master some topic; you’re not! You want to inspire them; it’s much more about persuasion, soft skills. 

Short and sweet from Tyler Cowen.

Related, competency-based education:

For the most part, colleges and universities have changed very little since the University of Bologna gave the first college lectures in 1088. With the exception of Massive Open Online Courses, or MOOCs—free lectures and courses on the Internet—most university learning still requires students to put their butts in seats for a certain number of hours, complete a list of courses, and pass tests demonstrating that they learned from those courses (or were able to successfully cram for over the course of a few days).
But a new model is upending the traditional college experience, and has the potential to change the way universities—both new and old—think about learning.
Called competency-based education, this new model looks at what students should know when they complete a certain degree, and allows them to acquire that knowledge by independently making their way through lessons. It also allows students who come into school with knowledge in a certain area to pass tests to prove it, rather than forcing them to take classes and pay for credits on information they already know.

A model that focuses horizontally on the accreditation function of schools, rather than competing with the full vertical stack offered by a university. Seems like a model that could be useful in companies, or to companies, as well. Today, for many job functions, say product management, a college resume only obliquely hints at competencies; it functions more as a signal of one's generalized learning ability and willpower.

Software engineering interviews have a version of competency tests in the form of coding questions or challenges, but lots of business competencies aren't really tested optimally with a live interview. The ideal interview is over a longer period of time, goes into more depth, and in its most optimal form may be just an internship, but not all candidates are willing or able to do an internship, especially those who aren't in college or just graduating.

Face-to-face interviews are good for testing chemistry (which makes them a reasonable method for job roles where that's a key attribute, like sales), but they are susceptible to all sorts of unconscious biases and often just flatter the interviewers into believing in their powers of observation. It would be interesting to compare face-to-face interviews to a competency-based interviewing method that eschews in-person exchanges altogether. As radical as that might sound, I'm confident it would reduce many types of discrimination.

For profit schools

These 20 schools are responsible for a fifth of all graduate school debt. Can you guess any of them? I only guessed one, DeVry University, and I've never even heard of the top school on the list.

I'd be curious what degrees students would pursue if they were given a report their first week in school showing the job prospects of graduates: employment rates, mean and median salaries, years to pay off school debt, etc.

(h/t @kenwuesq)

Is there less training in the knowledge economy?

I am very much an Uber fan, but if you are looking for drawbacks that passage expresses one potential problem.  Pre-Uber, acquiring worker talent required lumpier investments on the part of the employer.  You would hire a bunch of people, with the expectation of keeping them around for a while, and then train them to do a bunch of things.  Some of them would work their way up the proverbial ladder, based on what you had taught them, many would not.  But you would train and teach them quite a bit, if only because there was no alternative for getting things done.

In a “sharing economy,” a pre-trained worker is very often on call for a short stint, when needed.  The employer thus has less need to invest in option value from the full-time work force and that means less training.  The result is that more workers will have to teach and train themselves, whether for their current jobs or for a future job they might have later on.

I submit many people cannot train themselves very well, even when the pecuniary returns from such training are fairly strongly positive.  The “at work social infrastructure” for that training is no longer there, and so many sharing economy workers will stay put at their ex ante levels of knowledge.

Tyler Cowen on the deficit of training in the sharing economy.

It's not just the sharing economy, though. The whole knowledge economy sector seems to put less into employee training. Is this different than in decades past? I've worked my whole career in this space, so I have no basis for comparison to a bygone era.

The usual caveats about causation/correlation apply, but with employee tenures being so short in the knowledge economy, perhaps it's not surprising that employee training has diminished. The pace of change, the dynamic competition, the rapid growth, and the constant organizational reconfigurations are other factors that lower the return on investment of on-the-job training. Though it may be cheaper to groom someone from within, it's tempting for leaders in tech to just poach talent from other companies; never has it been so easy to identify top people within other companies' walls (thanks to services like LinkedIn and the generally higher connectedness of tech employees in this networked age).

There are exceptions, of course, but it's best to head into tech assuming you'll need to invest heavily in self-training and be pleasantly surprised if things turn out differently. Most tech executives I know are stretched so thin that actively training others doesn't even make the bottom of their list.

When I speak to most younger students and recent grads, I advise them to think of their education as a lifelong endeavor and not something that ends when they palm their college diploma. Especially in technology, many people will likely acquire new skills multiple times in their career, a college degree serving just as the first notable signal that they're responsible learners.

The positive is that the internet and web have created a vast reservoir of free knowledge. The difficulty is making sense of it all. White-collar job knowledge, especially in tech, is either trapped inside specific people's or companies' heads, or it's out there but badly archived and organized.

Competing against robots

Some scholars are trying to discern what kinds of learning have survived technological replacement better than others. Richard J. Murnane and Frank Levy in their book “The New Division of Labor” (Princeton, 2004) studied occupations that expanded during the information revolution of the recent past. They included jobs like service manager at an auto dealership, as opposed to jobs that have declined, like telephone operator.
The successful occupations, by this measure, shared certain characteristics: People who practiced them needed complex communication skills and expert knowledge. Such skills included an ability to convey “not just information but a particular interpretation of information.” They said that expert knowledge was broad, deep and practical, allowing the solution of “uncharted problems.”
These attributes may not be as beneficial in the future. But the study certainly suggests that a college education needs to be broad and general, and not defined primarily by the traditional structure of separate departments staffed by professors who want, most of all, to be at the forefront of their own narrow disciplines. But this old departmental structure is still fundamental at universities, and it is hard to change.

Full article here from Robert Shiller.

A few random thoughts. Disciplines that are purely about knowledge accumulation are risky if the type of knowledge acquired is the kind computers can accumulate in a fraction of the time. Lots of Ph.D.s seem unlikely to be economically worthwhile considering the cost of higher education.

Watch the virtual assistant on your phone. Siri or Google Now are good benchmarks for what skills are becoming obsolete, and which are still of great value.

Most humans still prefer a bit of entropy and warmth from those they interact with, especially in the service sector, and indexing high on that still commands a premium.

The end of routine work

In recessions of the 1960s and 1970s, routine jobs would fall during the recession but quickly snap back. But after the recession in 1990, something changed. Routine jobs fell and, as a share of the population, never recovered. In the recessions in 2001 and in 2007-09 they fell even further. The snapback never occurred, suggesting that many firms began coping with recessions by scrapping tasks that could be automated or more easily outsourced.
For his part, Mr. Siu thinks jobs have been taken away by automation, more than by outsourcing. While some manufacturing jobs have clearly gone overseas, “it’s hard to offshore a secretary.” These tasks more likely became unnecessary due to improving technology, he said.
In the late 1980s, routine cognitive jobs were held by about 17% of the population and routine manual jobs by about 16%. Today, that’s declined to about 13.5% and 12%. (The figures are not seasonally adjusted and so are displayed in the chart as 12-month moving averages, to remove seasonal fluctuations).


But they are not among the labor market’s pessimists who fear that robots will render humans obsolete. Their work shows the economy has continued to generate jobs, but with a focus on nonroutine work, especially cognitive. Since the late 1980s, such occupations have added more than 22 million workers.

By Josh Zumbrun. The idea that U.S. unemployment has jumped to a higher plateau because of jobs moving overseas or because they're replaced by technology is not a new one, but it's useful to see data and charts to support the claim.

Brad Delong comments:

Note that these jobs are “routine” only in the sense that they involve using the human brain as a cybernetic control processor in a manner that was outside the capability of automatic physical machinery or software until a generation ago. In the words of Adam Smith (who probably garbled the story):
In the first fire-engines, a boy was constantly employed to open and shut alternately the communication between the boiler and the cylinder, according as the piston either ascended or descended. One of those boys, who loved to play with his companions, observed that, by tying a string from the handle of the valve which opened this communication to another part of the machine, the valve would open and shut without his assistance, and leave him at liberty to divert himself with his playfellows. One of the greatest improvements that has been made upon this machine, since it was first invented, was in this manner the discovery of a boy who wanted to save his own labour…
And Siu and Jaimovich seem to have gotten the classification wrong: A home-appliance repair technician is not doing a routine job–those jobs are disappearing precisely because they are not routine, require considerable expertise, are hence expensive, and so simply swapping out the defective appliance for a new one is becoming more and more attractive.

As any economist would prescribe, it's become more critical when thinking about one's career and education to focus on humans' comparative advantage versus computers. But I also recommend people focus on their own unique comparative advantage: any intelligent person can do many things, but what can you do better than most anyone else? In winner-take-all markets, more common in this third industrial revolution, it's ideal to give yourself the best chance to be one of those winners, and consider it a bonus that those areas often overlap with one's personal passions (whatever you think of the 10,000 hour rule, most would agree it's easier to sustain through so many hours if one is more emotionally invested).

It's also best to accept that one will have to learn new skills many times in one lifetime. It used to be that once you finished college, the education phase of life was considered over. This is already obsolete for many. Even most programmers, supposedly the most insulated workers from technological job obsolescence, have to learn new programming languages or technologies on the job every few years now.

In the future, education will generally be accepted to mean a lifelong process. Continuing education will be the default. An undergraduate degree will simply be a first milestone in signaling one's skills and sociability to potential employers. More time will have to be set aside to level up, and resources and services to support this lifelong education continue to proliferate. I view the need for lifelong learning as a positive. Who was it that said that you're young at heart to the degree that what you want to learn exceeds what you already know?

Seriously, who said that? I don't know. Add that to my list of things to learn.

Harvard's discrimination against Asian Americans

A similar injustice is at work today, against Asian-Americans. To get into the top schools, they need SAT scores that are about 140 points higher than those of their white peers. In 2008, over half of all applicants to Harvard with exceptionally high SAT scores were Asian, yet they made up only 17 percent of the entering class (now 20 percent). Asians are the fastest-growing racial group in America, but their proportion of Harvard undergraduates has been flat for two decades.


The most common defense of the status quo is that many Asian-American applicants do well on tests but lack intangible qualities like originality or leadership. As early as 1988, William R. Fitzsimmons, Harvard’s dean of admissions, said that they were “slightly less strong on extracurricular criteria.”

Even leaving aside the disturbing parallel with how Jews were characterized, there is little evidence that this is true. A new study of over 100,000 applicants to the University of California, Los Angeles, found no significant correlation between race and extracurricular achievements.

The truth is not that Asians have fewer distinguishing qualities than whites; it’s that — because of a longstanding depiction of Asians as featureless or even interchangeable — they are more likely to be perceived as lacking in individuality. (As one Harvard admissions officer noted on the file of an Asian-American applicant, “He’s quiet and, of course, wants to be a doctor.”)

That's Yascha Mounk, a Harvard professor, in a NYTimes op-ed. There is a particular sting in this discrimination coming from Harvard since, as any first-generation Asian American kid like me will tell you, Harvard remains, for our parents, Earth's academic holy grail.

Given how hard it is to distinguish one extremely qualified applicant from the next (regardless of what admissions committees might claim), it would be more fair to set some bar for qualifying to get into Harvard and then put all those who pass the bar into a lottery for admission.
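The bar-plus-lottery scheme is simple enough to sketch directly (the scores, bar, and seat count below are entirely made up for illustration):

```python
import random

# Sketch of a two-stage admission: a qualification bar, then a lottery
# among everyone who clears it. Applicant scores stand in for whatever
# composite measure an admissions office might use.
def lottery_admit(applicants: dict[str, float], bar: float,
                  seats: int, seed: int = 0) -> list[str]:
    qualified = [name for name, score in applicants.items() if score >= bar]
    rng = random.Random(seed)  # seeded here only for reproducibility
    return rng.sample(qualified, min(seats, len(qualified)))

pool = {"alice": 95.0, "bo": 88.0, "carmen": 91.0, "dev": 79.0}
print(lottery_admit(pool, bar=85.0, seats=2))
```

Everyone above the bar has an equal chance at a seat, which removes the pretense of fine-grained ranking among applicants who are, for practical purposes, indistinguishable.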