Meh

From Adam Gurri's The Arc of the Universe Bends Towards Meh:

I think that McCloskey’s Bourgeois Dignity does a good job demolishing the notion that our prosperity rests on the continued poverty of others, and it is not alone in that. But if there’s one solid thing I took away from The Great Stagnation, and especially the debates that it sparked, it is that it is incredibly hard to think about progress, decline, and well-being, period.
 
Cultural conservatives often claim we’re in decline from the point of view of deteriorating values. Roy Baumeister’s book covering his work on willpower makes the claim that the Victorians took self-control much more seriously than we do, and we are the worse for it. More extreme claims of moral decline are, of course, quite common. MacIntyre’s After Virtue tells you where he stands on the matter in his title. Mencius Moldbug, the pseudonymous prophet of the Internet neoreactionaries, resurrects Carlyle among others to make the claim that we have fallen into complete lawlessness and immorality. He thinks the Victorians eradicated violent crime, but liberal ideology led us to abandon the very values that made such eradication possible.
 
Consider Baltimore. For those looking through the lens of decline, the writing is on the wall—the barbarians have stormed the gates, they are on the inside, they’re just waiting for the right moment to deliver the final blow to our crumbling civilization. For those looking through the lens of progress, the riots in Baltimore are unfortunate but the peaceful protests, and increased media scrutiny of cop violence there, in New York, and in Ferguson, Missouri all hint at a possibility of important reform. A chance to take a next step in the journey that began with the abolition of slavery, continued through the civil rights movement, and continues to this day, if we who inherited it make ourselves good caretakers.
 

I still can't escape the feeling that to seem smart, it's better to have strong opinions, loosely held, with about 50% of them being contrarian. Whether you're right or not matters very little in too many situations.

Good luck being born tomorrow

97% of people born tomorrow will be in a country that is authoritarian, communist, doesn’t support same sex marriage, does not allow abortion, supports capital punishment or has seen over ten thousand deaths in recent armed conflicts. Good luck!
 

From part 1 of 2 of Good Luck Being Born Tomorrow. The statistics within are eye-opening.

The meat is in part 2.

300 million years ago, a supercontinent called Pangaea formed, then later broke apart into the continents we inhabit today. Modern technology has turned the world back into Pangaea – a world where everything is connected. You can have a live video call with someone across the world in seconds (you’re welcome!) or you can find yourself on the next continent in 10 hours if need be. Yet we’ve built these imaginary borders around us that limit human potential. These borders are a direct result of historic military conflict. And allowing your fate to be determined by things that took place before your birth feels like accepting defeat before you even get started.
 
This all brings me to the third option of what to do when the environment is not favorable – you can change it! As weird as it sounds, one of the means to cause change is actually also to migrate (for those who already have that freedom). As opposed to a slow democratic process of giving your marginal vote every four years in the hope of changing something you care about, you can vote with your feet already today. You have a choice between expressing your needs at a popularity contest twice a decade or putting constant pressure on places.
 
Not only will you find yourself in a place where your problem is already fixed (remember – that’s why you moved!), you’re also putting real budget pressure on the old place by taking your taxes elsewhere (will hurt every month). With enough people doing that, the competition for taxes forces incumbent states to fix their environments.
 
In positive political theory, this is described as the Tiebout hypothesis.
 

Living in the Silicon Valley media bubble, with its insatiable need to produce some minimum volume of news coverage every day, can lead to a surplus of technology naysaying in one's diet. I believe it's a useful corrective to have sites like Gawker and Valleywag and gadflies of that ilk point out when the emperor has no clothes, even if the level of trolling is on the high side.

Still, climb up to a higher vantage point and it's hard to disagree that technology is perhaps the greatest hope for lifting the standard of living for the greatest number of people in the world, whether directly or indirectly.

Yes, the tech industry has its problems, and it has its share of ridiculous douchebags, some of them with absurd amounts of wealth. And yes, perhaps some of our technology is too addictive, and maybe it is transforming some of us into intolerable social-network-preening narcissists.

Visit other parts of the world, though, and see what a life-changing event it is to get a cell phone and internet access. Observe people making a living selling goods on social networks, or watch people coordinate protests against authoritarian governments, and on and on. I'll continue to take the bad with the good for the net gain to society.

It’s not hard to imagine the invention of blockchain (the core of Bitcoin) having remarkable implications in the developing world through enabling micro-transactions and possibly helping eliminate corruption through blockchain based electronic voting. There’s stuff coming that we haven’t even thought of yet.
 
Technological innovation is finally making it possible to meet the assumptions of the Tiebout model (mobile consumers, complete information, abundant choices, telecommuting, etc.). Bringing transparency into the world of basic freedoms, taxes, government services, and public goods, and reducing the cost/pain associated with moving, will be the way to give us a future where every nation state will have to compete for every citizen. Can you imagine that world?
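
To make the quoted mechanism concrete, here is a toy foot-voting simulation; every jurisdiction, tax rate, and resident preference below is invented for illustration, and the real Tiebout model is considerably richer:

```python
# A toy sketch of the Tiebout dynamic: residents "vote with their feet"
# for the jurisdiction whose policy best fits their preferences, and tax
# revenue follows them. All numbers here are made up.
import random

random.seed(42)

# Each jurisdiction offers one policy value (say, quality of public
# services on a 0-1 scale) and charges a flat tax.
jurisdictions = {"A": {"policy": 0.2, "tax": 100},
                 "B": {"policy": 0.6, "tax": 120},
                 "C": {"policy": 0.9, "tax": 150}}

# Residents differ in how much policy they want; everyone starts in A.
residents = [{"preference": random.random(), "home": "A"} for _ in range(1000)]
MOVING_COST = 0.1  # the friction of relocating, in utility terms

def utility(resident, name):
    j = jurisdictions[name]
    fit = 1 - abs(resident["preference"] - j["policy"])    # policy match
    move = 0 if name == resident["home"] else MOVING_COST  # moving hurts
    return fit - j["tax"] / 1000 - move

# One round of foot-voting: each resident relocates to their best option.
for r in residents:
    r["home"] = max(jurisdictions, key=lambda name: utility(r, name))

# Revenue now reflects where people chose to live: the budget pressure
# the quoted passage describes.
for name, j in jurisdictions.items():
    population = sum(1 for r in residents if r["home"] == name)
    print(name, "population:", population, "revenue:", population * j["tax"])
```

Lower the moving cost in the sketch and more residents abandon a poor policy fit, which is exactly the lever the quote says technology is pulling.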

Competing against robots

Some scholars are trying to discern what kinds of learning have survived technological replacement better than others. Richard J. Murnane and Frank Levy in their book “The New Division of Labor” (Princeton, 2004) studied occupations that expanded during the information revolution of the recent past. They included jobs like service manager at an auto dealership, as opposed to jobs that have declined, like telephone operator.
 
The successful occupations, by this measure, shared certain characteristics: People who practiced them needed complex communication skills and expert knowledge. Such skills included an ability to convey “not just information but a particular interpretation of information.” They said that expert knowledge was broad, deep and practical, allowing the solution of “uncharted problems.”
 
These attributes may not be as beneficial in the future. But the study certainly suggests that a college education needs to be broad and general, and not defined primarily by the traditional structure of separate departments staffed by professors who want, most of all, to be at the forefront of their own narrow disciplines. But this old departmental structure is still fundamental at universities, and it is hard to change.
 

Full article here from Robert Shiller.

A few random thoughts. Disciplines that are purely about knowledge accumulation are risky if computers can accumulate the same knowledge in a fraction of the time. Lots of Ph.D.s seem unlikely to be economically worthwhile considering the cost of higher education.

Watch the virtual assistant on your phone. Siri or Google Now are good benchmarks for which skills are becoming obsolete and which are still of great value.

Most humans still prefer a bit of entropy and warmth from those they interact with, especially in the service sector, and indexing high on that still commands a premium.

Universal sign language

“Decide” is what is known as a telic verb—that is, it represents an action with a definite end. By contrast, atelic verbs such as “negotiate” or “think” denote actions of indefinite duration. The distinction is an important one for philosophers and linguists. The divide between event and process, between the actual and the potential, harks back to the kinesis and energeia of Aristotle’s metaphysics.
 
One question is whether the ability to distinguish them is hard-wired into the human brain. Academics such as Noam Chomsky, a linguist at the Massachusetts Institute of Technology, believe that humans are born with a linguistic framework onto which a mother tongue is built. Elizabeth Spelke, a psychologist up the road at Harvard, has gone further, arguing that humans inherently have a broader “core knowledge” made up of various cognitive and computational capabilities. 
 
...
 
In 2003 Ronnie Wilbur, of Purdue University, in Indiana, noticed that the signs for telic verbs in American Sign Language tended to employ sharp decelerations or changes in hand shape at some invisible boundary, while signs for atelic words often involved repetitive motions and an absence of such a boundary. Dr Wilbur believes that sign languages make grammatical that which is available from the physics and geometry of the world. “Those are your resources to make a language,” she says. As such, she went on to suggest that the pattern could probably be found in other sign languages as well.
 
Work by Brent Strickland, of the Jean Nicod Institute, in France, and his colleagues, just published in the Proceedings of the National Academy of Sciences, now suggests that it is. Dr Strickland has gone some way to showing that signs arise from a kind of universal visual grammar that signers are working to.
 

Fascinating. Humans associate language with intelligence to such a strong degree that I predict the critical moment in animal rights will come when a chimp or other primate takes the stand in an animal testing court case and uses sign language to give testimony on their own behalf.

Reading the test methodology employed in the piece, I wonder if any designers out there have done any similar studies with gestures or icons. I'm not arguing a Chomskyist position here; I doubt humans are born with some basic touchscreen gestures or base icon key in their brain's config file. This is more about second-order or learned intuition.

Or perhaps we'll achieve great voice or 3D gesture interfaces (e.g. Microsoft Kinect) before we ever settle on any standards around gestures on flat touchscreens. If you believe, like Chomsky, that humans have some language skills (both verbal and gestural) hard-wired in the brain at birth, the most human (humane? humanist?) of interfaces would be one that doesn't involve any abstractions on touchscreens but instead relies on the software we're born with.

Data mining algorithms in plain English

Maybe not interesting if you're a data mining guru, but this explanation of the top 10 most influential data mining algorithms in plain English is a good read for the rest of us, though “plain English” is perhaps debatable.

Here's a good one, on k-means:

You might be wondering:
 
Given this set of vectors, how do we cluster together patients that have similar age, pulse, blood pressure, etc?
 
Want to know the best part?
 
You tell k-means how many clusters you want. K-means takes care of the rest.
 
How does k-means take care of the rest? k-means has lots of variations to optimize for certain types of data.
 
At a high level, they all do something like this:
  1. k-means picks points in multi-dimensional space to represent each of the k clusters. These are called centroids.
  2. Every patient will be closest to 1 of these k centroids. They hopefully won’t all be closest to the same one, so they’ll form a cluster around their nearest centroid.
  3. What we have are k clusters, and each patient is now a member of a cluster.
  4. k-means then finds the center for each of the k clusters based on its cluster members (yep, using the patient vectors!).
  5. This center becomes the new centroid for the cluster.
  6. Since the centroid is in a different place now, patients might now be closer to other centroids. In other words, they may change cluster membership.
  7. Steps 2-6 are repeated until the centroids no longer change, and the cluster memberships stabilize. This is called convergence.
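
For the curious, here's that loop as a minimal Python sketch; the patient vectors and the choice of k below are invented for illustration, and real implementations add smarter initialization (e.g. k-means++) and tie-breaking:

```python
# A bare-bones k-means, following the seven steps quoted above.
import numpy as np

def kmeans(points, k, max_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: pick k of the points as the initial centroids.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iters):
        # Steps 2-3: assign every point to its nearest centroid.
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Steps 4-5: the mean of each cluster's members becomes its new centroid.
        new_centroids = np.array([points[labels == i].mean(axis=0)
                                  if np.any(labels == i) else centroids[i]
                                  for i in range(k)])
        # Steps 6-7: repeat until the centroids stop moving (convergence).
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Hypothetical patient vectors: [age, pulse, systolic blood pressure].
patients = np.array([[25.0, 72, 118], [31, 75, 121], [64, 80, 140],
                     [58, 83, 145], [45, 70, 130], [29, 68, 117]])
labels, centroids = kmeans(patients, k=2)
print(labels)     # which cluster each patient ended up in
print(centroids)  # the final cluster centers
```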
     

This seems like a great idea for a book: the central data algorithms of the third industrial revolution, this networked, online age. One chapter per algorithm, with a discussion of how it manifests itself on the key websites, applications, hardware, and other services we use all the time now. If you are a data mining expert in need of someone to be the “plain English” side of a writing team, call me maybe.