Human and computer curation in the age of information abundance

I don't usually talk shop here, but I wanted to spend a few words discussing some of the key strengths of the new version of Flipboard we just shipped, v3.0 on the odometer. Though I've been at the company just over a year and a half, Flipboard has been around since just after the first iPad shipped. In many ways, it's the first iPad app I can remember because it was the first truly native iPad app, the first that didn't seem like a port of an iPhone app. It launched at first just for the iPad, and it wasn't until later that a version of Flipboard for the phone shipped.

Because of its long history, many of my readers may have played with Flipboard at some point in the past and forgotten about it. Many of my readers or Twitter followers are what I consider to be among the top 1% of voracious information consumers, and for them the weapon of choice is high-density information presentation: typically an RSS reader with hundreds of feeds, or Twitter, with its never-ending supply of links and headlines and thoughts. I can understand that, and I use both as well.

But both of those options carry a certain inherent level of noise because of how they work. On Twitter it's because the people you follow have lots of interests, some of which match yours, some of which don't. Sometimes they point you at things outside your sphere of interests, but once you're following several hundred people, as I am, the noise can pile up.

The RSS feed junkie is sort of the predecessor to the Twitter information junkie, and I know people who subscribe to literally hundreds of RSS feeds in a feed reader and try to keep up with the flood of headlines each day. The problem is the same there; it's almost inevitable that a list of that many information sources will inject a lot of perceived junk into your mental diet.

Though it doesn't feel like it to the top 1% of information addicts, cultivating a list of hundreds of people to follow or hundreds of RSS feeds is exhausting and tedious for the 99%. Keeping up is its own burden; I long ago gave up trying to read every tweet in my Timeline, or every headline from every RSS feed I follow.

At the same time, FOMO is a very real phenomenon not just in the real world but in the information space, and it's a natural outgrowth of the internet, the most efficient delivery mechanism of data that has ever been invented in the history of mankind. Effectively, there is a near infinite amount of information out there, and more is being generated each day than one person could read in a lifetime. Supply is no longer the problem; oversupply is.

Given that we all have a finite amount of attention to give each day, how do we allocate it most efficiently? Flipboard 3.0 is a different strategy for answering that question, different not only from the other apps and services outlined above but even from past versions of itself. For that reason, it has something to offer both the power information consumer and the casual information grazer.

The new Flipboard, designed specifically for phones and built in part on the great technology acquired with our purchase of Zite, centers on a person's interests. The combination of any person's interests forms a sort of intellectual fingerprint, and that is the North Star for personalization in the app. You might find it challenging to curate a list of all the experts in a field across their blogs, websites, research papers, Twitter and Facebook accounts, etc. Multiply that across all of your interests, and the task of following the right sources and people grows by leaps and bounds. I follow nearly 900 people on Twitter and I'm still adding and deleting people all the time.

The new Flipboard simply asks for your interests, a very finite and manageable list, and then uses that to find the best articles for you on those topics, wherever those might reside, and regardless of whether you know or follow the authors or blogs. We offer over 30,000 such topics to choose from, and if you have no idea whether that's a good number, I can assure you it gets deep into the long tail. And it's growing all the time; when we started work on this release, there was no topic in our database called “Peter Piot”. We hadn't seen enough recent articles of note about Piot. And then a two-year-old boy died in West Africa, and the world changed. Peter Piot helped discover Ebola. New topics arise every day; a few weeks ago, we added “Apple Pay”, and soon we may have to add CurrentC, though maybe that will disappear before it has a chance to become a thing.

It's in the discovery of content on these tens of thousands of topics where computers and algorithms have a huge advantage over humans. You might think of Flipboard as an intelligent agent, an AI that can read and screen millions of new articles written each day, something no human can come close to doing. This intelligent agent knows what you're interested in and understands when it finds an article on that topic, and it can do that for millions of users on tens of thousands of topics across millions of articles every day.

However, to say humans have been rendered obsolete in yet another field couldn't be further from the truth. It's of little use if your digital reader brings back every article on every topic you care about. That's still an information deluge. Remember, the goal here is efficient allocation of your limited attention. Your intelligent agent also needs to separate the signal from the noise.

This is where humans remain in play. It turns out computers are good at determining the topic of an article but not its quality. To do that, we turn to the aggregate reading and curation activity of millions of readers. The advantage of some scale in this space is that there are enough eyeballs staring at enough items of interest that you need only follow the collective gaze to find what's interesting. This is especially true with content that isn't based on text, like photographs, where computer vision is still fairly crude compared to human vision when it comes to understanding both subject matter and aesthetic value. Ask any social service of scale if they'd trust a computer to moderate content for pornography and other disturbing imagery; the well over 100,000 workers worldwide filtering the filthy and the horrific from our social streams are proof they don't.
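To make that division of labor concrete, here's a minimal sketch of the general idea, not Flipboard's actual ranking code: an algorithm filters candidate articles by topic confidence, and aggregate human signals (reads, likes, flips) decide which of the on-topic articles deserve your attention. Every field name, weight, and threshold below is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    topic_score: float  # algorithm's confidence the article is on-topic (0..1)
    reads: int          # aggregate human activity
    likes: int
    flips: int

def crowd_quality(a: Article) -> float:
    """A crude proxy for 'following the collective gaze': engagement per read."""
    return 0.0 if a.reads == 0 else (a.likes + 2 * a.flips) / a.reads

def rank(candidates: list[Article], min_topic_score: float = 0.6) -> list[Article]:
    """Computers decide what an article is about; humans, in aggregate, decide whether it's any good."""
    on_topic = [a for a in candidates if a.topic_score >= min_topic_score]
    return sorted(on_topic, key=lambda a: a.topic_score * crowd_quality(a), reverse=True)
```

The real systems involve far more signals and far more math, but the shape is the same: machine classification to narrow millions of articles down to the on-topic ones, human behavior to surface the handful worth reading.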

[By the way, even with textual content, computers aren't perfect. A topic like “magic” is a good example. Computers will bring back content on the Orlando Magic, card tricks, and articles with headlines referring to Roger Federer as a magician with a tennis racket.]

Just as the best chess players in the world are neither grandmasters nor top computer programs but some combination of a human player with a chess program, the best curation is still some combination of human judgment and a variety of computer algorithms. Someday perhaps there will be some digital general intelligence of such power that human taste is superfluous, but today is not that day.

Even all of this is not enough, though. It turns out that if you give every person exactly all these interesting articles on all the topics they tell you they want to see, they get bored. People are always chasing the unexpected delight; it gives them a mental rush to stumble across a fascinating article they wouldn't have expected us to realize they'd like. This is the serendipity that is habit-forming.

Some argue that the serendipity can consist of just noise. That presumes people enjoy sifting through the irrelevant to find the gold dust. I think some people enjoy the hunt, but many are not so persistent, so our goal is for even the serendipity in Flipboard to be of interest. There are many approaches to achieving this, and I won't delve into the technology behind it all; it's a lot of math. All that matters for readers is that it works, because when it does it feels like magic.

One last thing: a person's interests evolve over time. You may choose a bunch of topics in your initial setup, but even if you don't alter that list again, it's possible to tell, based on your reading behavior, when your tastes, or what we call your affinities, change. The more you read and like and curate on Flipboard, the more your Flipboard will start to feel like a pair of raw denim jeans that break in and mold to your legs, or like a sportcoat or dress that's being tailored for you a bit each day.
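If you're curious what affinities that change with behavior might look like in the simplest possible terms, here's an illustrative sketch with made-up decay rates and weights, not anything from our production system: each topic carries a score that decays a little every day and gets bumped whenever you read, like, or flip an article on that topic.

```python
DECAY_PER_DAY = 0.97                                      # hypothetical decay rate
SIGNAL_WEIGHT = {"read": 1.0, "like": 3.0, "flip": 5.0}   # hypothetical weights

def decay(affinities: dict[str, float], days: int = 1) -> None:
    """Interests you stop reading about slowly fade."""
    for topic in affinities:
        affinities[topic] *= DECAY_PER_DAY ** days

def record(affinities: dict[str, float], topic: str, signal: str) -> None:
    """Each read, like, or flip nudges that topic's affinity upward."""
    affinities[topic] = affinities.get(topic, 0.0) + SIGNAL_WEIGHT[signal]
```

Run something like this over months of activity and the top of the affinity list drifts toward where your attention actually goes, not just what you picked during setup; that's the broken-in denim effect.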

My recommendation both for folks who may have used Flipboard before but haven't used it in a while and for loyalists: update to the new app, version 3.0 for those who still care about such designations, follow a bunch of topics, and then follow even more; the more specific, the better. At last count, I was following 90 topics on Flipboard, and that number has been growing by a few each week I've been playing with and testing the app.

Here are just a few topics I'm following to give you a sense of the breadth of our database:

  • Joe Maddon - soon to be announced as the new Cubs manager, I hope.
  • Nikon D750 - I haven't upgraded my Nikon digital SLR for several generations, but I changed my mind for this baby. Finally, Nikon adds wi-fi so I can get a photo from my SLR to my phone quickly. Why they didn't add this years ago still boggles my mind.
  • Peter Thiel and Marc Andreessen - interesting thinkers in the technology world
  • Paul Thomas Anderson
  • Driverless Cars
  • GIF Animations - we should probably just rename this to Animated GIFs, but maybe this is how they're referred to in high company.
  • Augmented Reality
  • E-learning
  • David Foster Wallace
  • Information Design
  • Cinematography
  • Roger Deakins - speaking of cinematography... Deakins should've won an Oscar in 2008 for The Assassination of Jesse James by the Coward Robert Ford. He got a double nomination in that category that year; the other was for No Country for Old Men.
  • Neuroscience
  • Cycling
  • Sports Analytics
  • Technological Singularity
  • Optical Illusions
  • Memes
  • Sneakers
  • Tesla
  • Elon Musk - the entrepreneur whose courage and bravado are the envy of the Valley.
  • Chicago Cubs - finally, a light at the end of the tunnel. Fellow Cubs fans, it's about to get good.
  • Interior Design - I finally bought a condo in June, and I'm still in the midst of renovation and decoration hell. Someone tell me it gets better.

One thing I've often noticed in trying lots of services like Nuzzel and Quibb on top of Twitter and RSS readers is seeing the same set of articles referred to me several times. I'll see an article first appear on Twitter, then a short while later on services like Nuzzel or Quibb, then a day later in emails of top tweets from Twitter, and even later on some blogs I follow.

What was always refreshing about throwing Zite and now Flipboard into the mix was the sense of discovering things I hadn't already seen many times. It has taken a lot of tuning and testing to dial that in, and it's an ongoing project, but when it works it feels like magic, and it gives me the feeling of being ahead of the herd in finding things to link to on my blog or to post on Twitter.

Okay, I've stepped off my work soapbox. I'm biased, of course, so don't take my word for it, listen to Farhad Manjoo at the NYTimes. Or if you don't want to listen to Farhad, take the word of Jennifer Garner, who revealed her favorite app is Zite, whose core technology now powers key pieces of the new Flipboard.

While Garner may not have a big social media presence, she did reveal her favorite app. "Zite," she said. "It's like a magazine and it sends you all your favorite things. So mine are West Virginia, world news, kids, parenting, relationships, healthy living, food and cooking…It just curates exactly what I want to read."

She is definitely way more beautiful than I am (I'm not sure about Farhad, who I've never met), and with kids to raise and a Hollywood career, she's also busier and has a higher opportunity cost for her time, so if we can earn her trust in allocating her minimal free time, maybe there's something there.

Moravec's Paradox and self-driving cars

Moravec's Paradox:

...the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."

Linguist and cognitive scientist Steven Pinker considers this the most significant discovery uncovered by AI researchers. In his book The Language Instinct, he writes:

The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.

I thought of Moravec's Paradox when reading two recent articles about Google's self-driving cars, both by Lee Gomes.

The first:

Among other unsolved problems, Google has yet to drive in snow, and Urmson says safety concerns preclude testing during heavy rains. Nor has it tackled big, open parking lots or multilevel garages. The car’s video cameras detect the color of a traffic light; Urmson said his team is still working to prevent them from being blinded when the sun is directly behind a light. Despite progress handling road crews, “I could construct a construction zone that could befuddle the car,” Urmson says.

Pedestrians are detected simply as moving, column-shaped blurs of pixels—meaning, Urmson agrees, that the car wouldn’t be able to spot a police officer at the side of the road frantically waving for traffic to stop.

The car’s sensors can’t tell if a road obstacle is a rock or a crumpled piece of paper, so the car will try to drive around either. Urmson also says the car can’t detect potholes or spot an uncovered manhole if it isn’t coned off.

There's more within on some of the engineering challenges still unsolved.

In his second piece, at Slate, Gomes notes something people often misunderstand about how self-driving cars work (emphasis mine):

...the Google car was able to do so much more than its predecessors in large part because the company had the resources to do something no other robotic car research project ever could: develop an ingenious but extremely expensive mapping system. These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway.

That might not seem like such a tough job for the company that gave us Google Earth and Google Maps. But the maps necessary for the Google car are an order of magnitude more complicated. In fact, when I first wrote about the car for MIT Technology Review, Google admitted to me that the process it currently uses to make the maps are too inefficient to work in the country as a whole.

To create them, a dedicated vehicle outfitted with a bank of sensors first makes repeated passes scanning the roadway to be mapped. The data is then downloaded, with every square foot of the landscape pored over by both humans and computers to make sure that all-important real-world objects have been captured. This complete map gets loaded into the car's memory before a journey, and because it knows from the map about the location of many stationary objects, its computer—essentially a generic PC running Ubuntu Linux—can devote more of its energies to tracking moving objects, like other cars.

But the maps have problems, starting with the fact that the car can’t travel a single inch without one. Since maps are one of the engineering foundations of the Google car, before the company's vision for ubiquitous self-driving cars can be realized, all 4 million miles of U.S. public roads will need to be mapped, plus driveways, off-road trails, and everywhere else you'd ever want to take the car. So far, only a few thousand miles of road have gotten the treatment, most of them around the company's headquarters in Mountain View, California. The company frequently says that its car has driven more than 700,000 miles safely, but those are the same few thousand mapped miles, driven over and over again.

The common conception of self-driving cars is that they drive somewhat like humans do. That is, they look around at the road and make decisions on what their camera eyes “see.” I didn't realize that the cars were depending so heavily on pre-loaded maps.

I'm still excited about self-driving technology, but my expectations for the near to medium term have become much more modest. I once imagined I could just sit in the backseat of a car and have it drive me to any destination while I futzed around on my phone. It seems clear now that for the foreseeable future, someone always needs to be in the driver's seat, ready to take over at a moment's notice, and the millions of hours of additional leisure time that might be returned to society are not going to materialize this way. We so often throw around self-driving cars as if they're an inevitability, but it may be that the last few problems to solve in that space are the most difficult ones to surmount. Thus Moravec's Paradox: it's “difficult or impossible to give [computers] the skills of a one-year-old when it comes to perception and mobility.”

What about an alternative approach, one that pairs humans with computers, a formula for many of the best solutions at this stage in history? What if the car could be driven by a remote driver, almost like a drone?

We live in a country where the government assumes anyone past the age of 16 who passes a driver test can drive for life. Having nearly been run over by many an elderly driver in Palo Alto during my commute to work, I'm not sure that's such a sound assumption. Still, if you are sitting in a car and don't have to drive it yourself, and as long as you get where you need to go, whether a computer does the driving or a remote pilot handles the wheel, you still get that time back.

How Apple Pay innovates on top of the US payments stack

One of the difficulties of building innovative payment services in the US is that we're a nation that loves our credit cards. We may totally overvalue the random rewards and points and other benefits the credit card companies give us, but since the merchants often cover a lot of the cost of those benefits, we'll take all the rewards we can get, as financially irrational as that may be.

[Aside: Some cards have annual fees, and the credit card users that carry a balance each month subsidize other users who pay off their balance in full each month, but many credit card rewards would be acquired more cheaply just by paying for them directly. A free lunch remains a rare thing.]

This presents a cost problem for payment startups: if you build a payment service that leverages your users' credit cards, you have to pay companies like Mastercard, Visa, American Express, and so on their fees. But if you charge merchants an additional markup on top of this fee when one of your users pays with your service, merchants have very little incentive to accept your payment method.

You could just pass through the fee, but then you have to make your profit elsewhere. Or you could be bold and charge less than the credit card fees, but then your entire business is a loss leader; the more you sell, the more you lose. Ask Square how that model has worked out from a cash flow perspective. PayPal was able to reverse the bleeding by making it near impossible for its users to pay with a credit card instead of with an eCheck or any PayPal balance they might be carrying. I know this because I once tried to switch to paying with a credit card when logged into PayPal and ended up in a strange endless loop where I kept adding a credit card to use and then not being able to find it. I tried several more times in a row until I decided to just take a rock and hit myself in the head repeatedly because it was less painful and frustrating.

An eCheck costs PayPal some negligible amount, something like $0.01 if I recall correctly from my time at Amazon, and using the PayPal balance costs PayPal nothing, of course. That means whatever PayPal charges the merchant on that transaction becomes profit. It's not easy to get all the way there, though. You have to encourage enough usage that users want to give you their checking account information so they can withdraw any balances they might have. Once you have that, you can enable money to flow the other direction, as an eCheck, too.
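To put rough numbers on why the funding source matters so much, here's a toy calculation; the rates are illustrative assumptions, not PayPal's actual fee schedule. Assume the service charges the merchant a flat 2.9% and the buyer funds a $100 purchase three different ways.

```python
MERCHANT_FEE_RATE = 0.029  # what the service charges the merchant (assumed)

def funding_cost(amount: float, source: str) -> float:
    """What it costs the service to move the buyer's money (assumed rates)."""
    if source == "credit_card":
        return amount * 0.02 + 0.20  # card network and issuer fees (rough assumption)
    if source == "echeck":
        return 0.01                  # an ACH transfer is nearly free
    if source == "balance":
        return 0.0                   # money already sitting inside the system
    raise ValueError(f"unknown funding source: {source}")

def margin(amount: float, source: str) -> float:
    """Merchant revenue minus the cost of the buyer's chosen funding source."""
    return amount * MERCHANT_FEE_RATE - funding_cost(amount, source)

for source in ("credit_card", "echeck", "balance"):
    print(f"{source:>11}: ${margin(100.0, source):.2f} kept on a $100 sale")
```

Under those assumptions the service keeps roughly $0.70 when the buyer pays with a credit card, $2.89 with an eCheck, and the full $2.90 from a stored balance, which is exactly why PayPal worked so hard to steer me away from my credit card.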

Payments, as a multi-sided market, will always present entrepreneurs with this chicken-and-egg conundrum. To get merchant adoption, you need a huge number of consumers carrying your payment method, but to get consumers to want to use your payment method over their beloved credit cards, you need a ton of merchants to accept it.

Which brings me to Apple Pay, which launched today. I didn't realize today was the public launch until I was at Whole Foods buying breakfast this morning and saw this at the checkout counter on the payment terminal:

I hadn't downloaded the iOS 8.1 update yet or added any credit cards to Apple Pay on my phone, so I didn't use it just then, but I went back later after I had added a credit card and tried it out, and it was painless. Held my phone up to the terminal, Apple Pay popped up on my screen and asked me to verify with Touch ID, and that was it.

There are many reasons to think Apple Pay might succeed where so many other alternative payment methods have failed.

First, Apple is building off of the existing credit card system rather than fighting customer inertia. As noted above, so many US consumers love their credit cards and the rewards they get from them. Apple Pay doesn't ask them to give that up.

Another thing about credit cards: most consumers find them easy to use, not much of a hassle to bring out and swipe. To surpass them, an alternative has to be as easy or easier to use for the consumer, or much cheaper for the merchant. Apple Pay fulfills the first of those requirements once you've added your cards to Apple Pay on your phone, something that can be done as easily as snapping a photo of the credit card. Touch ID has finally reached an acceptable level of reliability, so the overall user experience is simple and solid.

Perhaps most importantly, Apple Pay starts on day one with a good whack at both the chicken and the egg. On the payments side, Apple managed to corral an impressive number of partners, from the credit card networks (the big three of Visa, Mastercard, and American Express) to banks (all the biggest, like Bank of America, Chase, Citi, and Wells Fargo). On the merchant front, Apple Pay's marketing page claims over 220,000 stores, which actually makes this the weakest link of the three groups but still a decent starting point for day one. That will be the hardest nut to crack, but more on this point later.

On the consumer side, whether that's the chicken or the egg, Apple has a massive and growing installed base of iOS and iPhone users who can use this system. The only other technology company I can think of that could tackle this space is Amazon, given their huge database of consumer credit cards, but they lack the device or app installed base to deliver a good user experience to enough users.

Apple Pay offers some additional benefits, like added privacy, something Apple has been touting across the board as one of their key consumer benefits, though I'm still not sold that it's a huge selling point for the average consumer. Still, it's worth noting that Apple doesn't keep records of your transactions, and you don't have to hand over your credit card to some waiter or clerk who then has access to the card number, expiration date, and security code. Again, I think this is of minor psychological comfort for the vast majority of consumers, but it's at least not a negative.

But more than anything, my excitement for Apple Pay stemmed from this post by Uber:

The beauty of Apple Pay is that it simplifies Uber’s signup process to a single tap. If you have an eligible credit card already added to Apple Pay, you don’t need to enter it again to ride with Uber. Instead, merely place your finger on the Touch ID sensor of your iPhone 6 or iPhone 6 Plus, and your Uber is on its way. No forms, no fuss. We’re calling this new Uber feature Ride Now, and it’s the product of a close collaboration between Uber and Apple over the past few months.

How it works:

  1. Open the Uber app and tap Ride Now.
  2. Tap Set Pickup Location, enter your destination, and confirm your request.
  3. Place your finger on Touch ID to confirm payment, and your Uber is en route!

The rest of the Uber experience remains exactly the same. A receipt for your ride, with the fare breakdown and trip route, is sent to the email you have for Apple Pay; riders rate their drivers at the end of each trip. Existing Uber users are unaffected and can continue using Uber as before.

Innovation in payments in the US has been difficult because of the entrenched incumbent stack, but Apple has just moved up the stack and innovated above it all. What they've done here is abstracted away the credit card number entirely. Extrapolate out into the rosiest future, and perhaps someday the only time you might have to remember your credit card number is when you get it in the mail and input it into Apple Pay. Who really cares what the number, expiration date, and security code are. If you can prove you are who you say you are with Touch ID, that's a more efficient way to prove your identity and grant authorization for the payment.

But we're a long ways away from that day, and for now, 220,000 stores is actually not close to a majority of merchants. The Uber example, though, demonstrates the near-term potential.

I long ago memorized all my credit card numbers and security codes just so I wouldn't have to deal with the hassle of pulling the cards out every time I had to punch the number in for an online transaction. Still, it's a hassle, especially on my phone, to have to either enter my credit card details or to go round trip to 1Password to remember my crazy long, random, difficult-to-memorize iTunes password.

Apple Pay reduces that pain by a lot. A whole lot.

Even if its near-term impact is restricted to making payments in apps on my phone, that's a big deal, and perhaps sufficient incentive for me to actually upgrade to one of the new iPads with Touch ID.

When Amazon first received its 1-click patent, one of the first and only companies to license the patent was Apple. It paid off for both sides. For Amazon, Apple's license strengthened the patent, allowing them to enforce it against companies like Barnes & Noble (for the record, I don't believe in software patents like these, but that's a topic for another day). For Apple, the 1-click license allowed them to enable users to purchase songs off of iTunes with 1-click, one part of a superior experience that spanned iPods and the iTunes music store and catapulted them to the digital music throne. Can you imagine how painful it would have been to go through multiple steps to purchase each single?

What Apple has done with Apple Pay is extend 1-click purchasing to the mobile app world and many real world stores as well. Or maybe we should call it 1-touch purchasing.

Years from now, when we look back on Apple's September 2014 keynote, I suspect Apple Pay will be the most important announcement by a wide margin.

The last word

Interview with the great NYTimes obituary writer Margalit Fox:

Is it hard for you to navigate sources that are so regularly in a fragile state?

It behooves you, in purely human terms, to treat them as kindly as possible. That said, you don’t want to lull them into the sense that you’re a friend, an advocate, or some sort of a grief counselor. You do have people break down crying on the phone and you just wait patiently for them to regain composure.

In what ways do families try to control the narrative?

Families will say, Oh, be sure to put in that he died surrounded by his loved ones, or, Make sure you add that she touched the lives of everyone she knew. Those are things I never want to put in because they’re these Victorian clichés, but also because the obituary as a form has moved beyond protecting the family’s narrative.

How else has the form changed?

Well, for one thing, they’re a lot more fun to read. They used to be very formulaic.

Since they were considered boring, editors used to assign journalists obits as punishment. You knew someone was in trouble if they were chasing down obituaries.

That started to change with the great Alden Whitman, Mr. Bad News. He was famous for his advances. He’d do all this research and sit down with his subjects and they’d give him these very revealing interviews because they knew nothing would come out till they were gone. Douglas Martin, a colleague of mine who has been on obits longer than I have, started writing them in this charming, lively way, which has influenced my own style.

Our last few obits editors at the Times encouraged, where appropriate, a lighter, more features-style treatment. If you get one of these wonderful characters who took a different route to work one day in 1947 and invented something that changed the world, or one of these marvelous English eccentrics, there’s so much space to play. In the course of an obit, you’re charged with taking your subject from the cradle to the grave, which gives you a natural narrative arc.

Also, this:

I have maybe one suicide a year and they all seem to be poets. If I were an insurance company, I’d never write a policy for poets.

I have many favorite Fox obituaries. Here's just the opening paragraph of one example of her mastery of the form:

Helen Gurley Brown, Who Gave ‘Single Girl’ a Life in Full, Dies at 90

Helen Gurley Brown, who as the author of “Sex and the Single Girl” shocked early-1960s America with the news that unmarried women not only had sex but thoroughly enjoyed it — and who as the editor of Cosmopolitan magazine spent the next three decades telling those women precisely how to enjoy it even more — died on Monday in Manhattan. She was 90, though parts of her were considerably younger.

My first few years at Amazon, before the human editorial department was cut back, the books, music, and video editorial teams had what was called the Ghoul Pool, a list of famous authors, musicians, and movie professionals who were most likely to die; the editors split up the duties of pre-writing obits and curating lists of the most famous work for everyone on the list. 

We're probably just a decade or two away from a surge in the number of public social media accounts that will suddenly just end when their owners pass away. What procedures or etiquette will arise for handling those accounts? Will they be turned off after some period of inactivity, or will they just live on in perpetuity until the services themselves expire, voices gone silent?

We remember many people in history through their collected letters and the like, but with each generation we will have artifacts of ever greater resolution on the deceased. Thousands of tweets and status updates; pictures of all the bowls of ramen they ate across decades of life, helpfully and artificially aged with digital filters; videos of random moments that fell in between other moments of life. The sound of their voice. The number of steps they walked each day of their lives. Check-ins at all the bars where they ended many a night, looking for love, usually finding the bottom of an empty glass instead.

I'd like to think that even if I were no longer alive to view the ads these technology services would try to show me that they'd keep my content up, as a digital archive of my life, warts and all, for my relatives to peruse. Maybe people will have to specify so in their wills, and maybe a large business will be built on preserving these digital footprints, an Internet archive of the deceased, like some digital cemetery.

When I die, I hope to leave behind a witty Gmail “permanently on vacation” message that leaves people with one last chuckle.

Network transportation costs

Metcalfe's Law states that the value of a network is proportional to the square of the number of its nodes. In an area where good soils, mines, and forests are randomly distributed, the number of nodes valuable to an industrial economy is proportional to the area encompassed. The number of such nodes that can be economically accessed is proportional to the inverse square of the cost per mile of transportation. Combine this with Metcalfe's Law and we reach a dramatic but solid mathematical conclusion: the potential value of a land transportation network is proportional to the inverse fourth power of the cost of that transportation. A reduction in transportation costs in a trade network by a factor of two increases the potential value of that network by a factor of sixteen. While a power of exactly 4.0 will usually be too high, due to redundancies, this does show how the cost of transportation can have a radical nonlinear impact on the value of the trade networks it enables. This formalizes Adam Smith's observations: the division of labor (and thus value of an economy) increases with the extent of the market, and the extent of the market is heavily influenced by transportation costs (as he extensively discussed in his Wealth of Nations).

I learned that and more from this post on how critical horses were to the industrial revolution. Because Europe had horses to move natural resources while China relied on human porters, the 1800s saw Europe surge past China. Later, non-European countries like Japan just skipped the horses and went straight to steam engines to play another round of leapfrog.
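The chain of proportionalities in that passage is easy to verify with a toy calculation, nothing more than the arithmetic already stated above: network value scales with the square of the number of nodes, reachable nodes scale with reachable area, and reachable area scales with the inverse square of transportation cost per mile.

```python
def relative_network_value(cost_per_mile: float, baseline_cost: float = 1.0) -> float:
    """Potential value of a land trade network relative to a baseline transport cost.

    Chain of proportionalities: value ~ nodes^2 (Metcalfe), nodes ~ area,
    area ~ 1/cost^2, hence value ~ 1/cost^4. Treat the exponent of 4 as an
    upper bound; redundancy in real networks pulls it lower.
    """
    reachable_area = (baseline_cost / cost_per_mile) ** 2  # inverse square of cost
    nodes = reachable_area                                 # nodes spread evenly over the area
    return nodes ** 2                                      # Metcalfe's Law

print(relative_network_value(0.5))  # halve transport costs -> 16.0x potential value
print(relative_network_value(0.1))  # cut costs by 10x -> 10,000x potential value
```

Halving costs yields 2^4 = 16 times the potential value, which is the factor-of-sixteen claim in the quoted passage.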

We continue to see leapfrogging all over the world with a variety of technologies, like cellular (skipping landlines) and near-field payments (hopping past credit cards). To take a more recent example, it would not surprise me if we first saw widespread deployment of drone delivery in countries other than the U.S., since here both heavy regulation and solid delivery alternatives already exist. It's not surprising to hear that Amazon is looking to test drone delivery in India first.

Douchebag, the white racial slur

Entertaining exploration of the word douchebag by Michael Cohen. First, he dissects white privilege:

White privilege is the right of whites, and only whites, to be judged as individuals, to be treated as a unique self, possessed of all the rights and protections of citizenship. I am not a race, I am the unmarked subject. I am simply man, whereas you might be a black man, an asian woman, a disabled native man, a homosexual latina woman, and on and on the qualifiers of identification go. With each keyword added, so too does the burden of representation grow.

Sometimes the burden of representation is proudly shouldered, even celebrated. But more often this burden of representation becomes a dangerous, racist weight, crushing and unbearable. Michael Brown was killed in part because of this burden (the stereotype of black male criminality), and his body continues to carry this weight as the protests mount (the martyred symbol that black lives matter).

But white men are just people. Basic Humanity. We carry the absent mark which grants us the invisible power of white privilege. Everyone else gets discrimination.

Then he drills in on the very specific definition of the term:

If we think of the douchebag as a social identity as much as an accusation, as a subject with a distinctive persona locatable within the categories of race, class, gender and sexuality, then we find that the term carries a remarkably precise definition.

The douchebag is someone — overwhelmingly white, rich, heterosexual males — who insist upon, nay, demand their white male privilege in every possible set and setting.

The douchebag is equally douchy (that’s the adjectival version of the term) in public as in private. He is a douchebag waiting in line for coffee as well as in the bedroom. This definition marks him, like the atavistic, dusty rubber douchebags of our grandmothers’ generation, as a useless, sexist tool. Armed with this refined definition, I believe the term “douchebag” is the white racial slur we have all been waiting for. We have only to realize this. White privilege itself has blinded us to the true nature of the douchebag’s identity. But it’s been there all along.

It's most certainly a term I hear used more and more these days, though I'm not sure what precipitated its ascent. As Cohen notes, it was once a medical term, but at some point it was appropriated to mean, according to the first definition in its Urban Dictionary entry, “Someone who has surpassed the levels of jerk and asshole, however not yet reached fucker or motherfucker. Not to be confuzed [sic] with douche.”

I ran a Google Ngram on the term from 1880 to 2008, and you can see its popularity take flight in the aughts.