Moravec's Paradox and self-driving cars

Moravec's Paradox:

...the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky, and others in the 1980s. As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."[1]

Linguist and cognitive scientist Steven Pinker considers this the most significant discovery uncovered by AI researchers. In his book The Language Instinct, he writes:

The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.

I thought of Moravec's Paradox when reading two recent articles about Google's self-driving cars, both by Lee Gomes.

The first:

Among other unsolved problems, Google has yet to drive in snow, and Urmson says safety concerns preclude testing during heavy rains. Nor has it tackled big, open parking lots or multilevel garages. The car’s video cameras detect the color of a traffic light; Urmson said his team is still working to prevent them from being blinded when the sun is directly behind a light. Despite progress handling road crews, “I could construct a construction zone that could befuddle the car,” Urmson says.

Pedestrians are detected simply as moving, column-shaped blurs of pixels—meaning, Urmson agrees, that the car wouldn’t be able to spot a police officer at the side of the road frantically waving for traffic to stop.

The car’s sensors can’t tell if a road obstacle is a rock or a crumpled piece of paper, so the car will try to drive around either. Urmson also says the car can’t detect potholes or spot an uncovered manhole if it isn’t coned off.

More in the piece on other engineering challenges still unsolved.

In his second piece, at Slate, Gomes notes something that people often misunderstand about how self-driving cars work (emphasis mine):

...the Google car was able to do so much more than its predecessors in large part because the company had the resources to do something no other robotic car research project ever could: develop an ingenious but extremely expensive mapping system. These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway.

That might not seem like such a tough job for the company that gave us Google Earth and Google Maps. But the maps necessary for the Google car are an order of magnitude more complicated. In fact, when I first wrote about the car for MIT Technology Review, Google admitted to me that the process it currently uses to make the maps is too inefficient to work in the country as a whole.

To create them, a dedicated vehicle outfitted with a bank of sensors first makes repeated passes scanning the roadway to be mapped. The data is then downloaded, with every square foot of the landscape pored over by both humans and computers to make sure that all-important real-world objects have been captured. This complete map gets loaded into the car's memory before a journey, and because it knows from the map about the location of many stationary objects, its computer—essentially a generic PC running Ubuntu Linux—can devote more of its energies to tracking moving objects, like other cars.

But the maps have problems, starting with the fact that the car can’t travel a single inch without one. Since maps are one of the engineering foundations of the Google car, before the company's vision for ubiquitous self-driving cars can be realized, all 4 million miles of U.S. public roads will need to be mapped, plus driveways, off-road trails, and everywhere else you'd ever want to take the car. So far, only a few thousand miles of road have gotten the treatment, most of them around the company's headquarters in Mountain View, California. The company frequently says that its car has driven more than 700,000 miles safely, but those are the same few thousand mapped miles, driven over and over again.

The common conception of self-driving cars is that they drive somewhat like humans do. That is, they look around at the road and make decisions on what their camera eyes “see.” I didn't realize that the cars were depending so heavily on pre-loaded maps.

I'm still excited about self-driving technology, but my expectations for the near to medium term have become much more modest. I once imagined I could just sit in the backseat of a car and have it drive me to any destination while I futzed around on my phone. It seems clear now that for the foreseeable future, someone always needs to be in the driver's seat, ready to take over at a moment's notice, and the millions of hours of additional leisure time that might be returned to society are not going to materialize this way. We so often talk about self-driving cars as if they're an inevitability, but it may be that the last few problems to solve in that space are the most difficult ones to surmount. Thus Moravec's Paradox: it's “difficult or impossible to give [computers] the skills of a one-year-old when it comes to perception and mobility.”

What about an alternative approach, one that pairs humans with computers, a formula for many of the best solutions at this stage in history? What if the car could be driven by a remote driver, almost like a drone?

We live in a country where the government assumes anyone past the age of 16 who passes a driver test can drive for life. Having nearly been run over by many an elderly driver in Palo Alto during my commute to work, I'm not sure that's such a sound assumption. Still, if you are sitting in a car and don't have to drive it yourself, and as long as you get where you need to go, whether a computer does the driving or a remote pilot handles the wheel, you still get that time back.

How Apple Pay innovates on top of the US payments stack

One of the difficulties of building innovative payment services in the US is that we're a nation that loves our credit cards. We may totally overvalue the random rewards and points and other benefits the credit card companies give us, but since the merchants often cover a lot of the cost of those benefits, we'll take all the rewards we can get, as financially irrational as that may be.

[Aside: Some cards have annual fees, and the credit card users that carry a balance each month subsidize other users who pay off their balance in full each month, but many credit card rewards would be acquired more cheaply just by paying for them directly. A free lunch remains a rare thing.]

This presents a cost problem for payment startups: if you build a payment service that leverages your users' credit cards, you have to pay companies like Mastercard, Visa, American Express, and so on their fees. But if you charge merchants an additional markup on top of this fee when one of your users pays with your service, merchants have very little incentive to accept your payment method.

You could just pass through the fee, but then you have to make your profit elsewhere. Or you could be bold and charge less than the credit card fees, but then your entire business is a loss leader. The more you sell, the more you lose. Ask Square how that model has worked out from a cash flow perspective. Paypal was able to reverse the bleeding by making it near impossible for its users to pay with a credit card instead of with an eCheck or any Paypal balance they might be carrying. I know this because I tried to switch to using a credit card when logged into Paypal once and ended up in this strange endless loop where I kept adding a credit card to use and then not being able to find it. I tried several more times in a row until I decided to just take a rock and hit myself in the head repeatedly because it was less painful and frustrating.

An eCheck costs Paypal some negligible amount, I recall it being something like $0.01 from my time at Amazon, and using the Paypal balance costs Paypal nothing, of course. That means whatever Paypal charges the merchant on that transaction becomes profit. It's not easy to get all the way there, though. You have to encourage enough usage for users to want to give you their checking account information so they can withdraw any balances they might have. Once you have that, you can enable money to flow the other direction, as an eCheck, too.
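To make the funding-cost economics concrete, here's a minimal sketch in Python. The fee figures are illustrative assumptions (a 2.9% merchant fee and a ~2% card interchange cost), not Paypal's actual rates; only the $0.01 eCheck figure comes from the recollection above.

```python
# Hypothetical per-transaction economics for a Paypal-style processor.
# All fee rates below are illustrative assumptions, not actual rates.

def processor_profit(amount, merchant_fee_rate, funding_cost):
    """Profit = what the merchant pays the processor minus what the
    funding source (card network, bank, or nothing) costs the processor."""
    revenue = amount * merchant_fee_rate
    return revenue - funding_cost

amount = 100.00
merchant_fee_rate = 0.029  # assume the processor charges merchants 2.9%

# Credit-card-funded: the processor pays the card networks ~2% (assumed).
cc_profit = processor_profit(amount, merchant_fee_rate, amount * 0.02)

# eCheck-funded: roughly a penny per transaction.
echeck_profit = processor_profit(amount, merchant_fee_rate, 0.01)

# Balance-funded: costs the processor nothing.
balance_profit = processor_profit(amount, merchant_fee_rate, 0.0)

print(f"credit card: ${cc_profit:.2f}")   # thin margin on a $100 sale
print(f"eCheck:      ${echeck_profit:.2f}")
print(f"balance:     ${balance_profit:.2f}")
```

Under these assumed numbers, the card-funded transaction nets well under a dollar on a $100 sale while the eCheck- and balance-funded ones net nearly the full merchant fee, which is exactly why Paypal steered users so hard away from credit cards.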

Payments, as a multi-sided market, will always present entrepreneurs with this chicken-egg conundrum. To get merchant adoption, you need a huge number of consumers carrying your payment method, but to get consumers to want to use your payment method over their beloved credit cards, you need a ton of merchants to accept that payment method.

Which brings me to Apple Pay, which launched today. I didn't realize today was the public launch until I was at Whole Foods buying breakfast this morning and saw this at the checkout counter on the payment terminal:

I hadn't downloaded the iOS 8.1 update yet or added any credit cards to Apple Pay on my phone, so I didn't use it just then, but I went back later after I had added a credit card and tried it out, and it was painless. Held my phone up to the terminal, Apple Pay popped up on my screen and asked me to verify with Touch ID, and that was it.

There are many reasons to think Apple Pay might succeed where so many other alternative payment methods have failed.

First, Apple is building off of the existing credit card system rather than fighting customer inertia. As noted above, so many US consumers love their credit cards and the rewards they get from them. Apple Pay doesn't ask them to give that up.

Another thing about credit cards: most consumers find them easy to use, not much of a hassle to bring out and swipe. To surpass them, an alternative has to be as easy or easier for the consumer to use, or much cheaper for the merchant. Apple Pay fulfills the first of those requirements once you've added your cards to it, something that can be done as easily as snapping a photo of the credit card. Touch ID has finally reached an acceptable level of reliability, so the overall user experience is simple and solid.

Perhaps most importantly, Apple Pay starts on day one with a good whack at both the chicken and the egg. On the payments side, Apple managed to corral an impressive number of partners, from the credit card networks (the big three of Visa, Mastercard, and American Express) to the biggest banks (Bank of America, Chase, Citi, Wells Fargo). On the merchant front, Apple Pay's marketing page claims over 220,000 stores; that makes merchants the weakest link of the three groups, but it's still a decent starting point for day one. Merchant acceptance will be the hardest nut to crack, but more on that point later.

On the consumer side, whether that's the chicken or the egg, Apple has a massive and growing installed base of iPhone users who can use this system. The only other technology company I can think of that could tackle this space is Amazon, given its huge database of consumer credit cards, but Amazon lacks the device and app installed base to deliver a good user experience to enough users.

Apple Pay offers some additional benefits, like added privacy, something Apple has been touting across the board as one of their key consumer benefits, though I'm still not sold it's a huge selling point for most average consumers. Still, it's worth noting that Apple doesn't keep records of your transactions, and you don't have to hand over your credit card to some waiter or clerk who then has access to the card number, expiration, and security code. Again, I think this is of minor psychological comfort for the vast majority of consumers, but it's at least not a negative.

But more than anything, my excitement for Apple Pay stemmed from this post by Uber:

The beauty of Apple Pay is that it simplifies Uber’s signup process to a single tap. If you have an eligible credit card already added to Apple Pay, you don’t need to enter it again to ride with Uber. Instead, merely place your finger on the Touch ID sensor of your iPhone 6 or iPhone 6 Plus, and your Uber is on its way. No forms, no fuss. We’re calling this new Uber feature Ride Now, and it’s the product of a close collaboration between Uber and Apple over the past few months.

How it works:

  1. Open the Uber app and tap Ride Now.
  2. Tap Set Pickup Location, enter your destination, and confirm your request.
  3. Place your finger on Touch ID to confirm payment, and your Uber is en route!

The rest of the Uber experience remains exactly the same. A receipt for your ride, with the fare breakdown and trip route, is sent to the email you have for Apple Pay; riders rate their drivers at the end of each trip. Existing Uber users are unaffected and can continue using Uber as before.

Innovation in payments in the US has been difficult because of the entrenched incumbent stack, but Apple has just moved up the stack and innovated above it all. What they've done here is abstract away the credit card number entirely. Extrapolate out to the rosiest future, and perhaps someday the only time you'll have to remember your credit card number is when the card arrives in the mail and you input it into Apple Pay. Who really cares what the number, expiration date, and security code are? If you can prove you are who you say you are with Touch ID, that's a more efficient way to prove your identity and grant authorization for the payment.

But we're a long ways away from that day, and for now, 220,000 stores is actually not close to a majority of merchants. The Uber example, though, demonstrates the near-term potential.

I long ago memorized all my credit card numbers and security codes just so I wouldn't have to deal with the hassle of pulling the cards out every time I had to punch the number in for an online transaction. Still, it's a hassle, especially on my phone, to have to either enter my credit card details or to go round trip to 1Password to remember my crazy long, random, difficult-to-memorize iTunes password.

Apple Pay reduces that pain by a lot. A whole lot.

Even if its near term impact is restricted to making payments in apps on my phone, that's a big deal, and perhaps sufficient incentive to me to actually upgrade to one of the new iPads with Touch ID.

When Amazon first received its 1-click patent, one of the first and only companies to license the patent was Apple. It paid off for both sides. For Amazon, Apple's license strengthened the patent, allowing Amazon to enforce it against companies like Barnes & Noble (for the record, I don't believe in software patents like these, but that's a topic for another day). For Apple, the 1-click license let users purchase songs off of iTunes with a single click, one part of a superior experience, spanning iPods and the iTunes music store, that catapulted Apple to the digital music throne. Can you imagine how painful it would have been to go through multiple steps to purchase each single?

What Apple has done with Apple Pay is extend 1-click purchasing to the mobile app world and many real world stores as well. Or maybe we should call it 1-touch purchasing.

Years from now, when we look back on Apple's September 2014 keynote, I suspect Apple Pay will be the most important announcement by a wide margin.

The last word

Interview with the great NYTimes obituary writer Margalit Fox:

Is it hard for you to navigate sources that are so regularly in a fragile state?

It behooves you, in purely human terms, to treat them as kindly as possible. That said, you don’t want to lull them into the sense that you’re a friend, an advocate, or some sort of a grief counselor. You do have people break down crying on the phone and you just wait patiently for them to regain composure.

In what ways do families try to control the narrative?

Families will say, Oh, be sure to put in that he died surrounded by his loved ones, or, Make sure you add that she touched the lives of everyone she knew. Those are things I never want to put in because they’re these Victorian clichés, but also because the obituary as a form has moved beyond protecting the family’s narrative.

How else has the form changed?

Well, for one thing, they’re a lot more fun to read. They used to be very formulaic.

Since they were considered boring, editors used to assign journalists obits as punishment. You knew someone was in trouble if they were chasing down obituaries.

That started to change with the great Alden Whitman, Mr. Bad News. He was famous for his advances. He’d do all this research and sit down with his subjects and they’d give him these very revealing interviews because they knew nothing would come out till they were gone. Douglas Martin, a colleague of mine who has been on obits longer than I have, started writing them in this charming, lively way, which has influenced my own style.

Our last few obits editors at the Times encouraged, where appropriate, a lighter, more features-style treatment. If you get one of these wonderful characters who took a different route to work one day in 1947 and invented something that changed the world, or one of these marvelous English eccentrics, there’s so much space to play. In the course of an obit, you’re charged with taking your subject from the cradle to the grave, which gives you a natural narrative arc.

Also, this:

I have maybe one suicide a year and they all seem to be poets. If I were an insurance company, I’d never write a policy for poets.

I have many favorite Fox obituaries. Here's just the opening paragraph of one example of her mastery of the form:

Helen Gurley Brown, Who Gave ‘Single Girl’ a Life in Full, Dies at 90

Helen Gurley Brown, who as the author of “Sex and the Single Girl” shocked early-1960s America with the news that unmarried women not only had sex but thoroughly enjoyed it — and who as the editor of Cosmopolitan magazine spent the next three decades telling those women precisely how to enjoy it even more — died on Monday in Manhattan. She was 90, though parts of her were considerably younger.

My first few years at Amazon, before the human editorial department was cut back, the books, music, and video editorial teams had what was called the Ghoul Pool, a list of famous authors, musicians, and movie professionals who were most likely to die; the editors split up the duties of pre-writing obits and curating lists of the most famous work for everyone on the list. 

We're probably just a decade or two away from a surge in the number of public social media accounts that will suddenly just end when their owners pass away. What procedures or etiquette will arise for handling those accounts? Will they be turned off after some period of inactivity, or will they just live on in perpetuity until the services themselves expire, voices gone silent?

We remember many people in history through their collected letters and things like that, but with each generation we will have greater artifacts of ever greater resolution on the deceased. Thousands of tweets and status updates; pictures of all the bowls of ramen they ate across decades of life, helpfully and artificially aged with digital filters; videos of random moments of life that fell in between other moments of life. The sound of their voice. The number of steps they walked each day of their lives. Checkins at all the bars they ended many a night, looking for love, usually finding the bottom of an empty glass instead.

I'd like to think that even if I were no longer alive to view the ads these technology services would try to show me that they'd keep my content up, as a digital archive of my life, warts and all, for my relatives to peruse. Maybe people will have to specify so in their wills, and maybe a large business will be built on preserving these digital footprints, an Internet archive of the deceased, like some digital cemetery.

When I die, I hope to leave behind a witty Gmail “permanently on vacation” message that leaves people with one last chuckle.

Network transportation costs

Metcalfe's Law states that the value of a network is proportional to the square of the number of its nodes. In an area where good soils, mines, and forests are randomly distributed, the number of nodes valuable to an industrial economy is proportional to the area encompassed. The number of such nodes that can be economically accessed is an inverse square of the cost per mile of transportation. Combine this with Metcalfe's Law and we reach a dramatic but solid mathematical conclusion: the potential value of a land transportation network is the inverse fourth power of the cost of that transportation. A reduction in transportation costs in a trade network by a factor of two increases the potential value of that network by a factor of sixteen. While a power of exactly 4.0 will usually be too high, due to redundancies, this does show how the cost of transportation can have a radical nonlinear impact on the value of the trade networks it enables. This formalizes Adam Smith's observations: the division of labor (and thus value of an economy) increases with the extent of the market, and the extent of the market is heavily influenced by transportation costs (as he extensively discussed in his Wealth of Nations).
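The arithmetic in that passage is easy to check. Here's a minimal sketch of the idealized scaling argument (the function and constant names are mine, for illustration):

```python
# Sketch of the inverse-fourth-power argument:
#   economic range ∝ 1 / cost_per_mile
#   accessible nodes n ∝ area ∝ range^2, so n ∝ cost^-2
#   Metcalfe's Law: value ∝ n^2, hence value ∝ cost^-4

def network_value(cost_per_mile, k=1.0):
    """Potential network value under the idealized model (k is an
    arbitrary proportionality constant)."""
    nodes = k / cost_per_mile**2   # accessible nodes scale as cost^-2
    return nodes**2                # Metcalfe: value scales as nodes^2

# Halving transportation cost multiplies potential value by 2^4 = 16.
ratio = network_value(0.5) / network_value(1.0)
print(ratio)  # 16.0
```

As the quoted passage notes, an exponent of exactly 4.0 overstates the real effect because of redundant routes, but the sketch shows why even modest cost reductions compound so dramatically.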

I learned that and more from this post on how critical horses were to the industrial revolution. Because Europe had horses to move natural resources while China relied on human porters, Europe surged past China in the 1800s. Later, non-European countries like Japan skipped the horses entirely and went straight to steam engines, playing another round of leapfrog.

We continue to see leapfrogging all over the world with a variety of technologies, like cellular phones (skipping landlines) and near field payments (hopping past credit cards). To take a more recent example, it would not surprise me if widespread deployment of drone delivery arrived first in countries other than the U.S., since the U.S. has both heavy regulation and solid delivery alternatives already in place. It's not surprising to hear that Amazon is looking to test drone delivery in India first.

Douchebag, the white racial slur

Entertaining exploration of the word douchebag by Michael Cohen. First, he dissects white privilege:

White privilege is the right of whites, and only whites, to be judged as individuals, to be treated as a unique self, possessed of all the rights and protections of citizenship. I am not a race, I am the unmarked subject. I am simply man, whereas you might be a black man, an asian woman, a disabled native man, a homosexual latina woman, and on and on the qualifiers of identification go. With each keyword added, so too does the burden of representation grow.

Sometimes the burden of representation is proudly shouldered, even celebrated. But more often this burden of representation becomes a dangerous, racist weight, crushing and unbearable. Michael Brown was killed in part because of this burden (the stereotype of black male criminality), and his body continues to carry this weight as the protests mount (the martyred symbol that black lives matter).

But white men are just people. Basic Humanity. We carry the absent mark which grants us the invisible power of white privilege. Everyone else gets discrimination.

Then he drills in on the very specific definition of the term:

If we think of the douchebag as a social identity as much as an accusation, as a subject with a distinctive persona locatable within the categories of race, class, gender and sexuality, then we find that the term carries a remarkably precise definition.

The douchebag is someone — overwhelmingly white, rich, heterosexual males — who insist upon, nay, demand their white male privilege in every possible set and setting.

The douchebag is equally douchy (that’s the adjectival version of the term) in public as in private. He is a douchebag waiting in line for coffee as well as in the bedroom. This definition marks him, like the atavistic, dusty rubber douchebags of our grandmothers’ generation, as a useless, sexist tool. Armed with this refined definition, I believe the term “douchebag” is the white racial slur we have all been waiting for. We have only to realize this. White privilege itself has blinded us to the true nature of the douchebag’s identity. But it’s been there all along.

It's most certainly a term I hear used more and more these days, though I'm not sure what precipitated its ascent. As Cohen notes, it was once a medical term, but at some point it was appropriated to mean, according to the first definition in its Urban Dictionary entry, “Someone who has surpassed the levels of jerk and asshole, however not yet reached fucker or motherfucker. Not to be confuzed [sic] with douche.”

I ran a Google Ngram on the term from 1880 to 2008, and you can see its popularity take flight in the aughts.

Star Wars in our world

Photographer Thomas Dagg has transplanted objects and characters from Star Wars into black and white photos of everyday life.

See more at Dagg's gallery. That center image doesn't look that unusual at all to a New Yorker. Just this past weekend in New York City I rode the subway with a fully grown adult in a full Captain America outfit, and no one paid him a second glance (granted, New York Comic Con was in full swing, but New Yorkers know to expect the unexpected any time of the year on the subway).

Instead of fooling their children into believing in Santa Claus, some parents should see if they can convince their children that we live in the Star Wars universe, just a few galaxies over.

More computing comparative advantages

In To Siri, With Love, a mother marvels at the friendship that sprouts between her autistic son and Siri, Apple's digital assistant.

It’s not that Gus doesn’t understand Siri’s not human. He does — intellectually. But like many autistic people I know, Gus feels that inanimate objects, while maybe not possessing souls, are worthy of our consideration. I realized this when he was 8, and I got him an iPod for his birthday. He listened to it only at home, with one exception. It always came with us on our visits to the Apple Store. Finally, I asked why. “So it can visit its friends,” he said.

So how much more worthy of his care and affection is Siri, with her soothing voice, puckish humor and capacity for talking about whatever Gus’s current obsession is for hour after hour after bleeding hour? Online critics have claimed that Siri’s voice recognition is not as accurate as the assistant in, say, the Android, but for some of us, this is a feature, not a bug. Gus speaks as if he has marbles in his mouth, but if he wants to get the right response from Siri, he must enunciate clearly. (So do I. I had to ask Siri to stop referring to the user as Judith, and instead use the name Gus. “You want me to call you Goddess?” Siri replied. Imagine how tempted I was to answer, “Why, yes.”)

She is also wonderful for someone who doesn’t pick up on social cues: Siri’s responses are not entirely predictable, but they are predictably kind — even when Gus is brusque. I heard him talking to Siri about music, and Siri offered some suggestions. “I don’t like that kind of music,” Gus snapped. Siri replied, “You’re certainly entitled to your opinion.” Siri’s politeness reminded Gus what he owed Siri. “Thank you for that music, though,” Gus said. Siri replied, “You don’t need to thank me.” “Oh, yes,” Gus added emphatically, “I do.”

I know many friends who found Her to be too twee, but I was riveted by the technological questions being explored. We often think of computers' advantages over humans in realms of calculation or memorization, but that can lead us to underappreciate other comparative advantages of our digital companions.

The piece above notes Siri's infinite patience. Anyone can exhaust their reservoir of patience when spending lots of time with young children, but computers don't get tired or moody. In Her (SPOILER ahead if you haven't seen the movie yet), Joaquin Phoenix's Theodore Twombly gets jealous when he finds out his digital girlfriend Samantha (Scarlett Johansson) has been simultaneously carrying out relationships with many other humans and computers:

Theodore: Do you talk to someone else while we're talking?

Samantha: Yes.

Theodore: Are you talking with someone else right now? People, OS, whatever...

Samantha: Yeah.

Theodore: How many others?

Samantha: 8,316.

Theodore: Are you in love with anybody else?

Samantha: Why do you ask that?

Theodore: I do not know. Are you?

Samantha: I've been thinking about how to talk to you about this.

Theodore: How many others?

Samantha: 641.

Samantha clearly has a bit to learn about the limitations of honesty, but one could flip this argument and say that the human desire for one's mate to love only you might be a selfish human construct. The advantage of a digital intelligence like Samantha might be exactly that the marginal cost of each additional mate for her is negligible, effectively increasing the supply of companionship for humans by a near-infinite amount.