Tablets are mostly for consumption

While there is nothing inherently wrong with a long upgrade cycle, as seen with the Mac, which continues to report solid sales momentum, the reasoning behind holding on to tablets for years is much more troubling. There are currently approximately 3 million units of the original iPad still in use, or 20% of the devices Apple sold. For the iPad 2, it is possible that close to 60% of the units Apple sold are still being used. These two devices are not superior tablets. The initial iPad lacks a camera, while the iPad 2 has a mediocre camera. When compared to the latest iPads, these first two iPads are simply inferior tablets with slow processors, heavy form factors, and inferior screens. But none of that matters to owners. This is problematic and quite concerning, suggesting that many of these tablets are just being used for basic consumption tasks like video and web surfing and not for the productivity and content creation tools that Apple has been marketing.
 
There are signs that Apple believes there may be some kind of iPad revival around the corner. Since the average iPad upgrade cycle is three years and counting, does this mean that Apple may benefit from some sort of upgrade cycle? I'm skeptical.  Why would someone upgrade an iPad that is just being used to watch video?
 

From Neil Cybart at Above Avalon. iPad defenders used to push back anytime anyone said that the device was just for consumption, highlighting people who used iPads to write music, sketch, or edit. I was always skeptical that such use was common since I was using the iPad primarily to read email and books, browse the web, skim social network feeds, and watch video. Lacking a physical keyboard, it was cumbersome to type on, and typing is my primary creative activity. It was too large to carry with me everywhere, and as the iPhone grew larger, the phone offered most of what I needed from my iPad.

It turns out that's mostly what most people do on their iPads. I dig my iPad, don't get me wrong, but it is more of a “niche” product than the iPhone or even the MacBooks and MacBook Pros (I put niche in quotes because Apple sold 84 million of them in the first two years of its existence; that was a good-sized niche that Apple filled quickly). For me, the iPad is a bit of a luxury: it's lighter than my laptop and has superior battery life, and it has a larger screen than my iPhone but can still be held with one hand most of the time. So it turns out to be a great device for lying in bed reading books or watching video. If I had to give up one of my three devices, though, no doubt the iPad would be the low device on that totem pole.

I agree with Cybart that the rumored iPad Pro at least offers a possibility of differentiation. The larger screen size alone may open some use cases. I just don't know what those might be and whether they'll be compelling. Perhaps if it's called Pro it's intended from the start for a niche audience, offering them a reason to upgrade. Cybart suggests a haptic keyboard could open up typing as a more popular mode of creation on iPads, but as I've never tried a haptic keyboard, I'm skeptical I'd enjoy it as much as a physical keyboard. I'd love to be proven wrong, but I have a whole life's worth of typing on a physical keyboard bearing witness for the prosecution.

Another rumor has a Force Touch-compatible stylus shipping with the iPad Pro. That, along with the larger screen size, could open up some more illustration use cases, stealing share from Wacom tablets. Rumored split screen options open up multi-app interaction as a use case. Still, none of those sounds like a mass market, especially given what will likely be a higher price point for the Pro.

Ultimately, I'm not sure it matters if the iPad is a mass market device. If the new MacBook continues to evolve and steal share from iPads on one end while the iPhone steals share on the smaller screen size end, Apple still captures the sale and the customer. I'm sure they wouldn't love losing share to cheap Android tablets, but if that's where some of the market goes, I'm not sure Apple would care to follow with any sense of urgency.

From 2001: A Space Odyssey to Alien to...what?

Lovely piece by Jason Resnikoff on how his father's viewings of 2001: A Space Odyssey in 1968 and Alien in 1979 reflected his hopes and dreams for technology and humanity.

Science fiction is a Rorschach test of our collective forward-looking sentiment, and as Resnikoff's father was a computer scientist working at Columbia's Computer Center when 2001: A Space Odyssey premiered, the genre resonated with him as an almost personalized scripture.

2001 is the brainchild of Stanley Kubrick and Arthur C. Clarke, who intended the film as a vision of things that seemed destined to come. In large part this fact has been lost on more recent generations of viewers who regard the movie as almost entirely metaphorical. Not so. The film was supposed to describe events that were really about to happen—that’s why Kubrick and Clarke went to such lengths to make it realistic, dedicating months to researching the ins and outs of manned spaceflight. They were so successful that a report written in 2005 from NASA’s Scientific and Technical Information Program Office argues that 2001 is today still “perhaps the most thoroughly and accurately researched film in screen history with respect to aerospace engineering.” Kubrick shows the audience exactly how artificial gravity could be maintained in the endless free-fall of outer space; how long a message would take to reach Jupiter; how people would eat pureed carrots through a straw; how people would poop in zero G. Curious about extraterrestrial life, Kubrick consulted Carl Sagan (evidently an expert) and made changes to the script accordingly.

It’s especially ironic because anyone who sees the film today will be taken aback by how unrealistic it is. The U.S. is not waging the Cold War in outer space. We have no moon colonies, and our supercomputers are not nearly as super as the murderous HAL. Pan Am does not offer commercial flights into high-Earth orbit, not least because Pan-Am is no more. Based on the rate of inflation, a video-payphone call to a space station should, in theory, cost far more than $1.70, but that wouldn’t apply when the payphone is a thing of the past. More important, everything in 2001 looks new. From heavy capital to form-fitting turtlenecks—thank goodness, not the mass fashion phenomenon the film anticipated—it all looks like it was made yesterday. But despite all of that, when you see the movie today you see how 1968 wasn’t just about social and political reform; people thought they were about to evolve, to become something wholly new, a revolution at the deepest level of a person’s essence.

Over a decade later, he watched Alien for the first time.

Consider Mother, the semi-intelligent computer system on board the Nostromo. Unlike HAL, who has complete knowledge of every aspect of his ship, Mother is perfectly isolated in a compartmentalized white room, complete with shimmering lights and padded walls. Whereas the Discovery makes an elegant economy of interior decoration with limited cabin space—it was a set where Kubrick allowed no shadows to fall—the Nostromo is meant to look like a derelict factory from the rust belt. My father thought the onboard computers looked especially rude for 1979, as though humanity’s venture into space would be done not with the technology of the future but the recent past. There’s a certain irony in this now: the flight computer used in the Space Shuttle, the IBM AP-101, effectively had only about one megabyte of RAM, which is more or less 1 percent of the computing power of an Xbox 360, but because of its reliability, NASA kept using it, with infrequent upgrades, into the 2000s.

The makers of Alien called this aesthetic-of-the-derelict “truckers in space,” which is fun but fails to capture the postindustrial criticism embodied in the Nostromo. Within the ship—a floating platform without a discernible bow or stern, akin to an oil rig—there are enormous spaces that look more like blast furnaces gone cold than the inside of a spaceship: a place of rusted metal, loose chains, forgotten pieces of machinery, of water falling from the ceiling and dripping to the floor to collect in stagnant pools. The ship’s crew bicker over pay and overtime; they follow company orders only begrudgingly. They are a very different, far more diverse group than the clearly white-collar crew of the Discovery. Inside the Nostromo, the threat does not come in the shape of a super-rational computer, a Pinocchio who wants to be a real boy. Instead, the danger is a wild animal lurking in the shadows, one that is unimaginably vicious. “The perfect organism,” Ash, the science officer, calls it, because it can survive anything. This? You ask yourself. This is evolution brought to perfection? A demon from Hell who is essentially indestructible, with acid for blood and two separate rows of fangs? What happened to the space baby? But there is a sick logic in calling the alien perfect. It has an unimpeachable record of wins to losses, and when all the world has become a contest, winners with perfect records are perfect.

And where, in all of this, is Mother? If the alien were set loose on HAL’s watch, he would probably neutralize it all on his own, automatically, as it were. Mother, on the other hand, spends the whole movie like a fated southern belle hooked on laudanum, locked in her room. She can’t even advise on how to defeat the monster. The computer cannot help. No costly investment in heavy capital will keep nature at bay. This was a lesson people were learning in 1979, by way of pink slips and foreclosures and sad car rides down the main drags of shuttered, lonely ghost towns where once factories had stood with thriving communities around them.

Fast forward more than thirty years, and neither vision seems entirely accurate. While individual computers are vastly more powerful than those of the '70s, it's their ubiquity, portability, and ever-connected nature that is reshaping the world.

Meanwhile, space travel is perhaps not as far along as imagined in the movies, but efforts from private-sector billionaires like Elon Musk and Jeff Bezos have sustained (renewed?) interest and research in space travel and exploration (and, if you want to extrapolate that line out one more interval, colonization).

Off the top of my head, I can't think of the sci-fi movie that best reflects humanity's current relationship to computers, the internet, and outer space. Even if you drop space exploration from the requirements, I'm not sure which movie I'd put in such a lineage. I liked The Social Network, but it is more preoccupied with the personal relationships of the founders than with the technology itself.

Interstellar is the most obvious recent choice as it has an Earth that is failing (perhaps our most popular dystopian nightmare), space travel driven by the private sector, and a candy-bar shaped computer robot named TARS who helps out our protagonists (human-computer cooperation). However, it glosses over the chasm between where we are today, in the midst of the third industrial revolution, and a time when AI enables robot companions like TARS that we can interact with via natural voice commands. Its concluding message about love as a mechanism to transcend space and time is also an abrupt deus ex machina that abandons what is, until then, a very pro-science story.

I'd probably select The Matrix over either of those, and I'd throw Her and Wall-E in the mix, but none of those feel exactly right. All of them capture what will continue to be an increasing emphasis on living in a virtual or digital world of information in place of the physical world. The Matrix captures some of the superintelligent AI fears that have gained traction in the past year. Her has a very different take on the complexities that we'll confront when we first achieve a convincingly human AI. Most notably, the economics of companionship and love change if they become abundant rather than scarce goods through digitization, but how much do we value such love, companionship, and sex because of their scarcity?

Having mentioned Interstellar earlier, I should mention Inception as well. Instead of exploring outer space, it explores what might happen if we make great leaps in venturing into inner space. Human consciousness becomes the frontier. Like many other sci-fi movies, though, Inception is quite attached to the physical world. Cobb and his team break into Robert Fischer's mind, but only in the hopes of breaking up a company in the physical or “real” world. Cobb's wife makes a fatal error when she chooses the virtual world over the “real” world (or confuses the two; the consequences are the same), and Cobb's redemption arc depends on his rejection of the now-virtual ghost of his wife to return to his children in the real world, like Orpheus journeying out of Hades, where he'd gone to retrieve his dead wife Eurydice. The still-spinning top at the end of the movie leaves the audience in suspense over whether Cobb is indeed reuniting with his kids in the physical world, but most audiences read the ending as endorsing the real world as the higher value; otherwise the reunion at the end would feel false in some way.

I'm still waiting for the movie that takes the assumption of the development of virtual reality to its logical conclusion: the complete abandonment of physical reality. Almost all sci-fi movies default to the primacy of the physical world and its concerns, perhaps because that feels like the most humanist position. This is the public's knee-jerk stance toward technology: deep misgivings whenever it conflicts with the concerns of the flesh. People who spend time with their faces buried in their cell phones are seen as rude, socially inept, uncivilized, and, at some fundamental level, inhumane (inhuman?).

But what if those people are just opting out of the numerous inconveniences and shadow prices of the physical world and choosing to engage with the most efficient delivery mechanism for mental stimulation? The issues seem particularly timely given all the activity around virtual and augmented reality. The technology is still relatively crude and a ways off from achieving what we generally mean by “reality”—that is, a simulation that instills absolute belief in the mind of the observer—but it seems within the time horizon ripe for exploration by sci-fi (call it 25 to 50 years).

I'm not arguing that virtual or augmented reality are superior to real life, but stigmatizing the technology by default means we won't explore its dark side with any rigor. It's the main issue I had with Black Mirror, the acclaimed TV anthology about the dark side of technology. The show is clever, and the writers clearly understand technology with a depth that could support more involved plotlines. However, the show, like many technology critiques, only goes as far as envisioning the first and often most obvious downside scenarios, as in the third episode of the first season, generally the most acclaimed and beloved of the episodes produced to date. (It's not my favorite; that scenario has been recounted again and again when it comes to total recall technology, so I was dismayed it's the one being picked up to be made into an American feature film.)

Far more difficult, but by the same token much more interesting, would be to explore the next level: how humans might evolve to cope with these obvious problems. I'm not a fan of an avant-garde that defines itself solely by what it's against rather than what it's for. Venkatesh Rao dissects the show in one of the better critiques of the program:

According to the show’s logic, all choices created by technology are by definition degrading ones, and we only get to choose how exactly we will degrade ourselves (or more precisely, which of our existing, but cosmetically cloaked degradations we will stop being in denial about).

This is where, despite a pretty solid concept and excellent production, the show ultimately fails to deliver. Because it is equally possible to view seeming “degradation” of the priceless aspects of being human as increasing ability to give up anthropocentric conceits and grow the hell up.

This is why the choice to do a humorless show is significant, given the theme. Technology motivated humor begins with human “baseness” as a given and humans being worthwhile anyway. The goal of such humor becomes chipping away at anthropocentrism, in the form of our varied pretended dignities (the exception is identity humor, which I dislike).

It's problematic that as soon as you understand the premise and wayward trajectory of each episode, you know it's going to just drive itself off that cliff. Going one level deeper, from stasis to problem and then to solution, would likely require longer episodes, and perhaps that was a harder sell for network television. It might not be a coincidence that the Christmas special with Jon Hamm was longer than the in-season episodes and was the strongest installment to date.

Coincidentally, given my discussion of the need for a sci-fi movie that examines the implications of virtual reality, [MILD SPOILER ALERT ABOUT THE XMAS SPECIAL] the Christmas special focuses on literal de-corporealization and its impact. The tip of a thread of something profound about humanity peeks out there; I hope some of our sci-fi writers and directors tug on it.

More computing comparative advantages

In To Siri, With Love, a mother marvels at the friendship that sprouts between her autistic son and Siri, Apple's digital assistant.

It’s not that Gus doesn’t understand Siri’s not human. He does — intellectually. But like many autistic people I know, Gus feels that inanimate objects, while maybe not possessing souls, are worthy of our consideration. I realized this when he was 8, and I got him an iPod for his birthday. He listened to it only at home, with one exception. It always came with us on our visits to the Apple Store. Finally, I asked why. “So it can visit its friends,” he said.

So how much more worthy of his care and affection is Siri, with her soothing voice, puckish humor and capacity for talking about whatever Gus’s current obsession is for hour after hour after bleeding hour? Online critics have claimed that Siri’s voice recognition is not as accurate as the assistant in, say, the Android, but for some of us, this is a feature, not a bug. Gus speaks as if he has marbles in his mouth, but if he wants to get the right response from Siri, he must enunciate clearly. (So do I. I had to ask Siri to stop referring to the user as Judith, and instead use the name Gus. “You want me to call you Goddess?” Siri replied. Imagine how tempted I was to answer, “Why, yes.”)

She is also wonderful for someone who doesn’t pick up on social cues: Siri’s responses are not entirely predictable, but they are predictably kind — even when Gus is brusque. I heard him talking to Siri about music, and Siri offered some suggestions. “I don’t like that kind of music,” Gus snapped. Siri replied, “You’re certainly entitled to your opinion.” Siri’s politeness reminded Gus what he owed Siri. “Thank you for that music, though,” Gus said. Siri replied, “You don’t need to thank me.” “Oh, yes,” Gus added emphatically, “I do.”

Many friends of mine found Her to be too twee, but I was riveted by the technological questions being explored. We often think of computers' advantages over humans in realms of calculation, memorization, or computation, but that can lead us to underappreciate other comparative advantages of our digital companions.

The piece above notes Siri's infinite patience. Anyone can exhaust their reservoir of patience when spending lots of time with young children, but computers don't get tired or moody. In Her (SPOILER ahead if you haven't seen the movie yet), Joaquin Phoenix's Theodore Twombly gets jealous when he finds out his digital girlfriend Samantha (Scarlett Johansson) has been simultaneously carrying out relationships with many other humans and computers:

Theodore: Do you talk to someone else while we're talking?

Samantha: Yes.

Theodore: Are you talking with someone else right now? People, OS, whatever...

Samantha: Yeah.

Theodore: How many others?

Samantha: 8,316.

Theodore: Are you in love with anybody else?

Samantha: Why do you ask that?

Theodore: I do not know. Are you?

Samantha: I've been thinking about how to talk to you about this.

Theodore: How many others?

Samantha: 641.

Samantha clearly has a bit to learn about the limitations of honesty, but one could flip this argument and say that the desire to be the only one your mate loves might be a selfish human construct. The advantage of a digital intelligence like Samantha might be exactly that the marginal cost of each additional mate is negligible for her, effectively increasing the supply of companionship for humans by a near-infinite amount.

Notch one more for computers

In May, 1997, I.B.M.’s Deep Blue supercomputer prevailed over Garry Kasparov in a series of six chess games, becoming the first computer to defeat a world-champion chess player. Two months later, the Times offered machines another challenge on behalf of a wounded humanity: the two-thousand-year-old Chinese board game wei qi, known in the West as Go. The article said that computers had little chance of success: “It may be a hundred years before a computer beats humans at Go—maybe even longer.”

Last March, sixteen years later, a computer program named Crazy Stone defeated Yoshio Ishida, a professional Go player and a five-time Japanese champion. The match took place during the first annual Densei-sen, or “electronic holy war,” tournament, in Tokyo, where the best Go programs in the world play against one of the best humans. Ishida, who earned the nickname “the Computer” in the nineteen-seventies because of his exact and calculated playing style, described Crazy Stone as “genius.”
 

Computers overtake humans in yet another field. After Deep Blue prevailed over Kasparov in chess, humans turned to the game of Go for solace. Here was a game, it was said, that humans would dominate for quite some time. It turned out not to be much time at all.

Coulom’s Crazy Stone program was the first to successfully use what are known as “Monte Carlo” algorithms, initially developed seventy years ago as part of the Manhattan Project. Monte Carlo, like its casino namesake, the roulette wheel, depends on randomness to simulate possible worlds: when considering a move in Go, it starts with that move and then plays through hundreds of millions of random games that might follow. The program then selects the move that’s most likely to lead to one of its simulated victories. Google’s Norvig explained to me why the Monte Carlo algorithms were such an important innovation: “We can’t cut off a Go game after twenty moves and say who is winning with any certainty. So we use Monte Carlo and play the game all the way to the end. Then we know for sure who won. We repeat that process millions of times, and each time the choices we make along the way are better because they are informed by the successes or failures of the previous times.”

Crazy Stone won the first tournament it entered. Monte Carlo has since become the de facto algorithm for the best computer Go programs, quickly outpacing earlier, proverb-based software. The better the programs got, the less they resembled how humans play: during the game with Ishida, for example, Crazy Stone played through, from beginning to end, approximately three hundred and sixty million randomized games. At this pace, it takes Crazy Stone just a few days to play more Go games than humans collectively ever have.
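To make the idea concrete, here is a minimal sketch of that approach, sometimes called "flat" Monte Carlo, in Python: for each legal move, play a pile of random games to the end and keep the move with the best simulated win rate. Everything in it (the toy take-1-2-or-3 Nim game, the function names, the playout counts) is my own illustrative invention, not anything from Crazy Stone, which applies the same loop to Go and layers tree search and smarter playout policies on top.

```python
import random
from dataclasses import dataclass

# Toy illustration of "flat" Monte Carlo move selection: evaluate each legal
# move by the fraction of uniformly random playouts it wins. The Nim game
# below is a stand-in for a real board; it is not how Crazy Stone models Go.

@dataclass(frozen=True)
class Nim:
    stones: int   # stones left in the pile
    to_move: int  # 1 or 2: the player about to move

    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]

    def play(self, n):
        return Nim(self.stones - n, 3 - self.to_move)

    def is_over(self):
        return self.stones == 0

    def winner(self):
        # Whoever took the last stone wins, i.e. the player NOT on move now.
        return 3 - self.to_move


def random_playout(state):
    """Play uniformly random legal moves until the game ends; return the winner."""
    while not state.is_over():
        state = state.play(random.choice(state.legal_moves()))
    return state.winner()


def monte_carlo_move(state, playouts_per_move=5000):
    """Pick the legal move whose random playouts win most often for the mover."""
    me = state.to_move

    def win_rate(move):
        child = state.play(move)
        wins = sum(random_playout(child) == me for _ in range(playouts_per_move))
        return wins / playouts_per_move

    return max(state.legal_moves(), key=win_rate)


if __name__ == "__main__":
    # From 10 stones the game-theoretic best move is to take 2, leaving a
    # multiple of 4; with enough playouts the win-rate estimates point there.
    print(monte_carlo_move(Nim(stones=10, to_move=1)))
```

Even on this toy game you can see Norvig's point: the program never reasons about why leaving a multiple of four is good, it just notices that the random futures following that move end in wins more often than the alternatives.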
 

Well, at least we still have Arimaa.

The instant-on computer

A long time ago, when I was at Amazon, someone asked Jeff Bezos during an employee meeting what he thought would be the single thing that would most transform Amazon's business.

Bezos replied, "An instant-on computer." He went on to explain that he meant a computer that, when you hit a button, would instantly be ready to use. Desktops and laptops in those days, and still even today, have a really long bootup process. Even when I try to wake my MacBook Pro from sleep, the delay is bothersome.

Bezos imagined that computers which turned on with the snap of a finger would cause people to use them more frequently, and the more time people spent online, the more they'd shop from Amazon. It's like the oft-cited Google strategy of just getting more people online, since it's likely they'd run across an ad from Google somewhere given its vast reach.

We now live in that age, though it's not the desktops and laptops but our tablets and smartphones that are the instant-on computers. Whether it's transformed Amazon's business, I can't say; they have plenty going for them. But it's certainly changed our usage of computers generally. I only ever turn off my iPad or iPhone if something has gone wrong and I need to reboot them, or if I'm low on battery power and need to speed up recharging.

In this next age, anything that cannot turn on instantly and isn't connected to the internet at all times will feel deficient.