Robots taking all the jobs, cont.

By studying the brains of drivers when they were negotiating a race-track, the scientists were intrigued to find that during the most complex tasks, the experts used less brain power. They appeared to be acting on instinct and muscle memory rather than using judgement as a computer programme would. 

“It looks as if the skilled race car drivers are able to control their cars with very little cognitive load,” said Prof Gerdes. 

Mr Vodden agreed, saying that in difficult manoeuvres experience kicked in. "If you're thinking, you're going too slow."

You'd think from that excerpt that the human driver remains superior, but it turns out the driverless car beat the track champion by 0.4 seconds on a track in Northern California.

One race track, the world's greatest driver (whoever is the Michael Schumacher of the moment) versus the best computer driver. I don't enjoy watching auto racing on TV, but I'd watch one that pits man and machine against machine and machine.

One more wrinkle for AI to learn: how and when to cheat.

In the race between Shelley and Mr Vodden, the racing driver left the track at a sharp corner, rejoining the race ahead of the robot car. 

“What we’re doing as humans is we’re weighting a number of different things,” added Prof Gerdes. 

“We’re not driving within the lines, we’re balancing our desire to follow the law with other things such as our desire for mobility and safety. 

“If we really want to get to the point where we can have a car that will drive as well as the very best drivers, with the car control skills and also the judgment, it seems to me that we really need to have a societal discussion about what are the different priorities we place on mobility and safety, on judgement and following the law.”

Black cards, love, lies, and Force Majeure

Speaking of Black Mirror, here's a relevant interview titled Black Cards: All the Lies You Need to Love. A wife interviews her husband after he publishes the book Love and Lies: An Essay on Truthfulness, Deceit, and the Growth and Care of Erotic Love.

Recall what Venkatesh Rao said about such lies we tell ourselves and each other in his critique of Black Mirror:

In each case, the technological driver has to do with information — either knowing too much or too little about yourself and/or others. Each technological premise can be boiled down to what if you knew everything about X or what if you could know nothing about X. In the episodes so far, there has been no simple correlation between choosing ignorance or knowledge and getting to good or poor outcomes. That’s what lends the show a certain amount of moral ambiguity.

White Christmas, the first episode of Season 3, is more complex, wandering into moral luck territory via gaps between intentions and consequences. Gaps deliberately created by consciously chosen ignorance of the block-on-Facebook variety.

This is promising. Hopefully, the show will explore this more, because the straight-up value collisions are not that interesting. They are merely shocking corner-case hypotheticals of the torture-one-terrorist-to-save-humanity variety, in futurist garb. But with moral luck, you have more going on. Where knowledge is the default and ignorance must be consciously chosen, rather than the other way around, the consequences of ignorance become less defensible. Especially when you are in a position to choose ignorance for others.

Civil society can't exist when the lies that sustain it are punctured by technology? Grow up.

In the Black Cards interview, the husband Clancy Martin argues the opposite, specifically when it comes to love. Lie to your lover, and lie to yourself. Truth is the opposite of an aphrodisiac.

Amie: What should a woman do if she has cheated on her husband, whom she loves? She did it impulsively, and it didn’t mean anything. Should she tell her husband, or not?

Clancy: I don’t think she should tell her husband immediately. She might feel better briefly after telling him, but she’s giving him all of her guilt to carry around. And she certainly shouldn’t tell him in anger—as an attack during a fight, or as a response to some mistake he’s made.

Could there come a time when she should tell him? Yes, I think when she can see that the caring thing to do is to admit that this happened. Or if this starts to become a pattern, she’d better let him know that they need to see a therapist and then, in that moderated context, “come clean.” But she’s already done some harm with this one-night stand—don’t exacerbate it. 

Amie: Okay, but that’s what everyone says, and your thesis is that people in love have to lie more often than we admit. So shouldn’t you be coming down hard on the necessity of the lie? That a cheater should never tell? 

Clancy: Deny, deny, deny is the standard wisdom for men—and maybe for women too. That’s not—

Amie: For cheaters, let’s say.

Clancy: Yes, for cheaters, and this woman has cheated. But that’s not what I’m saying. I’m saying that caring should be her goal—and that caring might sometimes require carrying the burden of a lie for a while. Later, caring might require telling the truth. We have to be subtle epistemologists if—

Amie: Okay, okay.

Clancy: Can I just finish my sentence? We have to work hard to understand each other if we want to be good lovers.

Martin reverses the usual thinking on honesty; to him, it's a form of weakness to tell the truth.

I suspect a sort of Prisoner's Dilemma operates when it comes to truth in relationships and marriages. The optimal outcome is for both people to choose the same strategy, whether truth or lies (which you prefer depends on your philosophy), but the temptation is for one person or the other to defect, claiming the moral high ground at the cost of harmony in the relationship.
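To make that payoff structure concrete, here's a minimal sketch in Python. The payoff numbers are invented purely for illustration, and "cooperate"/"defect" simply stand in for keeping or breaking the couple's shared strategy, whatever that strategy happens to be.

```python
# Illustrative Prisoner's Dilemma payoffs (higher = better for that partner).
# "cooperate" = stick to the shared strategy (truth-telling or discretion);
# "defect" = break it to claim the moral high ground.
PAYOFFS = {
    # (partner_a, partner_b): (payoff_a, payoff_b)
    ("cooperate", "cooperate"): (3, 3),  # harmony: the best joint outcome
    ("cooperate", "defect"): (0, 5),     # B grabs the high ground, A absorbs the cost
    ("defect", "cooperate"): (5, 0),     # mirror image
    ("defect", "defect"): (1, 1),        # mutual recrimination
}

def best_response(options, their_choice):
    """Partner A's payoff-maximizing choice, holding partner B's choice fixed."""
    return max(options, key=lambda mine: PAYOFFS[(mine, their_choice)][0])

options = ("cooperate", "defect")
for theirs in options:
    print(f"If they {theirs}, my best response is to {best_response(options, theirs)}")
# Defecting dominates for each individual, even though mutual cooperation
# beats mutual defection -- the structure the paragraph above describes.
```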

The interview contains a fascinating analysis of the Swedish movie Force Majeure, which I saw at TIFF last year and found amusing in an acerbic and, well, Swedish way.

Amie: That reminds me of the movie we saw the other night, Force Majeure. A family on a ski trip is hit by a controlled avalanche. The smoke from the avalanche pours over their table at lunch on the mountaintop. But as it’s coming, the smoke looks like snow, and they think they are going to die. The mother wraps her arms around her children. The father picks up his gloves and his iPhone and runs. The two spend the rest of the movie dealing with the “truth” that has been revealed. And it seems manifestly true: the man is a coward and the woman has seen clearly. When friends try to encourage her to see it differently, suggesting for example that they are all okay, and that maybe they should move on, she is intractable. In the final scene, they are on a bus going back down the hillside, and the driver is taking sharp turns and having trouble with the gears. She forces him to stop so she can get off, and everyone on the bus follows her. But then on the roadside, night falls, and thirty people are on foot in the middle of nowhere, with nowhere to go. For the first time in the movie, it is manifest that this woman does not know what to do. That she has been alarmist. That she has caused a ruckus over nothing. It was a movie that presented two equally valid “truths.” And showed the way the self-righteous adhesion to one truth could tear apart a good marriage.

Clancy: For me the question becomes: When we learn things about our loved ones that cause us to dislike—or even to hate—those loved ones, what should we do? It will vary from case to case, which matters: there shouldn’t be one simple answer to the toughest questions about relationships. One friend says in Force Majeure, when the married couple has left the room, “They need therapy!” People always say, “Go to therapy!” We have become very simpleminded in how we think about love, and yet it matters to us more than anything. But here’s my answer: the woman in the movie thought she was seeing the naked truth. She even had it on video. But I would ask her, “Are you being as tough on yourself as you are on your partner? Can you withstand the same withering scrutiny? Look at your own motivations: Do you admire your motivations?” It’s a very good case study, because this woman in the movie, like many of us, was completely blind to her own failings. Forgiveness, care, commitment: that’s what we demand from our parents, what I hope we offer to our children, and I think ought to give to our spouses.

Once the avalanche occurs, the movie is a bit on the nose for a good long period. It's funny, but it's blunt, and it beats the same punch line with a hammer in scene after scene.

But then the movie ends with that scene on the bus driving down the mountain, the most oblique and intriguing part of the film. It's no coincidence that the one woman who stays on the bus is the same woman who had spoken openly about her many extramarital affairs. She rides down the mountain uneventfully while everyone else walks, resigned by their (bourgeois) caution to a suboptimal outcome.

If the entire movie had that concluding scene's sly, understated sense of mystery, that would have been something.

From 2001: A Space Odyssey to Alien to...what?

Lovely piece by Jason Resnikoff on how his father's viewings of 2001: A Space Odyssey in 1968 and Alien in 1979 reflected his hopes and dreams for technology and humanity.

Science fiction is a Rorschach test of our collective forward-looking sentiment, and as Resnikoff's father was a computer scientist working at Columbia's Computer Center when 2001: A Space Odyssey premiered, science fiction resonated for him as an almost personalized scripture.

2001 is the brainchild of Stanley Kubrick and Arthur C. Clarke, who intended the film as a vision of things that seemed destined to come. In large part this fact has been lost on more recent generations of viewers who regard the movie as almost entirely metaphorical. Not so. The film was supposed to describe events that were really about to happen—that’s why Kubrick and Clarke went to such lengths to make it realistic, dedicating months to researching the ins and outs of manned spaceflight. They were so successful that a report written in 2005 from NASA’s Scientific and Technical Information Program Office argues that 2001 is today still “perhaps the most thoroughly and accurately researched film in screen history with respect to aerospace engineering.” Kubrick shows the audience exactly how artificial gravity could be maintained in the endless free-fall of outer space; how long a message would take to reach Jupiter; how people would eat pureed carrots through a straw; how people would poop in zero G. Curious about extraterrestrial life, Kubrick consulted Carl Sagan (evidently an expert) and made changes to the script accordingly.

It’s especially ironic because anyone who sees the film today will be taken aback by how unrealistic it is. The U.S. is not waging the Cold War in outer space. We have no moon colonies, and our supercomputers are not nearly as super as the murderous HAL. Pan Am does not offer commercial flights into high-Earth orbit, not least because Pan-Am is no more. Based on the rate of inflation, a video-payphone call to a space station should, in theory, cost far more than $1.70, but that wouldn’t apply when the payphone is a thing of the past. More important, everything in 2001 looks new. From heavy capital to form-fitting turtlenecks—thank goodness, not the mass fashion phenomenon the film anticipated—it all looks like it was made yesterday. But despite all of that, when you see the movie today you see how 1968 wasn’t just about social and political reform; people thought they were about to evolve, to become something wholly new, a revolution at the deepest level of a person’s essence.

Over a decade later, his father watched Alien for the first time.

Consider Mother, the semi-intelligent computer system on board the Nostromo. Unlike HAL, who has complete knowledge of every aspect of his ship, Mother is perfectly isolated in a compartmentalized white room, complete with shimmering lights and padded walls. Whereas the Discovery makes an elegant economy of interior decoration with limited cabin space—it was a set where Kubrick allowed no shadows to fall—the Nostromo is meant to look like a derelict factory from the rust belt. My father thought the onboard computers looked especially rude for 1979, as though humanity’s venture into space would be done not with the technology of the future but the recent past. There’s a certain irony in this now: the flight computer used in the Space Shuttle, the IBM AP-101, effectively had only about one megabyte of RAM, which is more or less 1 percent of the computing power of an Xbox 360, but because of its reliability, NASA kept using it, with infrequent upgrades, into the 2000s.

The makers of Alien called this aesthetic-of-the-derelict “truckers in space,” which is fun but fails to capture the postindustrial criticism embodied in the Nostromo. Within the ship—a floating platform without a discernible bow or stern, akin to an oil rig—there are enormous spaces that look more like blast furnaces gone cold than the inside of a spaceship: a place of rusted metal, loose chains, forgotten pieces of machinery, of water falling from the ceiling and dripping to the floor to collect in stagnant pools. The ship’s crew bicker over pay and overtime; they follow company orders only begrudgingly. They are a very different, far more diverse group than the clearly white-collar crew of the Discovery. Inside the Nostromo, the threat does not come in the shape of a super-rational computer, a Pinocchio who wants to be a real boy. Instead, the danger is a wild animal lurking in the shadows, one that is unimaginably vicious. “The perfect organism,” Ash, the science officer, calls it, because it can survive anything. This? You ask yourself. This is evolution brought to perfection? A demon from Hell who is essentially indestructible, with acid for blood and two separate rows of fangs? What happened to the space baby? But there is a sick logic in calling the alien perfect. It has an unimpeachable record of wins to losses, and when all the world has become a contest, winners with perfect records are perfect.

And where, in all of this, is Mother? If the alien were set loose on HAL’s watch, he would probably neutralize it all on his own, automatically, as it were. Mother, on the other hand, spends the whole movie like a fated southern belle hooked on laudanum, locked in her room. She can’t even advise on how to defeat the monster. The computer cannot help. No costly investment in heavy capital will keep nature at bay. This was a lesson people were learning in 1979, by way of pink slips and foreclosures and sad car rides down the main drags of shuttered, lonely ghost towns where once factories had stood with thriving communities around them.

Fast forward more than thirty years, and neither vision seems entirely accurate. While an individual computer is vastly more powerful than the machines of the 1970s, it's their ubiquity, portability, and ever-connected nature that are reshaping the world.

Meanwhile, space travel is perhaps not as far along as imagined in the movies, but efforts from private-sector billionaires like Elon Musk and Jeff Bezos have sustained (renewed?) interest and research in space travel and exploration (and, if you want to extrapolate that line out one more interval, colonization).

Off the top of my head, I can't think of a sci-fi movie that captures humanity's current relationship to computers, the internet, and outer space. Even if you drop space exploration from the requirements, I'm not sure which movie I'd put in such a lineage. I liked The Social Network, but it's preoccupied with the founders' personal relationships more than with the technology itself.

Interstellar is the most obvious recent choice: it has a failing Earth (perhaps our most popular dystopian nightmare), space travel driven by the private sector, and a candy-bar-shaped robot named TARS who helps out our protagonists (human-computer cooperation). However, it glosses over the chasm between where we are today, in the midst of the third industrial revolution, and a time when AI enables robot companions like TARS, ones we can interact with via natural voice commands. Its concluding message about love as a mechanism for transcending space and time is also an abrupt deus ex machina that abandons what is, until then, a very pro-science story.

I'd probably select The Matrix over either of those, and I'd throw Her and WALL-E into the mix, but none of them feels exactly right. All of them capture what will continue to be an increasing emphasis on living in a virtual or digital world of information in place of the physical world. The Matrix captures some of the superintelligent-AI fears that have gained traction in the past year. Her has a very different take on the complexities we'll confront when we first achieve a convincingly human AI. Most notably, the economics of companionship and love change if they become abundant rather than scarce goods through digitization, but how much do we value such love, companionship, and sex because of their scarcity?

Since I mentioned Interstellar earlier, Inception is worth bringing up as well. Instead of exploring outer space, it explores what might happen if we make great leaps into inner space instead. Human consciousness becomes the frontier. Like many other sci-fi movies, though, Inception is quite attached to the physical world. Cobb and his team break into Robert Fischer's mind, but only in the hopes of breaking up a company in the physical or “real” world. Cobb's wife makes a fatal error when she chooses the virtual world over the “real” world (or confuses the two; the consequences are the same), and Cobb's redemption arc depends on his rejecting the now-virtual ghost of his wife to return to his children in the real world, like Orpheus journeying out of Hades, where he'd gone to retrieve his dead wife Eurydice. The still-spinning top at the end leaves the audience in suspense over whether Cobb has indeed reunited with his kids in the physical world, but most audiences read the ending as endorsing the real world as the one of higher value; otherwise, the reunion would feel false in some way.

I'm still waiting for the movie that takes the development of virtual reality to its logical conclusion: the complete abandonment of physical reality. Almost all sci-fi movies default to the primacy of the physical world and its concerns, perhaps because that feels like the most humanist position. This is the public's knee-jerk response to technology: deep misgivings whenever it conflicts with the concerns of the flesh. People who spend time with their faces buried in their cell phones are seen as rude, socially inept, uncivilized, and, at some fundamental level, inhumane (inhuman?).

But what if those people are just opting out of the numerous inconveniences and shadow prices of the physical world and choosing to engage with the most efficient delivery mechanism for mental stimulation? The issues seem particularly timely given all the activity around virtual and augmented reality. The technology is still relatively crude and a ways off from achieving what we generally mean by “reality”—that is, a simulation that instills absolute belief in the mind of the observer—but it seems within the time horizon ripe for exploration by sci-fi (call it 25 to 50 years).

I'm not arguing that virtual or augmented reality are superior to real life, but stigmatizing the technology by default means we won't explore its dark side with any rigor. It's the main issue I had with Black Mirror, the acclaimed TV anthology about the dark side of technology. The show is clever, and the writers clearly understand technology with a depth that supports more involved plotlines. However, the show, like many technology critiques, only goes as far as envisioning the first and often most obvious downside scenarios, as in the third episode of the first season, generally the most acclaimed and beloved of the episodes produced to date. (It's not my favorite; that scenario has been recounted again and again when it comes to total-recall technology, so I was dismayed it's the one being picked up to be made into an American feature film.)

Far more difficult, but for that very reason much more interesting, would be to explore the next level: how humans might evolve to cope with these obvious problems. I'm not a fan of an avant-garde that defines itself solely by what it's against rather than what it's for. Venkatesh Rao dissects the show in one of the better critiques of the program:

According to the show’s logic, all choices created by technology are by definition degrading ones, and we only get to choose how exactly we will degrade ourselves (or more precisely, which of our existing, but cosmetically cloaked degradations we will stop being in denial about).

This is where, despite a pretty solid concept and excellent production, the show ultimately fails to deliver. Because it is equally possible to view seeming “degradation” of the priceless aspects of being human as increasing ability to give up anthropocentric conceits and grow the hell up.

This is why the choice to do a humorless show is significant, given the theme. Technology motivated humor begins with human “baseness” as a given and humans being worthwhile anyway. The goal of such humor becomes chipping away at anthropocentrism, in the form of our varied pretended dignities (the exception is identity humor, which I dislike).

It's problematic that as soon as you understand the premise and wayward trajectory of each episode, you know it's going to drive itself right off that cliff. Going one level deeper, from stasis to problem and then to solution, would likely require longer episodes, and perhaps that was a harder sell for network television. It might not be a coincidence that the Christmas special with Jon Hamm was longer than the in-season episodes and was the strongest installment to date.

Coincidentally, given my discussion of the need for a sci-fi movie that examines the implications of virtual reality, [MILD SPOILER ALERT ABOUT THE XMAS SPECIAL] the Christmas special focuses on literal de-corporealization and its impact. The tip of a thread of something profound about humanity peeks out there; I hope some of our sci-fi writers and directors tug on it.

Bayes's Theorem

This is from 2012 but is still a great overview of Bayes's Theorem, which really doesn't age.

Bayes’s theorem wasn’t actually formulated by Thomas Bayes. Instead it was developed by the French mathematician and astronomer Pierre-Simon Laplace. 

Laplace believed in scientific determinism — given the location of every particle in the universe and enough computing power, we could predict the universe perfectly. However, it was the disconnect between the perfection of nature and our human imperfections in measuring and understanding it that led to Laplace’s involvement in a theory based on probabilism.

Laplace was frustrated at the time by astronomical observations that appeared to show anomalies in the orbits of Jupiter and Saturn — they seemed to predict that Jupiter would crash into the sun while Saturn would drift off into outer space. These predictions were, of course, quite wrong, and Laplace devoted much of his life to developing much more accurate measurements of these planets’ orbits. The improvements that Laplace made relied on probabilistic inferences in lieu of exacting measurements, since instruments like the telescope were still very crude at the time. Laplace came to view probability as a waypoint between ignorance and knowledge. It seemed obvious to him that a more thorough understanding of probability was essential to scientific progress.

The Bayesian approach to probability is simple: take the odds of something happening, and adjust for new information. This, of course, is most useful in cases where you have strong prior knowledge. If your initial probability is off, the Bayesian approach is much less helpful.

Includes a link to Eliezer Yudkowsky's intuitive explanation of the theorem and this Quora response to the question “What does it mean when a girl smiles at you every time she sees you?”, both of which are excellent.

A Bayesian approach to life is a sensible one, but the human mind isn't optimized to apply the theory accurately except at the broadest of levels (most people's intuition is way off when it comes to the mammogram example used in both the overview and the Yudkowsky piece linked above). This can be particularly problematic when it comes to our judgments of other people; we overweight new information without considering the prior odds. This is exacerbated by the internet, where we are prone to judge others on the select few pieces of content they choose to post for public consumption.
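To see just how far off intuition tends to be, here's a quick sketch of that mammogram calculation in Python, using the illustrative figures from Yudkowsky's piece (a 1% base rate, an 80% true-positive rate, and a 9.6% false-positive rate):

```python
def posterior(prior, p_evidence_given_true, p_evidence_given_false):
    """Bayes's theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_evidence_given_true * prior
    denominator = numerator + p_evidence_given_false * (1 - prior)
    return numerator / denominator

p = posterior(
    prior=0.01,                    # 1% of women screened have breast cancer
    p_evidence_given_true=0.80,    # 80% of those get a positive mammogram
    p_evidence_given_false=0.096,  # 9.6% of those without cancer also test positive
)
print(f"P(cancer | positive mammogram) = {p:.1%}")  # ~7.8%, far lower than most people guess
```

The prior does most of the work here: because actual cases are rare to begin with, even a fairly accurate test produces mostly false positives, which is exactly the step our intuition skips.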