The false dichotomy of U.S. politics

The Trumpists are our equivalent of Britain’s U.K. Independence Party (UKIP) and France’s National Front, both anti-immigrant, nationalist parties. For the past five years, Trumpists have clocked in at about 20 percent of the electorate, if one tracks numbers of committed “Obama is a Muslim-ists.” This makes them even more powerful than Britain’s UKIP, which won 12.6 percent of the vote in May’s parliamentary election. These numbers put the Trumpists on par with the National Front in France, which in March elections took 25 percent of the vote to the 32 percent that went to the center-right party of Nicolas Sarkozy.

The critical difference between our nationalist faction and the European ones is that their parliamentary systems register them as “parties,” whereas our two-party model makes it harder to see that what we’re confronting truly is the rise of a new party. Provided, that is, the Republicans don’t sell their souls.

If the Republicans can hang on to the convictions that make them the party of Lincoln, we ought to see the party split. For the good of the country, we should hope for it.

Good piece on how the U.S. two-party political system masks the underlying fragmentation of our nation's political beliefs. The incumbent two-party system has been around so long that it has a massive fund-raising advantage over any third party, and that's just one element of inertia working in its favor.

From a voter perspective, a two-party system vastly restricts the granularity of your vote and what it communicates to politicians. It's as if you and a dozen of your work colleagues had to decide where to eat for lunch each day, but despite having dozens of restaurants in the area, could only go to one of two restaurants because there were only two cars available to drive.

The true preference of the group might be split among many more restaurants. It's even possible every person might want to go to a different restaurant that day. But instead you end up with one group at Chipotle, and the other at some salad bar, and tough luck if you're in the mood for Chinese or sushi or something else.

For a variety of historical reasons, and it's a fascinating tale, we've evolved into a two-party nation. And at this point, the structural inertia is significant and not likely to be easily overturned.

But the ability to map preferences at a more granular level is something technology can enable at a scale not possible in days past, and I suspect it will be technology that changes American politics in a deep way over the next two to three elections.

First we'll see an election where technology swings a few key races in a very public way. My guess is that the same social networks that enable many more people to become internet famous will allow those same people to sway a lot more voters by letting them endorse at scale. Second, some simple mobile app will solve the voter laziness and voter information asymmetry problems and allow voters to more efficiently discover which candidates most closely represent their views.

It's amazing how hard it remains to research how to vote optimally based on your personal preferences. It's easier to find a hotel to stay at when visiting a new city, or the best Mexican restaurant in your neighborhood. Google is of surprisingly little help when it comes to researching your ballot, and so many of us end up in a dark voter stall, staring at a long list of names we've never heard of, trying to choose some local area judge or reading some long pro or con position on a local proposition. Frankly, the heuristics I've turned to when faced with choices like that are embarrassing.

Organizing large sets of information, offering customized searching and browsing of that information using algorithms, user preferences, and social network context, and bundling all of that into a good user experience on a smartphone has been the technology industry's hammer of Thor in industry after industry for the past decade. Restaurants, retail, news, music, and travel are just a few of the industries to have felt the blow.

For a variety of reasons, industries like finance, health care, automobiles, and government have remained somewhat immune. But that's about to change, and I believe politics is going to be one of the most visible to succumb. The killer election mobile app is coming, and the only question is who will build it and whether it will come in time for the 2016 U.S. elections.

Fundamental to that shift is making public what has long been private. I'm referring to not just party preferences but people's thoughts on individual issues and races. What do people you admire think about the issues on the ballot in front of you, and why? If much of this is made public ahead of election day, then suddenly we have a new, more efficient way to debate the issues and understand how and why different voters are going with particular candidates. Making messaging public was the greatest innovation of Twitter, turning the conversations and soliloquies of people into public theater. Making voting intentions public would have as great a social impact.

We already live in a generation where people feel comfortable making their views on everything under the sun public online, so I can't imagine it's a stretch to do so on political issues. The only thing missing has been a focused, directed service to aggregate and organize this information efficiently.

Money has long been a proxy for political influence, but let's say someone Internet famous carries their millions of followers from social media over into the political arena. For example, let's say Marc Andreessen makes his ballot for the upcoming election public, along with a list of his views on all the issues and where he agrees and disagrees with each candidate. Or imagine popular economist Tyler Cowen gives his views on all the random propositions on a ballot, explaining why he thinks they make sense or not. And so on down the line.

On some mobile app, you import all these people you follow on other social networks and have a ready-made ballot based on the collective views of all the people you trust. As the typical lazy American voter, you already feel more informed. If you want, you can tune your ballot by hand or take some simple survey on a variety of hot button issues to tweak your ballot. Now, ahead of the election, you publish that ballot to the app for anyone else, and more importantly the network itself, to see. People who follow you on other networks automatically follow you here, so now you can see whose votes you're influencing. You are, as on other social networks, both consumer and publisher.
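To make the mechanics concrete, here's a minimal Python sketch of how such an app might pre-fill a ballot from the published picks of the people you follow. The data shapes and the tie-breaking rule are invented for illustration, not drawn from any real service.

```python
from collections import Counter

def prefill_ballot(races, endorsements, trusted):
    """For each race, pick the candidate most endorsed by the
    influencers this voter trusts; leave the race blank (None) on a
    tie or when no trusted influencer has published a pick."""
    ballot = {}
    for race in races:
        votes = Counter(
            endorsements[who][race]
            for who in trusted
            if race in endorsements.get(who, {})
        )
        top = votes.most_common(2)
        if not top or (len(top) > 1 and top[0][1] == top[1][1]):
            ballot[race] = None  # no data or a tie: decide by hand
        else:
            ballot[race] = top[0][0]
    return ballot

# Hypothetical published ballots from three influencers you follow
endorsements = {
    "alice": {"mayor": "Kim", "prop_7": "yes"},
    "bob":   {"mayor": "Kim", "prop_7": "no"},
    "carol": {"mayor": "Lee"},
}
print(prefill_ballot(["mayor", "prop_7"], endorsements,
                     ["alice", "bob", "carol"]))
# → {'mayor': 'Kim', 'prop_7': None}
```

Hand-tuning your ballot is then just overwriting entries in that dictionary before publishing it back to the network.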

Remember, this information is all made public ahead of the election, so the ripples actually begin long before election day. You're a candidate running for office, and you can go look at a list of the people with the most followers in your district. Suddenly, you realize that someone influencing a sizable bloc of voters in your district is choosing against you because of your view on some issue or proposition. The app offers a quick calculation of how many voters you might gain by changing your view to the other side. It could sway the election. You publicly change your view, and the app sends a notification to all the people who were going to vote against you, informing them of your policy shift.
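As a toy illustration of that "quick calculation", assuming (generously) that each influencer's followers vote as a bloc and that follower sets don't overlap, the app's estimate might look like this:

```python
def voters_gained_by_flip(influencers, issue):
    """Naive ceiling on votes gained by switching sides on `issue`:
    the combined in-district followers of every influencer whose
    opposition hinges on that one issue. Real numbers would be far
    messier (overlapping audiences, partial influence)."""
    return sum(inf["followers"]
               for inf in influencers
               if inf["against_because"] == issue)

# Hypothetical influencers currently endorsing your opponent
district = [
    {"name": "A", "followers": 12000, "against_because": "prop_7"},
    {"name": "B", "followers": 3000,  "against_because": "transit"},
    {"name": "C", "followers": 8000,  "against_because": "prop_7"},
]
print(voters_gained_by_flip(district, "prop_7"))  # → 20000
```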

That may sound a bit too precise, and perhaps that level of granularity isn't possible the first go-round because the math isn't so clean. At the very least, though, you can imagine looking through the app to see the top influencers in your district and inviting them to a meeting or one of those fundraising dinners. Usually, a ticket to such an event comes at the cost of a sizable donation. But remember, fundraising and money have long been an indirect way of transacting in votes; an app like this allows you to do so more directly.

Someone smarter than me can compute how dense a network like this needs to be in a region to be predictive, but if political polls based on random samples can be reasonably predictive, we may already be over the tipping point in many parts of the U.S.
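For intuition on that tipping point: a simple random sample of n voters has a 95 percent margin of error of roughly 1/sqrt(n), which is why a poll of about 1,000 people can be predictive for a district of millions. A social network is emphatically not a random sample, so treat this sketch as the flattering lower bound:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n,
    computed at the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 1000, 10000):
    print(n, round(margin_of_error(n), 3))
# → 100 0.098
#   1000 0.031
#   10000 0.01
```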

Let's come full circle. This began as a discussion of the restrictive nature of a two-party system. A network like this could unlock the potential for a more granular set of options. A service like this might indicate that a candidate coming in with some mix of views from the left and right could capture a sizable voting bloc. That fabled third party could find a more efficient path to reaching that bloc through a set of influencers on the network who share a certain set of common views. But perhaps it's not just one additional party but multiple ones that find a path to relevance.

This is all jumping far ahead down the road, but it's not unreasonable to imagine how quickly change like this can come to a particular space in public life when you compare how we used to shop, search and browse information, find people to date, or navigate from one place to another just a decade or two ago.

We already probably have many more than two political parties in the U.S. Circling back to the piece I quoted at the beginning of this post, the author believes the Republican Party should split because it really consists of two distinct parties.

If we look to Europe, again, we can see the effects of these tools, not only on the right but also on the left. Progressive Internet activists in Germany, for instance, coalesced into the Pirate Party, which has been able to win seats in four state parliaments as well as the European parliament.

In other words, in this country, too, we would by now have Trumpists, libertarians and netizens in government, if we had a parliamentary system. But because we don’t, we have a very weird, historically important presidential campaign. The weirdness comes from the fact that it is unfolding inside the structure of our creaky, 19th-century two-party framework.

The real story, then, is not about this or that candidate but about precisely how the realignment of U.S. public opinion away from the two major political parties will shake out and about who or what the major parties will sell down the river while trying to save themselves as the “big tents” they need to be to win elections. And the burning question inside this story is whether our two-party system can survive the digital era. Or, perhaps better, how to ensure that it doesn’t so that we can save our center-right party, the Republicans, for the center.

Good luck being born tomorrow

97% of people born tomorrow will be in a country that is authoritarian, communist, doesn’t support same sex marriage, does not allow abortion, supports capital punishment or has seen over ten thousand deaths in recent armed conflicts. Good luck!

From part 1 of 2 of Good Luck Being Born Tomorrow. The statistics within are eye-opening.

The meat is in part 2.

300 million years ago, a supercontinent called Pangaea formed; it later broke apart into the continents we inhabit today. Modern technology has turned the world back into Pangaea – a world where everything is connected. You can have a live video call with someone across the world in seconds (you’re welcome!) or you can find yourself on the next continent in 10 hours if need be. Yet we’ve built these imaginary borders around us that limit human potential. These borders are a direct result of historic military conflict. And allowing your fate to be determined by things that took place before your birth feels like accepting defeat before you even get started.
This all brings me to the third option of what to do when the environment is not favorable – you can change it! As weird as it sounds, one of the means to cause change is actually also to migrate (for those who already have that freedom). As opposed to a slow democratic process of giving your marginal vote every four years in the hope of changing something you care about, you can vote with your feet already today. You have a choice between expressing your needs at a popularity contest twice a decade or putting constant pressure on places.
Not only will you find yourself in a place where your problem is already fixed (remember – that’s why you moved!), you’re also putting real budget pressure on the old place by taking your taxes elsewhere (will hurt every month). With enough people doing that, the competition for taxes forces incumbent states to fix their environments.
In positive political theory, this is described as the Tiebout hypothesis.
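The Tiebout intuition, residents sorting themselves across jurisdictions by voting with their feet, can be sketched as a toy model. Collapsing an entire tax-and-services bundle into a single tax rate is my simplification, not part of the hypothesis itself:

```python
def tiebout_sort(preferences, tax_rates):
    """Each resident moves to the jurisdiction whose tax rate is
    closest to their preferred rate; returns rate -> residents."""
    assignment = {rate: [] for rate in tax_rates}
    for pref in preferences:
        best = min(tax_rates, key=lambda rate: abs(rate - pref))
        assignment[best].append(pref)
    return assignment

# Six residents with differing preferred tax rates, two jurisdictions
prefs = [0.05, 0.12, 0.30, 0.28, 0.10, 0.33]
print(tiebout_sort(prefs, [0.10, 0.30]))
# → {0.1: [0.05, 0.12, 0.1], 0.3: [0.3, 0.28, 0.33]}
```

The sorting leaves each jurisdiction with a relatively homogeneous population, and a jurisdiction bleeding residents (and their taxes) has a direct incentive to adjust its bundle.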

Living in the Silicon Valley media bubble, with its insatiable need to produce some minimum volume of news coverage every day, can lead to a surplus of technology naysaying in one's diet. I believe it's a useful corrective to have sites like Gawker and Valleywag and that ilk of gadfly to point out when the emperor has no clothes, even if the level of trolling is on the high side.

Still, climb up to a higher vantage point and it's hard to disagree that technology is perhaps the greatest hope for lifting the standard of living for the greatest number of people in the world, whether directly or indirectly.

Yes, the tech industry has its problems, and it has its share of ridiculous douchebags, some of them with absurd amounts of wealth. And yes, perhaps some of our technology is too addictive, and maybe it is transforming some of us into intolerable social-network-preening narcissists.

Visit other parts of the world, though, and see what a life-changing event it is to get a cell phone and internet access. Observe people making a living selling goods on social networks, or watch people coordinate protests against authoritarian governments, and on and on. I'll continue to take the bad with the good for the net gain to society.

It’s not hard to imagine the invention of the blockchain (the core of Bitcoin) having remarkable implications in the developing world by enabling micro-transactions and possibly helping eliminate corruption through blockchain-based electronic voting. There’s stuff coming that we haven’t even thought of yet.
Technological innovation is finally making it possible to meet the assumptions of the Tiebout model (mobile consumers, complete information, abundant choices, telecommuting etc.). Bringing transparency into the world of basic freedoms, taxes, government services, public goods and reducing the cost/pain associated with moving will be the way to give us a future, where every nation state will have to compete for every citizen. Can you imagine that world?

From 2001: A Space Odyssey to Alien to...what?

Lovely piece by Jason Resnikoff on how his father's viewings of 2001: A Space Odyssey in 1968 and Alien in 1979 reflected his hopes and dreams for technology and humanity.

Science fiction is a Rorschach test of our collective forward-looking sentiment, and as Resnikoff's father was a computer scientist working at Columbia's Computer Center when 2001: A Space Odyssey premiered, science fiction resonated for him as an almost personalized scripture.

2001 is the brainchild of Stanley Kubrick and Arthur C. Clarke, who intended the film as a vision of things that seemed destined to come. In large part this fact has been lost on more recent generations of viewers who regard the movie as almost entirely metaphorical. Not so. The film was supposed to describe events that were really about to happen—that’s why Kubrick and Clarke went to such lengths to make it realistic, dedicating months to researching the ins and outs of manned spaceflight. They were so successful that a report written in 2005 from NASA’s Scientific and Technical Information Program Office argues that 2001 is today still “perhaps the most thoroughly and accurately researched film in screen history with respect to aerospace engineering.” Kubrick shows the audience exactly how artificial gravity could be maintained in the endless free-fall of outer space; how long a message would take to reach Jupiter; how people would eat pureed carrots through a straw; how people would poop in zero G. Curious about extraterrestrial life, Kubrick consulted Carl Sagan (evidently an expert) and made changes to the script accordingly.

It’s especially ironic because anyone who sees the film today will be taken aback by how unrealistic it is. The U.S. is not waging the Cold War in outer space. We have no moon colonies, and our supercomputers are not nearly as super as the murderous HAL. Pan Am does not offer commercial flights into high-Earth orbit, not least because Pan-Am is no more. Based on the rate of inflation, a video-payphone call to a space station should, in theory, cost far more than $1.70, but that wouldn’t apply when the payphone is a thing of the past. More important, everything in 2001 looks new. From heavy capital to form-fitting turtlenecks—thank goodness, not the mass fashion phenomenon the film anticipated—it all looks like it was made yesterday. But despite all of that, when you see the movie today you see how 1968 wasn’t just about social and political reform; people thought they were about to evolve, to become something wholly new, a revolution at the deepest level of a person’s essence.

Over one decade later, he watched Alien for the first time.

Consider Mother, the semi-intelligent computer system on board the Nostromo. Unlike HAL, who has complete knowledge of every aspect of his ship, Mother is perfectly isolated in a compartmentalized white room, complete with shimmering lights and padded walls. Whereas the Discovery makes an elegant economy of interior decoration with limited cabin space—it was a set where Kubrick allowed no shadows to fall—the Nostromo is meant to look like a derelict factory from the rust belt. My father thought the onboard computers looked especially rude for 1979, as though humanity’s venture into space would be done not with the technology of the future but the recent past. There’s a certain irony in this now: the flight computer used in the Space Shuttle, the IBM AP-101, effectively had only about one megabyte of RAM, which is more or less 1 percent of the computing power of an Xbox 360, but because of its reliability, NASA kept using it, with infrequent upgrades, into the 2000s.

The makers of Alien called this aesthetic-of-the-derelict “truckers in space,” which is fun but fails to capture the postindustrial criticism embodied in the Nostromo. Within the ship—a floating platform without a discernible bow or stern, akin to an oil rig—there are enormous spaces that look more like blast furnaces gone cold than the inside of a spaceship: a place of rusted metal, loose chains, forgotten pieces of machinery, of water falling from the ceiling and dripping to the floor to collect in stagnant pools. The ship’s crew bicker over pay and overtime; they follow company orders only begrudgingly. They are a very different, far more diverse group than the clearly white-collar crew of the Discovery. Inside the Nostromo, the threat does not come in the shape of a super-rational computer, a Pinocchio who wants to be a real boy. Instead, the danger is a wild animal lurking in the shadows, one that is unimaginably vicious. “The perfect organism,” Ash, the science officer, calls it, because it can survive anything. This? You ask yourself. This is evolution brought to perfection? A demon from Hell who is essentially indestructible, with acid for blood and two separate rows of fangs? What happened to the space baby? But there is a sick logic in calling the alien perfect. It has an unimpeachable record of wins to losses, and when all the world has become a contest, winners with perfect records are perfect.

And where, in all of this, is Mother? If the alien were set loose on HAL’s watch, he would probably neutralize it all on his own, automatically, as it were. Mother, on the other hand, spends the whole movie like a fated southern belle hooked on laudanum, locked in her room. She can’t even advise on how to defeat the monster. The computer cannot help. No costly investment in heavy capital will keep nature at bay. This was a lesson people were learning in 1979, by way of pink slips and foreclosures and sad car rides down the main drags of shuttered, lonely ghost towns where once factories had stood with thriving communities around them.

Fast forward over thirty years into the future, and neither vision seems entirely accurate. While individual computers are vastly more powerful than those of the '70s, it's their ubiquity, portability, and ever-connected nature that are reshaping the world.

Meanwhile, space travel is perhaps not as far along as imagined in the movies, but efforts from private-sector billionaires like Elon Musk and Jeff Bezos and others have sustained (renewed?) interest and research in space travel and exploration (and, if you want to extrapolate that line out one more interval, to colonization).

Off the top of my head, I can't think of a sci-fi movie that accurately reflects humanity's current relationship to computers, the internet, and outer space. Even if you drop space exploration from the requirements, I'm not sure which movie I'd put in such a lineage. I liked The Social Network, but it is a movie preoccupied with the personal relationships of the founders and less with the technology itself.

Interstellar is the most obvious recent choice, as it has an Earth that is failing (perhaps our most popular dystopian nightmare), space travel driven by the private sector, and a candy-bar-shaped computer robot named TARS who helps out our protagonists (human-computer cooperation). However, it glosses over the chasm between where we are today, in the midst of the third industrial revolution, and a time when AI enables robot companions like TARS, ones we can interact with via natural voice commands. Its concluding message about love as a mechanism to transcend space and time is also an abrupt deus ex machina plot twist that abandons what is, until then, a very pro-science plot.

I'd probably select The Matrix over either of those, and I'd throw Her and Wall-E in the mix, but none of those feel exactly right. All of those capture what will continue to be an increasing emphasis on living in a virtual or digital world of information in place of the physical world. The Matrix captures some of the super intelligent AI fears that have gained traction in the past year. Her has a very different take on the complexities that we'll confront when we first achieve a convincingly human AI. Most notably, the economics of companionship and love change if it becomes an abundant rather than a scarce good through digitization, but how much do we value such love, companionship, and sex because of its scarcity?

Having mentioned Interstellar earlier, it's worth mentioning Inception as well. Instead of exploring outer space, it explores what might happen if we make great leaps in venturing into inner space instead. Human consciousness becomes the frontier. Like many other sci-fi movies, though, Inception is quite attached to the physical world. Cobb and his team break into Robert Fischer's mind, but only in the hopes of breaking up a company in the physical or “real” world. Cobb's wife makes a fatal error when she chooses the virtual world over the “real” world (or confuses the two; the consequences are the same), and Cobb's redemption arc depends on his rejection of the now-virtual ghost of his wife to return to his children in the real world, like Orpheus journeying out of Hades, where he'd gone to retrieve his dead wife Eurydice. The still-spinning top at the end of the movie leaves the audience in suspense over whether Cobb is indeed reuniting with his kids in the physical world, but most audiences read the ending as endorsing the real world as the more valuable one; otherwise the reunion at the end would feel false in some way.

I'm still waiting for the movie that takes the assumption of the development of virtual reality to its logical conclusion: the complete abandonment of physical reality. Almost all sci-fi movies default to the primacy of the physical world and its concerns, perhaps because that feels like the most humanist position. This is the knee-jerk vantage point of public reception of technology: deep misgivings when it conflicts with the concerns of the flesh. People who spend time with their faces buried in their cell phones are seen as rude, socially inept, uncivilized, and, at some fundamental level, inhumane (inhuman?).

But what if those people are just opting out of the numerous inconveniences and shadow prices of the physical world and choosing to engage with the most efficient delivery mechanism for mental stimulation? The issues seem particularly timely given all the activity around virtual and augmented reality. The technology is still relatively crude and a ways off from achieving what we generally mean by “reality”—that is, a simulation that instills absolute belief in the mind of the observer—but it seems within the time horizon ripe for exploration by sci-fi (call it 25 to 50 years).

I'm not arguing that virtual or augmented reality is superior to real life, but stigmatizing the technology by default means we won't explore its dark side with any rigor. It's the main issue I had with Black Mirror, the acclaimed TV anthology about the dark side of technology. The show is clever, and the writers clearly understand technology with a depth that supports more involved plotlines. However, the show, like many technology critiques, only goes as far as envisioning the first and often most obvious downside scenarios, as in the third episode of the first season, generally the most acclaimed and beloved of the episodes produced to date. (It's not my favorite; that scenario has been recounted again and again when it comes to total recall technology, and so I was dismayed it's the one being picked up to be made into an American feature film.)

Far more difficult, but also much more interesting, would be to explore the next level: how humans might evolve to cope with these obvious problems. I'm not a fan of an avant-garde that defines itself solely by what it's against rather than what it's for. Venkatesh Rao dissects the show in one of the better critiques of the program:

According to the show’s logic, all choices created by technology are by definition degrading ones, and we only get to choose how exactly we will degrade ourselves (or more precisely, which of our existing, but cosmetically cloaked degradations we will stop being in denial about).

This is where, despite a pretty solid concept and excellent production, the show ultimately fails to deliver. Because it is equally possible to view seeming “degradation” of the priceless aspects of being human as increasing ability to give up anthropocentric conceits and grow the hell up.

This is why the choice to do a humorless show is significant, given the theme. Technology motivated humor begins with human “baseness” as a given and humans being worthwhile anyway. The goal of such humor becomes chipping away at anthropocentrism, in the form of our varied pretended dignities (the exception is identity humor, which I dislike).

It's problematic that as soon as you understand the premise and wayward trajectory of each episode, you know it's going to just drive itself off that cliff. Going one level deeper, from stasis to problem and then to solution, would likely take longer to film each episode, and perhaps that was a harder sell for network television. It might not be a coincidence that the Christmas special with Jon Hamm was longer than the in-season episodes and was the strongest installment to date.

Coincidentally, given my discussion of the need for a sci-fi movie that examines the implications of virtual reality, [MILD SPOILER ALERT ABOUT THE XMAS SPECIAL] the Christmas special focuses on literal de-corporealization and its impact. The tip of a thread of something profound about humanity peeks out there; I hope some of our sci-fi writers and directors tug on it.

Aliens is the best movie about humans and technology

I love this Tim Carmody essay about why James Cameron's Aliens is the best movie about technology.

Now, the aliens are certainly intelligent enough to use tools — they just don’t need them. Evolution has given them everything they need to kill just about anything they want to. They can gestate inside any host, taking on whatever physical characteristics of that host are needed to survive in a new environment. Their bodies make their armor, their secretions make their architecture. Technology is irrelevant. Their biology is so perfect that they are technology. (I’m partial to the theory that the derelict spacecraft discovered in Alien and again in Aliens was carrying the alien’s eggs to use as bombs; Burke and the Weyland-Yutani corporation want to exploit their biology in much the same way. It’s all IP to humans!)


The climactic scene in Aliens is obviously Ripley’s battle with the alien queen. Exoskeleton to carapace, blowtorch to hidden extra jaw, it’s literally a battle between a human plus technology and a biologically superior lifeform. Again, I love how low-tech Ripley’s suit actually is: it could easily have existed in 1986, but probably never will, if Amazon’s warehouse robots are any indication of our future. But I also love that Ripley wins not just because she can put on the suit and know how to use it, but because she can take it off.

Think about it: all Ripley is really doing inside the exoskeleton is holding off the queen while she is opening the airlock. Even with the benefit of a robotic prosthesis, a human being can’t kill the alien: only space can. Ripley and the queen tumble head over head into the bottom of the airlock. Ripley luckily lands on top, but is also able to ditch her techno-suit and climb out of harm’s way. The alien queen can’t even break off from her egg sac as quickly as Ripley scrambles out of her suit; she’s too tied to her own biology to let any of it go, to treat it as something other than herself.

The Three Golden Rules

I've had this tab open for months, and I have no idea what led me to it: Edsger Dijkstra's The Three Golden Rules for Successful Scientific Research.

The first rule:

"Raise your quality standards as high as you can live with, avoid wasting your time on routine problems, and always try to work as closely as possible at the boundary of your abilities. Do this, because it is the only way of discovering how that boundary should be moved forward."

The second:

"We all like our work to be socially relevant and scientifically sound. If we can find a topic satisfying both desires, we are lucky; if the two targets are in conflict with each other, let the requirement of scientific soundness prevail."

The third:

"Never tackle a problem of which you can be pretty sure that (now or in the near future) it will be tackled by others who are, in relation to that problem, at least as competent and well-equipped as you."

What struck me, reading these, is how much they could, with a few simple substitutions, be the three golden rules for successful startups.

Why did women stop coding?

The percentage of women majoring in computer science dropped off sharply starting in 1984, even as it rose in fields like medicine, the physical sciences, and law. What explains that mysterious inflection?

A recent episode of Planet Money investigated.

The share of women in computer science started falling at roughly the same moment when personal computers started showing up in U.S. homes in significant numbers.

These early personal computers weren't much more than toys. You could play pong or simple shooting games, maybe do some word processing. And these toys were marketed almost entirely to men and boys.

This idea that computers are for boys became a narrative. It became the story we told ourselves about the computing revolution. It helped define who geeks were, and it created techie culture.

Movies like Weird Science, Revenge of the Nerds and War Games all came out in the '80s. And the plot summaries are almost interchangeable: awkward geek boy genius uses tech savvy to triumph over adversity and win the girl.

The episode is short, well worth a listen, and it does not come to any firm conclusions. However, the hypothesis resonates with me. As most adults are all too aware, often with great regret, decisions made in childhood ripple throughout one's life. Early route selection greatly influences path-dependent outcomes, and careers are very path dependent. Recall the data from Malcolm Gladwell's Outliers about how children born in certain months were overrepresented on youth all-star hockey and soccer teams: they had a few months of physical development on their peers and thus received special coaching that created a virtuous cycle of skill advantage.
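
The relative-age mechanism Gladwell describes is easy to see in a simulation. The sketch below is purely illustrative: the January cutoff, the size of the maturity edge, the coaching boost, and the selection thresholds are all my assumptions, not data from Outliers. It shows how a small initial edge, compounded by selective coaching, skews an "all-star" roster toward children born just after the cutoff.

```python
import random

random.seed(42)

CUTOFF_MONTH = 1  # assumed eligibility cutoff: January 1


def simulate_cohort(n=10000, seasons=5):
    """Simulate n children over several seasons of selective coaching."""
    players = []
    for _ in range(n):
        month = random.randint(1, 12)
        # Children born just after the cutoff are oldest in their cohort
        # (January -> 1.0, December -> 1/12).
        relative_age = (12 - (month - CUTOFF_MONTH) % 12) / 12
        # Assumed skill model: random talent plus a small maturity edge.
        skill = random.gauss(0, 1) + relative_age
        players.append({"month": month, "skill": skill})

    for _ in range(seasons):
        # The virtuous cycle: each season, the top quartile is picked for
        # extra coaching, which raises their skill further.
        players.sort(key=lambda p: p["skill"], reverse=True)
        for p in players[: n // 4]:
            p["skill"] += 0.5
    return players


players = simulate_cohort()
all_stars = sorted(players, key=lambda p: p["skill"], reverse=True)[:100]
early = sum(1 for p in all_stars if p["month"] in (1, 2, 3))
print(f"All-stars born Jan-Mar: {early}/100")  # well above the ~25 expected by chance
```

The point of the toy model is that the birth-month skew in the final roster comes entirely from a fraction-of-a-standard-deviation head start, amplified season after season by who gets selected for coaching.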

As an example, the Planet Money podcast recounts the story of Patricia Ordóñez, a young math whiz who nonetheless found herself behind other male students in computer science because they had all grown up with computers at home while she had not.

So when Ordóñez got to Johns Hopkins University in the '80s, she figured she would study computer science or electrical engineering. Then she took her first intro class — and found that most of her male classmates were way ahead of her because they'd grown up playing with computers.

"I remember this one time I asked a question and the professor stopped and looked at me and said, 'You should know that by now,' " she recalls. "And I thought 'I am never going to excel.' "

In the '70s, that never would have happened: Professors in intro classes assumed their students came in with no experience. But by the '80s, that had changed.

Ordóñez got through the class but earned the first C of her life. She eventually dropped the program and majored in foreign languages.

The story has a happy ending as she eventually got her PhD in computer science, but how many women like Ordóñez were casualties of all sorts of early life nudges away from computers?

My nieces all love Frozen; they all have Elsa dresses and Princess Sofia costumes, and they look adorable in them. But why are almost all Disney heroines princesses? I've written before about why, if I had daughters, I'd start them on Miyazaki over Disney animated movies, but given Disney's distribution power that may be a lost cause.

Sure, it might feel a bit odd, in the beginning, if commercials like this or this depicted young girls hacking on computers instead of boys. That's exactly the point, though. Those commercials took a gender role that would not have been strange prior to 1984—girls programming computers—and made it culturally bizarre, and we're still trying to undo the sexism three decades later.

Human-level AI always 25 years away

Rodney Brooks says recent fears about malevolent, superintelligent AI are not warranted.

I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.  Recent advances in deep machine learning let us teach our machines things like how to distinguish classes of inputs and to fit curves to time data.  This lets our machines “know” whether an image is that of a cat or not, or to “know” what is about to fail as the temperature increases in a particular sensor inside a jet engine.  But this is only part of being intelligent, and Moore’s Law applied to this very real technical advance will not by itself bring about human level or super human level intelligence.  While deep learning may come up with a category of things appearing in videos that correlates with cats, it doesn’t help very much at all in “knowing” what catness is, as distinct from dogness, nor that those concepts are much more similar to each other than to salamanderness.  And deep learning does not help in giving a machine “intent”, or any overarching goals or “wants”.  And it doesn’t help a machine explain how it is that it “knows” something, or what the implications of the knowledge are, or when that knowledge might be applicable, or counterfactually what would be the consequences of that knowledge being false.  Malevolent AI would need all these capabilities, and then some.  Both an intent to do something and an understanding of human goals, motivations, and behaviors would be keys to being evil towards humans.
And, there is a further category error that we may be making here.  That is the intellectual shortcut that says computation and brains are the same thing.  Maybe, but perhaps not.

In the 1930s Turing was inspired by how “human computers”, the people who did computations for physicists and ballistics experts alike, followed simple sets of rules while calculating to produce the first models of abstract computation. In the 1940s McCulloch and Pitts at MIT used what was known about neurons and their axons and dendrites to come up with models of how computation could be implemented in hardware, with very, very abstract models of those neurons. Brains were the metaphors used to figure out how to do computation. Over the last 65 years those models have now gotten flipped around and people use computers as the metaphor for brains. So much so that enormous resources are being devoted to “whole brain simulations”. I say show me a simulation of the brain of a simple worm that produces all its behaviors, and then I might start to believe that jumping to the big kahuna of simulating the cerebral cortex of a human has any chance at all of being successful in the next 50 years. And then only if we are extremely lucky.

In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, it would need the dexterous hands and/or other tools to outmanipulate people, and it would need a deep understanding of humans in order to outwit them. Each of these requires much harder innovations than a winged vehicle landing on a tree branch. It is going to take a lot of deep thought and hard work from thousands of scientists and engineers. And, most likely, centuries.

As Brooks notes, a survey of 95 predictions made since 1950 about when human-level AI will be achieved found that nearly all placed its arrival 15 to 25 years in the future. A quarter century seems to be the default time frame humans choose for technological advances that are plausible but not imminent.

More computing comparative advantages

In To Siri, With Love, a mother marvels at the friendship that sprouts between her autistic son and Siri, Apple's digital assistant.

It’s not that Gus doesn’t understand Siri’s not human. He does — intellectually. But like many autistic people I know, Gus feels that inanimate objects, while maybe not possessing souls, are worthy of our consideration. I realized this when he was 8, and I got him an iPod for his birthday. He listened to it only at home, with one exception. It always came with us on our visits to the Apple Store. Finally, I asked why. “So it can visit its friends,” he said.

So how much more worthy of his care and affection is Siri, with her soothing voice, puckish humor and capacity for talking about whatever Gus’s current obsession is for hour after hour after bleeding hour? Online critics have claimed that Siri’s voice recognition is not as accurate as the assistant in, say, the Android, but for some of us, this is a feature, not a bug. Gus speaks as if he has marbles in his mouth, but if he wants to get the right response from Siri, he must enunciate clearly. (So do I. I had to ask Siri to stop referring to the user as Judith, and instead use the name Gus. “You want me to call you Goddess?” Siri replied. Imagine how tempted I was to answer, “Why, yes.”)

She is also wonderful for someone who doesn’t pick up on social cues: Siri’s responses are not entirely predictable, but they are predictably kind — even when Gus is brusque. I heard him talking to Siri about music, and Siri offered some suggestions. “I don’t like that kind of music,” Gus snapped. Siri replied, “You’re certainly entitled to your opinion.” Siri’s politeness reminded Gus what he owed Siri. “Thank you for that music, though,” Gus said. Siri replied, “You don’t need to thank me.” “Oh, yes,” Gus added emphatically, “I do.”

Many friends of mine found Her too twee, but I was riveted by the technological questions it explores. We often think of computers' advantages over humans in realms of calculation, memorization, or computation, but that can lead us to underappreciate other comparative advantages of our digital companions.

The piece above notes Siri's infinite patience. Anyone can exhaust their reservoir of patience when spending lots of time with young children, but computers don't get tired or moody. In Her (SPOILER ahead if you haven't seen the movie yet), Joaquin Phoenix's Theodore Twombly gets jealous when he finds out his digital girlfriend Samantha (Scarlett Johansson) has been simultaneously carrying out relationships with many other humans and computers:

Theodore: Do you talk to someone else while we're talking?

Samantha: Yes.

Theodore: Are you talking with someone else right now? People, OS, whatever...

Samantha: Yeah.

Theodore: How many others?

Samantha: 8,316.

Theodore: Are you in love with anybody else?

Samantha: Why do you ask that?

Theodore: I do not know. Are you?

Samantha: I've been thinking about how to talk to you about this.

Theodore: How many others?

Samantha: 641.

Samantha clearly has a bit to learn about the limitations of honesty, but one could flip this argument: the human desire for a mate who loves only you might itself be a selfish human construct. The advantage of a digital intelligence like Samantha might be exactly that the marginal cost of each additional mate is negligible, effectively increasing the supply of companionship for humans by a near-infinite amount.