Tower of Babel

It's been a long time since I've written, and I'm out of shape. Let's go long for this one, to make up for lost time.

This first real post of 2017 has to acknowledge the year that just concluded, which still lingers in the mind like an unwelcome houseguest who vomited on the carpet and is still passed out on the living room sofa the next morning.

To begin, let's pull out two responses to the 2017 edition of the Edge annual question, which asks "What scientific term or concept ought to be more widely known?"

The first response is from Eric Weinstein, who nominates Russell Conjugation, a term with which I was indeed unfamiliar, and one which plays into my weakness for fascinating concepts with cryptic names.

Russell Conjugation (or “emotive conjugation”) is a presently obscure construction from linguistics, psychology and rhetoric which demonstrates how our rational minds are shielded from understanding the junior role factual information generally plays relative to empathy in our formation of opinions. I frequently suggest it as perhaps the most important idea with which almost no one seems to be familiar, as it showed me just how easily my opinions could be manipulated without any need to falsify facts.
The basic principle of Russell Conjugation is that the human mind is constantly looking ahead well beyond what is true or false to ask “What is the social consequence of accepting the facts as they are?”  While this line of thinking is obviously self-serving, we are descended from social creatures who could not safely form opinions around pure facts so much as around how those facts are presented to us by those we ape, trust or fear. Thus, as listeners and readers our minds generally mirror the emotional state of the source, while in our roles as authoritative narrators presenting the facts, we maintain an arsenal of language to subliminally instruct our listeners and readers on how we expect them to color their perceptions. Russell discussed this by putting three such presentations of a common underlying fact in the form in which a verb is typically conjugated:
I am firm. [Positive empathy]
You are obstinate. [Neutral to mildly negative empathy]
He/She/It is pigheaded.  [Very negative empathy]
In all three cases, Russell was describing people who did not readily change their minds. Yet by putting these descriptions so close together and without further factual information to separate the individual cases, we were forced to confront the fact that most of us feel positively towards the steadfast narrator and negatively towards the pigheaded fool, all without any basis in fact.

The next concept in the recipe is coalitional instincts, nominated by John Tooby. Forming coalitions was one of the skills that elevated Homo sapiens to the top of the animal kingdom.

Every human—not excepting scientists—bears the whole stamp of the human condition. This includes evolved neural programs specialized for navigating the world of coalitions—teams, not groups. (Although the concept of coalitional instincts has emerged over recent decades, there is no mutually agreed term for this concept yet.) These programs enable us and induce us to form, maintain, join, support, recognize, defend, defect from, factionalize, exploit, resist, subordinate, distrust, dislike, oppose, and attack coalitions. Coalitions are sets of individuals interpreted by their members and/or by others as sharing a common abstract identity (including propensities to act as a unit, to defend joint interests, and to have shared mental states and other properties of a single human agent, such as status and prerogatives).  
Why do we see the world this way? Most species do not and cannot: Even those that have linear hierarchies do not: Among elephant seals, for example, an alpha can reproductively exclude other males, even though beta and gamma are physically capable of beating alpha—if only they could cognitively coordinate. The fitness payoff is enormous for solving the thorny array of cognitive and motivational computational problems inherent in acting in groups: Two can beat one, three can beat two, and so on, propelling an arms race of numbers, effective mobilization, coordination, and cohesion.

As with so many things in life, a source of strength is also the root of weakness. They can be a great people, Kal-El, but not if they keep ganging up on each other for no reason other than to feel the psychological comforts of being in an in-group.

This raises a problem for scientists: Coalition-mindedness makes everyone, including scientists, far stupider in coalitional collectivities than as individuals. Paradoxically, a political party united by supernatural beliefs can revise its beliefs about economics or climate without revisers being bad coalition members. But people whose coalitional membership is constituted by their shared adherence to “rational,” scientific propositions have a problem when—as is generally the case—new information arises which requires belief revision. To question or disagree with coalitional precepts, even for rational reasons, makes one a bad and immoral coalition member—at risk of losing job offers, her friends, and her cherished group identity. This freezes belief revision.  
Forming coalitions around scientific or factual questions is disastrous, because it pits our urge for scientific truth-seeking against the nearly insuperable human appetite to be a good coalition member. Once scientific propositions are moralized, the scientific process is wounded, often fatally.  No one is behaving either ethically or scientifically who does not make the best-case possible for rival theories with which one disagrees. 

You can see where I'm headed with all of this, especially if you're still staggering out of 2016 like a boxer regaining consciousness after being knocked out. "What happened?" you can read on every fighter's lips as they open their eyes and blink at the concerned faces looking down at them on the canvas. "What happened?!" asked a dazed populace after Election Day 2016.

Our next POTUS, whether wittingly or not, weaponized Russell Conjugation and our coalitional instincts and performed a jiu-jitsu toss on his opponents, leaving much of the country wondering whether winner-take-all elections for the Presidency are a good thing in a country so evenly divided.

It seems darkly appropriate, in this Internet age, that a troll is now occupying arguably the most powerful seat in the world. It's a miracle of the worst kind that someone can openly flout nearly every convention of human decency and civility, not to mention statecraft, and walk about untouched. Something's rotten in the state of Denmark alright, but no play is needed to catch the conscience of this king. It's remarkable how every Tweet of his sifts the same factions apart; one side screams in disgust and disbelief, the other piles on with glee. Let's call that technique of tweeting the Trump Conjugation. Sad!

Many claim the internet creates filter bubbles, but I believe the mechanism by which the Internet amplifies tribalism doesn't work the way most people describe it. The common explanation is that we form networks with like-minded people and only hear the opinions of those who agree with us, reinforcing our narrow world views.

My belief is that the Internet has increased our exposure to diverse viewpoints, including those from oppositional tribes. I suspect everyone who uses the Internet regularly encounters more diverse opinions, in absolute terms, than prior to the rise of the Internet, and there is research (PDF) to support this. Our information diet is more diverse now, and as opposed to the age before social media or even the Internet itself, we're exposed to more opinions that both strongly confirm AND counter our beliefs.

This shouldn't be so surprising or counterintuitive. Instant access to vast amounts of information is what the Internet does better than any medium in history.

In fact, ask many and they'll admit they yearn for the more peaceful age before they were made aware of how those in other tribes thought. The sliding scale of horror starts, relatively harmlessly, with a Facebook post from a friend you didn't realize was a staunch Republican, or an email forward from an uncle still subscribing to a casual racism acceptable in an earlier generation when his views hardened. Maybe it's a segment from Fox News which is, yes, only seen because it's excerpted on The Daily Show, but seen all the same. How can people think this way?

Move up one notch on the scale of horrors and you might find the vitriol in online comment threads attached to articles and op-eds, which you sometimes scan out of some misplaced optimism and which stun you with the sudden violence of a drive-by shooting. Pull on that thread of toxicity even further and you may end up encountering direct harassment on services like Twitter. In the pre-Internet age, I don't recall ever having such fine-grained resolution on the opinions of the opposition. What many call a filter bubble might just be psychic defensive shields.

The pre-Internet age actually felt much more like a filter bubble, one in which we had a comforting if illusory feeling of kinship with our fellow citizens. Many signal their cosmopolitanism by decrying life in the filter bubble, but what few of them admit is that life outside the filter bubble is a brutal wasteland (minus the poetic language of a Cormac McCarthy, who might be the only one to stare into the abyss of 4chan and find in there a new bogeyman to rival Anton Chigurh for pure nihilism).

[Many disagree and still hold resolutely to the thesis that the internet has cocooned us in filter bubbles, and I'm open to that argument if supporters bring data to the table rather than just their own anecdotal impressions. If there was ever a year that should have made us all suspicious of our feelings about what was happening, 2016 was it. Anecdotal journalism can find support for almost any thesis, and that is its fundamental weakness.]

What should terrify us, and what may be the real and deeper problem, is how we reacted to this explosion in diverse thought and information which the Internet unlocked. The Utopian dream was that we'd rethink our hypotheses in the face of all the new ideas and conduct rational debates like civilized adults. A more informed populace would be a wiser one.

Instead, we've regressed, forming teams and grabbing stones to hurl at each other like primates. Welcome to the jungle. 2016 felt like a Hobbesian anarchy of ideological warfare, and it's turned Twitter into a bar where drunken brawls break out every few minutes. It's a downward spiral of almost insufferable negativity from which Twitter may never recover, exacerbated by a 140-character limit, one side effect of which is that even reasonable people sound smug. The English language is capable of nuance, but not often in 140 characters, another reason Twitter's absolute refusal to update that outdated rule is so short-sighted.

In contemplating 2016, I went back and reread the myth of the Tower of Babel. Here's the story, as excerpted on Wikipedia:

1. Now the whole world had one language and a common speech. As people moved eastward, they found a plain in Shinar and settled there.
2. They said to each other, “Come, let’s make bricks and bake them thoroughly.” They used brick instead of stone, and tar for mortar.
3. And they said, “Come, let us build ourselves a city, and a tower whose top is in the heavens; let us make a name for ourselves, lest we be scattered abroad over the face of the whole earth.”
4. But the Lord came down to see the city and the tower which the sons of men had built.
5. And the Lord said, “Indeed the people are one and they all have one language, and this is what they begin to do; now nothing that they propose to do will be withheld from them.
6. Come, let Us go down and there confuse their language, that they may not understand one another’s speech.”
7. So the Lord scattered them abroad from there over the face of all the earth, and they ceased building the city.
8. Therefore its name is called Babel, because there the Lord confused the language of all the earth; and from there the Lord scattered them abroad over the face of all the earth.
— Genesis 11:1–9

So much in one short story, beginning with the recognition of the power of language to produce coordinated action, with which mankind would be capable of anything. As God phrased it, "nothing that they propose to do will be withheld from them." (It's not ironic but intentional that I pull this reference from Wikipedia, one of the modern exemplars of coordinated human action.) Language and money are among mankind's greatest creations, allowing for trust and coordinated action never before possible.

The Tower of Babel story then concludes with the division of mankind all over the Earth, a succinct metaphor for the rise of tribalism, with all its benefits and ills.

What's worth reconsidering is the causality outlined in the second half of the story. The story of Babel (the root of the word babble) claims that by giving humans different languages, God fractured humanity into rival tribes incapable of coordinating with one another.

What if the causality is mistaken? What if, even when we share the same language, we cannot, will not, understand each other? That's what 2016 felt like. Russell Conjugation might be a design flaw of the English language. Even among all of us who speak English, even when we're watching the same data, whether it's video or text or economic charts, we can't seem to agree. If differences in language are what divide us, translation is a solution. But if even a common language can't overcome our tribal instincts or our mood affiliation, the solution is not as clear.

Speaking of mood affiliation, Steven Pinker, in an interview with Tyler Cowen, wondered:

I’m hoping that naming and shaming and arguments will give free speech a greater foothold in academia. The fact that academia is not the only arena in which debates are held, that we also have think tanks and we also have a press. We also have the Internet.
How we could set up the rules so that despite all of the quirks of human nature — such as intellectual tribalism — are overcome in our collective arena of discourses is, I think, an absolutely vital question, and I just don’t know the answer because we’re seeing at the same time — there was the hope 20 years ago that the Internet would break down the institutional barriers to the best ideas emerging.
It hasn’t worked out that way so far because we have the festering of conspiracy theories and all kinds of kooky beliefs that somehow the Internet has not driven out, but if anything has created space for. How we as a broader culture can tilt the rules or the norms of the expectations so that if you believe something that’s false, eventually you’ll be embarrassed about it, I wish I knew. But that’s obviously what we ought to be striving for.

I wish I knew, too.

Plenty of people much smarter than I am fear Artificial Intelligence; I don't know where I fall in that debate. However, I'm firmly in the camp hoping cars learn to drive themselves, and that they'll far surpass humans in improving the safety of our roads. But if replacing human fallibility with AI is a path to social good in driving, why not elsewhere?

Humans are capable, at their peak, of being very rational thinkers. But what's concerning for the world is how rarely we operate at the limits of our potential, and in how many contexts we become irrational, or even complete idiots. AI can take many of the best aspects of human logic and scale them, make them reliable.

Some of the best CEOs I've encountered in my life, the Jeff Bezoses and Mark Zuckerbergs of the world, are capable of being rational a much higher percentage of the time than the average person. They seem far less susceptible to the usual cognitive biases. When I say someone thinks like a computer, many interpret that as an insult, whereas I see it as a supreme compliment. This is why most middle managers may someday be replaced by AI.

Much of our pop culture, especially Hollywood, venerates human emotion, despite its often crippling effect on our thinking. Nothing's wrong with that, but very few movies have the courage to follow rational thought to its extremes without softening it with some signs of humanity (read: frailty). The closest pop culture icons of rational thinking that leap to mind are Sherlock Holmes, House (which was based on Sherlock Holmes: Holmes → Homes → House), and Spock of Star Trek. Watch enough movies about them, however, and there is always a moment where each of these characters learns about the virtues of love and humanity from Watson or Captain Kirk or another of the less rational around them.

Why pull the punch? If Spock had to confront the trolley problem, he should by all accounts save the lives of the five over the life of the one, even if the one were his friend Kirk (I'm conveniently leaving out the option in which Spock takes Kirk's place on the tracks, because Hollywood would, of course, choose that route). We should applaud that, but we are human, so we shudder.

Which leaves Us versus Them. The darker side to coordinated human action. Us versus them is so powerful a force that confronting it can feel demoralizing. It is everywhere. Even an artificial construct like sports can set people against each other in ways that incite violence. This leaves us a challenge, one which is writ small in corporate environments. How do you turn zero sum games into positive sum games? Because if you can't, perhaps we're doomed to duke it out for eternity.
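The question of turning zero-sum games into positive-sum ones can be made concrete with a toy payoff matrix. A minimal sketch, with all payoff values invented for illustration:

```python
# Each cell holds (row_player_payoff, col_player_payoff) for one
# pair of actions in a 2x2 game.

def is_zero_sum(game):
    """True if every outcome's payoffs cancel out exactly."""
    return all(r + c == 0 for row in game for (r, c) in row)

# Pure conflict: one side's gain is exactly the other's loss.
turf_war = [[(1, -1), (-1, 1)],
            [(-1, 1), (1, -1)]]

# Trade adds surplus: when both sides cooperate, the total payoff
# exceeds anything available in the turf war.
trade = [[(3, 3), (0, 2)],
         [(2, 0), (1, 1)]]

print(is_zero_sum(turf_war))  # True
print(is_zero_sum(trade))     # False
```

In the second game the pie itself grows when both sides cooperate; converting games of the first kind into games of the second is one way of framing the challenge above.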

Two minor pop culture spoiler alerts here, for those who haven't read through The Dark Forest (book two of The Three-Body Problem trilogy, which I just finished today and which has left me giddy to dive into the concluding book) and Watchmen, by Alan Moore and Dave Gibbons (without thinking too deeply, it's likely my favorite graphic novel of all time). If you haven't read those two, this post ends here for you, though if you've read Watchmen and not The Dark Forest then perhaps you won't mind a minor spoiler.

The Dark Forest's entire story hinges on two axioms of cosmic sociology, described early in the book in a somewhat casual conversation:

“First: Survival is the primary need of civilization. Second: Civilization continuously grows and expands, but the total matter in the universe remains constant.”
To derive a basic picture of cosmic sociology from these two axioms, you need two other important concepts: chains of suspicion, and the technological explosion.

It's not often that a book gives you all the clues to decipher the plot up front, but one of this book's pleasures is that it does so in such an offhand way that when all becomes clear at the end, you revisit that earlier moment the way viewers scanned back through The Sixth Sense when the ring dropped to the floor at the end. It's relevant here because of the nature of zero sum games, and those with an interest in game theory will love The Dark Forest.

In Chapter XI of Watchmen (and here I'll say the spoiler is much larger, so if you haven't read it just stop here, do not pass Go, do not collect $200), Adrian Veidt reveals that at a similar moment of despair about human nature, he hatched a plan to harness the power of Us versus Them by turning all of the world into Us. To do so, he creates a fictional Them to unite the world, the famous Watchmen Monster.

It's one of the most mind-blowing mystery reveals in my fiction reading life, and I had a similar moment of delight at the end of The Dark Forest.

If you can't beat Us vs Them, just make a Them so daunting everyone joins Us (though taken out of context, yes, this looks less daunting than like a hippie Octopus tripping on LSD)

It may seem dire to turn to extreme science fiction plot twists for solutions to our current predicament, but given the quality of the stories cited, perhaps they deserve credit for seeming remotely plausible. If or when humanity evolves past these fundamental flaws in our design, centuries down the road, it may seem from this vantage point here at the start of 2017 as much like science fiction as an iPhone might to a caveman.

Sneaky feminism

Feminism has been sneaking around. Don’t believe me? A recent New York profile of TV host Katie Nolan hailed the “woman bringing a sneaky feminism to Fox sports.” A few days later, the New York Times went long on Amy Schumer’s boisterous feminism, which it characterized as her “sneaky power.” Like Broad City (another purveyor of “sneak-attack feminism”), Schumer’s work is something of a trysting spot for furtive sisterhood; last year in Slate Willa Paskin declared Inside Amy Schumer the “most sneakily feminist show on TV.”
Psst! Do you know what else is “sneakily feminist?” Showtime’s The Affair. Meanwhile the Hugh Dancy and Maggie Gyllenhaal flick Hysteria is “slyly feminist,” as is Pixar’s fable Inside Out (which, according to a separate review on Slate, accomplishes a “subtle but surprisingly feminist” swerve). Plus, the show Trophy Wife has bloomed, like some nocturnal desert flower, into “secretly one of the most feminist shows on TV.” Sundance chose the “top ten secretly feminist films” of all time (with Thelma and Louise at the mist-shrouded apex). Spy is “secretly a feminist attack on the patriarchy.” Not even academic books prove immune from such subtlety, secrecy, surprise: In a chapter on Ursula Le Guin’s invented folklore, scholar Jarold Ramsey notes that the “slyly feminist … appropriation of the mystique of ‘Old Man Coyote’ can be illustrated by the beginning of a Kesh myth about a war between bears and humans.”
Let’s read that myth! Once upon a time, a lady Coyote tried to dissuade the King of the Bears from attacking humankind. “We should all live in peace and love each other,” the Coyote pleaded, and “all the while she was talking,” Le Guin writes, “Coyote was stealing Bear’s balls, cutting them off with an obsidian knife she had stolen from the Doctors Lodge, a knife so sharp he never felt it cutting.”

Katy Waldman on that verbal tic of an adjective that must precede the word feminist or feminism. Anyone referring to Amy Schumer as sneakily feminist must be an extreme feminist indeed.

Small talk

Small talk falls on the other end of the continuum; it is speech that prioritizes social function. Think of this exchange: "How's it going?" "Oh, pretty good." There's not zero semantic content in there — presumably "pretty good" excludes "dying at this exact moment," so that's some information. But the primary function of those speech acts is social, not to say something but to do something, i.e., make contact, reaffirm shared membership in a common tribe (whatever it may be), express positive feelings (and thus lack of threat), show concern, and so forth. These are not unimportant things, not "small" at all, really, but they are different from communicating semantic content.
Small talk — particularly in its purest form, phatic communion — is a context in which language has an almost ritualistic quality. The communication of ideas or information is secondary, almost incidental; the speech is mainly meant to serve the purpose of social bonding. It asks and answers familiar questions, dwells on topics of reliable comity, and stresses fellow feeling rather than sources of disagreement.
This helps explain the ubiquity of sports in small talk, especially male small talk. Sporting events are a simulation of conflict with no serious consequences, yet they generate enormous amounts of specific information. They are a content generator for small talk, easing the work of communion.

David Roberts on why he finds small talk so excruciating.

There is friction anytime there is a mismatch between how two people use a communications medium (in this case, face-to-face conversation). It's strange to me when people use Twitter to post photos of their family, but that's largely because I use Facebook or Instagram for that.

My issue with small talk is its information scarcity. Brian Christian's brilliant book The Most Human Human helped me realize the critical role of conversational entropy in the human experience. Small talk rates very low on entropy, so not surprisingly, it's the type of conversation A.I. can most easily imitate.
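The entropy point can be sketched with Shannon's formula: a question whose replies cluster on a few stock answers carries little information, while one whose answers are unpredictable carries a lot. A toy illustration (both reply lists are invented):

```python
import math
from collections import Counter

def shannon_entropy(outcomes):
    """Entropy (in bits) of the empirical distribution over outcomes."""
    counts = Counter(outcomes)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical replies to "How's it going?" — small talk clusters on a
# few stock answers, so the distribution is predictable.
small_talk = ["pretty good"] * 6 + ["fine"] * 3 + ["not bad"]

# Hypothetical replies to a genuinely probing question spread across
# many distinct answers.
probing = ["moved to Lisbon", "quit my job", "rereading Borges",
           "training for a marathon", "learned to weld",
           "started therapy", "building a kayak", "fine",
           "adopted a greyhound", "wrote a play"]

# Small talk comes out around 1.3 bits; the probing replies, being
# nearly uniform over ten answers, come out around 3.3 bits.
print(shannon_entropy(small_talk) < shannon_entropy(probing))  # True
```

Which is exactly why a chatbot can fake "How's it going?" long before it can fake an interesting conversation: the low-entropy exchange is the easiest to predict.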

Still, in the ebb and flow of a conversation, chasing after too much entropy or novelty, pursuing an unbroken line of odd or probing questions and thoughts, can be its own faux pas. Your conversation partner may feel they're being assaulted. Managing that delicate balance, knowing when to push, when to pull back, that is the art of social grace and charisma.

Universal sign language

“Decide” is what is known as a telic verb—that is, it represents an action with a definite end. By contrast, atelic verbs such as “negotiate” or “think” denote actions of indefinite duration. The distinction is an important one for philosophers and linguists. The divide between event and process, between the actual and the potential, harks back to the kinesis and energeia of Aristotle’s metaphysics.
One question is whether the ability to distinguish them is hard-wired into the human brain. Academics such as Noam Chomsky, a linguist at the Massachusetts Institute of Technology, believe that humans are born with a linguistic framework onto which a mother tongue is built. Elizabeth Spelke, a psychologist up the road at Harvard, has gone further, arguing that humans inherently have a broader “core knowledge” made up of various cognitive and computational capabilities. 
In 2003 Ronnie Wilbur, of Purdue University, in Indiana, noticed that the signs for telic verbs in American Sign Language tended to employ sharp decelerations or changes in hand shape at some invisible boundary, while signs for atelic words often involved repetitive motions and an absence of such a boundary. Dr Wilbur believes that sign languages make grammatical that which is available from the physics and geometry of the world. “Those are your resources to make a language,” she says. As such, she went on to suggest that the pattern could probably be found in other sign languages as well.
Work by Brent Strickland, of the Jean Nicod Institute, in France, and his colleagues, just published in the Proceedings of the National Academy of Sciences, now suggests that it is. Dr Strickland has gone some way to showing that signs arise from a kind of universal visual grammar that signers are working to.

Fascinating. Humans associate language with intelligence to such a strong degree that I predict the critical moment in animal rights will come when a chimp or other primate takes the stand in an animal-testing court case and uses sign language to testify on its own behalf.

Reading the test methodology employed in the piece, I wonder if any designers out there have done any similar studies with gestures or icons. I'm not arguing a Chomskyist position here; I doubt humans are born with some basic touchscreen gestures or base icon key in their brain's config file. This is more about second-order or learned intuition.

Or perhaps we'll achieve great voice or 3D gesture interfaces (e.g. Microsoft Kinect) before we ever settle on any standards around gestures on flat touchscreens. If you believe, like Chomsky, that humans have some language skills (both verbal and gestural) hard-wired in the brain at birth, the most human (humane? humanist?) of interfaces would be one that doesn't involve any abstractions on touchscreens but instead relies on the software we're born with.

That moment when we played with syntax

In That Way We're All Writing Now, Clive Thompson investigates the rise of a form of writing I'll refer to as the stand-alone subordinate clause, often accompanied by an image, a sequential grid of images, an animated GIF, or a Vine.

One of the most popular forms is the “when you” subordinate clause:

Popular sub genres include “when your” and “when your ex,” but let your imagination roam and you will find yourself deep down many a rabbit hole.

Another popular form is “that moment when.” Here's one that functions also as a humblebrag, a more sophisticated instance of the form.

Searching Twitter for “that awkward moment when” is the new, lo-fi form of America's Funniest Home Videos, which is ironic since the form defies any highbrow incarnations.

Thompson offers a couple explanations for the rise of this type of expression online.

1) It creates a little puzzle.
Grammatically speaking, what’s going on here is the rise of the “subordinate clause.” A subordinate clause isn’t a sentence on its own. As the name implies, it requires another sentence fragment to complete it, as with this example that McCulloch and I looked at on Yik Yak:
Usually you can quickly deduce what the missing part would be. Maybe it’s something like You, sadly, always know what to do when she’s holding a dog on her Tinder and you’re like, “cute dog.” Or maybe the full sentence that emerges in your head is more convoluted, like Nothing is more bittersweet than reflecting on the challenges of dating someone who is superficially attractive but owns a pomeranian and thus, you worry, has all sorts of dog/partner priority issues, which you can instantly intuit when you’re using a dating app and see someone when she’s holding a dog on her Tinder and you’re like, “cute dog.”
The point is, it’s up to you imagine the rest of the utterance. It’s like the author is handing you a little puzzle. Subordinate-clause tweets and Yik-Yak postings seduce us into filling out that missing info, McCulloch says. “Our brain has to work a little bit harder to figure out what it’s referring to, and so making that connection is very satisfying. It’s like getting a joke. You have to draw that connection for yourself a little bit — but because you can do it, it works really well.”
A historic parallel? The crazy, long chapter headings in 19th-century novels, which often were also dependent clauses, inviting the reader to imagine the rest of the baroque narrative. “In Which Our Protagonist Meets A Dashing Stranger,” McCulloch jokes. “The ‘in which’ is doing a very similar thing.”

The ordering of the best of these subordinate clauses is critical; the punch line needs to come at the end. Like a joke, the setup comes in the first part of the clause so the closing can knock the pins down with maximum effect.

I'm more partial to Thompson's final explanation, which isn't a reason as much as it is an observation of how much we tinker with syntax online.

What’s happening now is different. Now we’re messing around with syntax — the structure of sentences, the order in which the various parts go and how they relate to one another. This stuff people are doing with the subordinate clause, it’s pretty sophisticated, and oddly deep. We’re not just inventing catchy new words. We’re mucking around with what makes a sentence a sentence.
“Playing with syntax seems to be the broad meta trend behind a whole bunch of stuff that’s going on these days,” McCulloch tells me. And it goes beyond this subordinate-clause trend. Many of the biggest recent language memes were about syntax experimentation, such as the “i’ve lost the ability to can” gambit (which I wrote about a few months ago), or the gnarly elocution of doge, or the “because” meme. (Indeed, Zimmer points out, the American Dialect Society proclaimed “Because” the Word of the Year for 2013, largely because it had been revitalized by this syntax play.)
Why would we be suddenly messing around with syntax? It’s not clear. McCulloch thinks it may be related to a larger trend she’s identified, which she calls “stylized verbal incoherence mirroring emotional incoherence”. Most of these syntax-morphing memes consist of us trying to find clever new ways to express our feelings.

I consider these a formal type of internet expression, just as haikus and sonnets are forms of poetry. Like movie genres, which are constraints within which artists can focus their creativity, this form of social network post has its own conventions within which everyone can exercise their wit. As soon as the reader's eye spots the opening “that awkward moment” or sees a grid of images with giant text overlaid, their mind is primed for the punch line.


Does retweeting your own praise make you a monster? I know what Betteridge has to say about headlines, but if by monster you mean narcissist, then yes? And on the subject of narcissism, I find it rich that people take to Twitter to complain about the narcissism of selfie sticks and then you click through to their profile and they've tweeted something like 12,000 times. Okay.


A definitive history of fleek. Just when I was starting to work it into my language, it becomes passé. Damn you, IHOP.


The best rapper alive for every year since 1979 according to Complex. The first woman appears in the most recent year, 2014. Someone had to go make this list so everyone could post their outrage in the comment thread, and Complex was the one to accept this burden on behalf of humanity.


Is this the best board game on the planet? I have not played board games in years, but I'm going to get this and see if it rekindles my interest.


Maybe being physically cold actually does make you more susceptible to catching a cold. Grandma and mom were right, as usual.

The linguistics of menus

You needn’t be a linguist to note changes in the language of menus, but Stanford’s Dan Jurafsky has written a book doing just that. In The Language of Food: A Linguist Reads the Menu, Jurafsky describes how he and some colleagues analyzed a database of 6,500 restaurant menus describing 650,000 dishes from across the U.S. Among their findings: fancy restaurants, not surprisingly, use fancier—and longer—words than cheaper restaurants do (think accompaniments and decaffeinated coffee, not sides and decaf). Jurafsky writes that “every increase of one letter in the average length of words describing a dish is associated with an increase of 69 cents in the price of that dish.” Compared with inexpensive restaurants, the expensive ones are “three times less likely to talk about the diner’s choice” (your way, etc.) and “seven times more likely to talk about the chef’s choice.”
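The per-letter price relationship lends itself to a toy calculation. Here's a minimal Python sketch; the menu descriptions are invented examples, and the only number taken from Jurafsky is the 69-cent coefficient:

```python
# Toy illustration of the finding quoted above: compute the average word
# length of a dish description, then the price difference the reported
# coefficient (69 cents per extra letter of average word length) would
# predict. The descriptions below are invented, not Jurafsky's data.

CENTS_PER_LETTER = 69  # coefficient reported in The Language of Food

def avg_word_length(description):
    """Mean number of letters per word in a dish description."""
    words = [w.strip(",.") for w in description.split()]
    return sum(len(w) for w in words) / len(words)

def predicted_price_delta(cheap_desc, fancy_desc):
    """Dollar difference the coefficient implies between two descriptions."""
    delta = avg_word_length(fancy_desc) - avg_word_length(cheap_desc)
    return delta * CENTS_PER_LETTER / 100

cheap = "fries and decaf"
fancy = "pommes frites with decaffeinated espresso"

print(f"${predicted_price_delta(cheap, fancy):.2f}")
```

Swapping "decaf" for "decaffeinated" alone stretches the average by several letters, which is exactly the kind of signal the regression picks up.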

Lower-priced restaurants, meanwhile, rely on “linguistic fillers”: subjective words like delicious, flaky, and fluffy. These are the empty calories of menus, less indicative of flavor than of low prices. Cheaper establishments also use terms like ripe and fresh, which Jurafsky calls “status anxiety” words. Thomas Keller’s Per Se, after all, would never use fresh—that much is taken for granted—but Subway would. Per Se does, however, engage in the trendy habit of adding provenance to descriptions of ingredients (Island Creek oysters, Frog Hollow’s peaches). According to Jurafsky, very expensive restaurants “mention the origins of the food more than 15 times as often as inexpensive restaurants.”

More here.

To charge what fine dining establishments charge, it's not enough to source the best ingredients, even though anyone who's peeked at the ledger for a high end restaurant knows the ingredients are a massive cost. Diners now expect an experience, and ornate language describing a dish and the provenance of its ingredients is just as important as beautiful plating and presentation in sending diners home satisfied, several pounds heavier but several hundred dollars lighter.

Narrative framing in restaurant reviews

Researchers analyzed over 900,000 online restaurant reviews to understand how people structured positive and negative reviews from a narrative perspective.

Negative reviews, especially in expensive restaurants, were more likely to use features previously associated with narratives of trauma: negative emotional vocabulary, a focus on the past actions of third person actors such as waiters, and increased use of references to “we” and “us”, suggesting that negative reviews function as a means of coping with service–related trauma. Positive reviews also employed framings contextualized by expense: inexpensive restaurant reviews use the language of addiction to frame the reviewer as craving fatty or starchy foods. Positive reviews of expensive restaurants were long narratives using long words emphasizing the reviewer’s linguistic capital and also focusing on sensory pleasure. Our results demonstrate that portraying the self, whether as well–educated, as a victim, or even as addicted to chocolate, is a key function of reviews and suggests the important role of online reviews in exploring social psychological variables.

Anyone who's spent some time reading restaurant reviews on Yelp will feel a pang of recognition. Perhaps taking a restaurant to task after a poor service experience is a cathartic way of dealing with the trauma, explaining why someone might take the time to write yet another review when a restaurant on Yelp already has hundreds or thousands of reviews.

In summary, one–star reviews were overwhelmingly focused on narrating experiences of trauma rather than discussing food, both portraying the author as a victim and using first person plural to express solace in community.

The narrative style of the hyperbolic negative restaurant review, with its first person framing, has made such reviews well-suited to serving as dramatic monologues.

On positive reviews of inexpensive restaurants:

Whether there is in fact a biochemical link between junk food cravings and drug addiction is an open question in the literature [4]. Nonetheless, our results suggest that the folk model of this belief is productive and widespread in consumer reviews. Hormes and Rozin (2010) found that participants rated the words “craving” and “addiction” in various languages as being most appropriately applied to drugs, alcohol, or food. Our study extends these results to show that the metaphor of food as an addiction or craving tends to apply to a particular subset of foods. The foods that are “craved” are foods that are in some way non–normative: they are meaty, sugary, starchy foods, generally fast food and street food, or small snack–like inexpensive ethnic foods. Craved foods aren’t vegetables, or main courses like meatloaf or fish or even side dishes like mashed potatoes. The folk model of what we crave or are addicted to encompasses foods that are somehow considered inappropriate for a meal, bad for you (unhealthily full of fats and sugars), inexpensive, comfort food that we feel guilty for having but eat anyhow.

The result that women are more likely to use this metaphor in our data is also consistent with previous results. Rozin, et al. (1991) found that females are significantly more likely to express cravings for chocolate than males. Zellner, et al. (1999), Weingarten and Elston (1990), and Osman and Sobal (2006) found that female undergraduates were more likely than males to report food cravings. Our results do not distinguish among the possible causes of the greater number of these expressions by female reviewers: women might be more likely than men to have these cravings or feelings, women might be more comfortable than men to admitting to these cravings, or women might simply be more likely than men to use this particular linguistic metaphor to describe their otherwise identical desires.

This makes intuitive sense. Most people eat affordable food more often, and are thus more likely to describe it as an addictive substance. I wonder if the prevalent discussion of the health risks of meaty, starchy foods contributes to the language of addiction when describing them; guilt and addiction are so entwined.

Positive reviews of expensive restaurants use a different narrative framing.

We first examined review features linked with educational capital. Education is strongly associated with differences in socioeconomic status, and in fact is one of the main ways that class status is defined in social scientific studies, along with work and income. Previous work on food advertising found that advertising of more expensive products employs longer, more complex words and longer sentences (Freedman and Jurafsky, 2011), presumably because complex words or sentences signal the writers’ higher educational capital, and hence project higher social status. We therefore tested whether this use of more complex language to project “linguistic capital” was similarly associated with price in reviews, predicting that reviews of more expensive restaurants would be longer and use longer words.

The second feature we investigate frames food as a sensual or even sexual pleasure. This tendency is widespread in expensive wine reviews, which make extensive use of phrases like sexy, sensual, seductive, voluptuously textured, ravishing, and hedonistic (Lehrer, 2009; McCoy, 2005; Shesgreen, 2003). Television food commercials in the United States also emphasize “sensual hedonism” with words like luscious, indulgent, irresistible, and decadent (Strauss, 2005). We therefore expected reviews of expensive restaurants to use words related to sex or sensuality.

The data confirmed their hypotheses.
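The two features can be reduced to simple text metrics. Here's a rough Python sketch; the sensual word list is assembled from the examples quoted above and is illustrative, not the lexicon the researchers actually used:

```python
# Sketch of the two review features described above: "linguistic capital"
# (review length and average word length) and counts of sensual-pleasure
# vocabulary. The word list below is illustrative, drawn from the examples
# quoted in the passage, not the paper's actual lexicon.

SENSUAL_WORDS = {"luscious", "seductive", "decadent", "voluptuous",
                 "indulgent", "irresistible", "sensual", "hedonistic"}

def review_features(text):
    """Return the length, average word length, and sensual-word count of a review."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    words = [w for w in words if w]
    return {
        "num_words": len(words),
        "avg_word_length": sum(len(w) for w in words) / len(words),
        "sensual_count": sum(1 for w in words if w in SENSUAL_WORDS),
    }

review = "The tasting menu was luscious and decadent, an irresistible indulgence."
print(review_features(review))
```

Under the paper's hypotheses, reviews of expensive restaurants should score high on all three metrics relative to reviews of cheap ones.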

I wonder how much of this narrative framing results from some level of confirmation bias. If you spend so much on a restaurant meal, you're going to feel great pressure to justify your decision, and describing the meal with more complex words and longer sentences might be one way to justify your expense as having led to a more sophisticated, sensual experience.