Decoding restaurants

Last year, on the fiftieth anniversary of restaurant desegregation, we celebrated a signal moment in the long march toward full and equal citizenship for black Americans. But we delude ourselves if we don’t acknowledge that there is a difference between being admitted and being welcomed.
 
The court order that ended segregation stipulated that every cafe, tavern, Waffle House, and roadside joint must open its doors to all. It did not, could not, stipulate that whites in the South must also open their hearts and minds to all. Welcome was, and is, the final barrier to racial parity.
 
We have witnessed remarkable progress over the past five decades, yes, and we should acknowledge this, too. What seemed fanciful, even utopian, a generation ago is now so commonplace as to not bear any comment at all. We have come to expect and accept black and white in the workplace, on the playing field, in politics, in the military, and we congratulate ourselves on our steady march to racial harmony. But our neighborhoods and our restaurants do not look much different today than they did fifty years ago. That Kingly vision of sitting down at the same table together and breaking bread is as smudgy as it’s ever been.
 

Todd Kliman set out to try to understand why, decades after desegregation, so few restaurants host a mixed clientele of black and white. Of course, the issue is about more than just restaurants. The questions he asks and the theories he uncovers can be pointed at bars, clubs, neighborhoods, and schools.

It was a man named Andy Shallal who helped me to understand the possibilities for a better, more integrated future while also reinforcing the manifold problems of the present. Shallal made me understand that no one ever need say, “keep out.” That a message is embedded in the room, in the menu, in the plates and silverware, in the music, in the color scheme. That a restaurant is a network of codes. It’s a phrase that, yes, has all sorts of overtones and undertones, still, in the South. I’m using it, here, in the semiotic sense—the communication by signs and symbols and patterns.
 
I don’t see coding as inherently malicious. But we need to remember that restaurants have long existed to perpetuate a class of insiders and a class of outsiders, the better to cultivate an air of desirability. Tablecloths, waiters in jackets and ties, soft music—these are all forms of code. They all send a very specific, clear message. That is, they communicate without words (and so without incurring a legal risk or inviting criticism or censure from the public) the policy, the philosophy, the aim of the establishment.
 
Today, there are many more forms of code than the old codes of the aristocracy. Bass-thumping music. Cement floors and lights dangling from the ceiling. Tattooed cooks. But these are still forms of code. They simultaneously send an unmistakable signal to the target audience and repel all those who fall outside that desired group.
 

The same codes are at work in websites and applications, though they often operate subconsciously. Color, typography, imagery, layout, and many other aspects of the user experience make some users feel more welcome than others.

Is your service more welcoming to the old or the young? Women or men? One ethnicity or another? The rich or the poor? The tech savvy or those less so? Those with fast internet access or those without? The visually inclined or the more textually focused? To new users or longtime users? The famous or the not-so-famous? Content creators or consumers?

Rare is the service that is perfectly neutral.

Universal sign language

“Decide” is what is known as a telic verb—that is, it represents an action with a definite end. By contrast, atelic verbs such as “negotiate” or “think” denote actions of indefinite duration. The distinction is an important one for philosophers and linguists. The divide between event and process, between the actual and the potential, harks back to the kinesis and energeia of Aristotle’s metaphysics.
 
One question is whether the ability to distinguish them is hard-wired into the human brain. Academics such as Noam Chomsky, a linguist at the Massachusetts Institute of Technology, believe that humans are born with a linguistic framework onto which a mother tongue is built. Elizabeth Spelke, a psychologist up the road at Harvard, has gone further, arguing that humans inherently have a broader “core knowledge” made up of various cognitive and computational capabilities. 
 
...
 
In 2003 Ronnie Wilbur, of Purdue University, in Indiana, noticed that the signs for telic verbs in American Sign Language tended to employ sharp decelerations or changes in hand shape at some invisible boundary, while signs for atelic words often involved repetitive motions and an absence of such a boundary. Dr Wilbur believes that sign languages make grammatical that which is available from the physics and geometry of the world. “Those are your resources to make a language,” she says. As such, she went on to suggest that the pattern could probably be found in other sign languages as well.
 
Work by Brent Strickland, of the Jean Nicod Institute, in France, and his colleagues, just published in the Proceedings of the National Academy of Sciences, now suggests that it is. Dr Strickland has gone some way to showing that signs arise from a kind of universal visual grammar that signers are working to.
 

Fascinating. Humans associate language with intelligence so strongly that I predict the critical moment in animal rights will come when a chimp or other ape takes the stand in an animal-testing court case and uses sign language to give testimony on its own behalf.

Reading the test methodology employed in the piece, I wonder if any designers out there have done similar studies with gestures or icons. I'm not arguing a Chomskyan position here; I doubt humans are born with some basic touchscreen gestures or a base icon key in their brain's config file. This is more about second-order or learned intuition.

Or perhaps we'll achieve great voice or 3D gesture interfaces (e.g. Microsoft Kinect) before we ever settle on any standards around gestures on flat touchscreens. If you believe, like Chomsky, that humans have some language skills (both verbal and gestural) hard-wired in the brain at birth, the most human (humane? humanist?) of interfaces would be one that doesn't involve any abstractions on touchscreens but instead relies on the software we're born with.

Supposedly irrelevant factors

There is a version of this magic market argument that I call the invisible hand wave. It goes something like this. “Yes, it is true that my spouse and my students and members of Congress don’t understand anything about economics, but when they have to interact with markets. ...” It is at this point that the hand waving comes in. Words and phrases such as high stakes, learning and arbitrage are thrown around to suggest some of the ways that markets can do their magic, but it is my claim that no one has ever finished making the argument with both hands remaining still. 
 
Hand waving is required because there is nothing in the workings of markets that turns otherwise normal human beings into Econs. For example, if you choose the wrong career, select the wrong mortgage or fail to save for retirement, markets do not correct those failings. In fact, quite the opposite often happens. It is much easier to make money by catering to consumers’ biases than by trying to correct them. 
 
Perhaps because of undue acceptance of invisible-hand-wave arguments, economists have been ignoring supposedly irrelevant factors, comforted by the knowledge that in markets these factors just wouldn’t matter. Alas, both the field of economics and society are much worse for it. Supposedly irrelevant factors, or SIFs, matter a lot, and if we economists recognize their importance, we can do our jobs better. Behavioral economics is, to a large extent, standard economics that has been modified to incorporate SIFs.
 

Richard Thaler on behavioral economics. Again and again, studies have put cracks in the edifice of rational homo economicus.

SIFs exist in product design, too. The myth of the rational, utility-maximizing user can be just as pernicious and misleading an assumption. If it weren't, we wouldn't need concepts like smart defaults in apps, the design equivalent of nudges like retirement savings programs that are opt-out instead of opt-in.
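As a minimal sketch of how a smart default acts as a nudge (all names here are invented for illustration, not any real product's API): the nudge lives in the default value itself, because most users never change it.

```python
from dataclasses import dataclass

# Hypothetical sketch: "AppSettings", "auto_backup", and
# "marketing_email" are invented names, not a real product's API.

@dataclass
class AppSettings:
    # Opt-out default: backups are on unless the user disables them,
    # mirroring opt-out retirement enrollment.
    auto_backup: bool = True
    # Opt-in default: promotional email stays off unless enabled.
    marketing_email: bool = False

# Most users never open the settings screen, so the defaults
# effectively decide the outcome for the whole user base.
settings = AppSettings()
```

Flipping which value is the default changes aggregate behavior without removing anyone's choice, which is exactly the logic of opt-out enrollment.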

Frictionless product design

Great post by Steve Sinofsky on the difference between minimalist and frictionless product design.

Frictionless and minimalism are related but not necessarily the same. Often they are conflated, which can lead to design debates that are difficult to resolve.
 
A design can be minimal but still have a great deal of friction. The Linux command line interface is a great example of minimal design with high friction. You can do everything through a single prompt, as long as you know what to type and when. The minimalism is wonderful, but the ability to get going comes with high friction. The Unix philosophy of small cooperating tools is wonderfully minimal (every tool does a small number of things and does them well), but the learning and skills required are high friction.
 
  • Minimalist design is about reducing the surface area of an experience.
  • Frictionless design is about reducing the energy required by an experience.
     

This is a distinction that not many understand, but it's critical to absorb in this age when minimum viable product (MVP) and minimalist design are so in vogue. What you want isn't minimalism; it's something else. I had never come up with a concise way of encapsulating this “something else,” but Sinofsky's “frictionless design” is perfect, and his playbook of low-friction design patterns is worth internalizing for any product person.
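To make the distinction concrete, here is a small sketch (the function names and parameters are invented for illustration): both APIs have the same small surface area, but one pushes every decision onto the caller while the other supplies workable defaults.

```python
# Hypothetical sketch of the minimal-vs-frictionless distinction.
# Names are invented; neither function is from a real library.

def resize_minimal(image, width, height, interpolation, preserve_aspect):
    """Minimal but high friction: every knob must be specified."""
    return (image, width, height, interpolation, preserve_aspect)

def resize_frictionless(image, width=None, height=None,
                        interpolation="bilinear", preserve_aspect=True):
    """Same capability, low friction: sensible defaults get you going."""
    return (image, width, height, interpolation, preserve_aspect)
```

Calling `resize_frictionless("photo")` just works, while `resize_minimal("photo")` raises a TypeError until the caller supplies all five arguments. The surface area is identical; the energy required to get going is not.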

To the untrained eye, a more minimalist design always seems like superior design, but often it leads to higher friction. A door that only swings in one direction might be cleaner with no handle of any sort, but the handle is an affordance that clues a user into whether they should push or pull the door.

A lot of first generation mobile apps for well-established web products and services were so far from feature parity with their web brethren that they were arguably too minimalist. While the smaller screen size and touch interaction method forced some healthy simplification, some mobile apps lacked basic functions that users associated with the product or service, and thus the apps just seemed crippled. It's often a fine line between MVP and broken.

I download lots of mobile apps nowadays that are so in love with minimalist design that I can barely figure out how to use them, and even after I learn, I've forgotten by the next session (unless an app warrants daily or near-daily use, a lot can be forgotten between sessions). Lots employ gestures, which are very difficult to discover. Many employ icons without text labels and might as well be showing me hieroglyphics from ancient Egypt.

On the other hand, if you always listen to your earliest users, your power users, you often end up with a product that just gets more bloated and complex over time. I remember hearing someone say once that users only use 20 percent of the features in Excel, but every user uses a different 20 percent.

Low-end disruption theory (one of the two dominant variants of the theory) says that products and services can grow to the point where they over-serve a market. At that point, a simpler, lesser-featured, lower-cost competitor comes in to steal share from underneath you.

Since so many of the world's largest apps and services today are free, the cost advantage doesn't come into play as much when it comes to low-end disruption in software, but products and services can absolutely over-serve on features. Instead of a price umbrella that competitors sneak in under, it's a complexity umbrella that leaves market openings for more intuitive customer experiences.

Lower-friction design matters, and today the returns are higher than they've ever been. If you can build a lower-friction mousetrap, it's often enough to create a new market even if the incumbent can do everything you do and more; in fact, it's often because they do more that you create a new market. Software is so powerful it can solve almost any problem, but don't forget the cost side of the ledger. Until we shift into a new design medium that isn't based around functionality mapped to icons and menus in screen real estate [1], added functionality comes with a tax on user comprehension.

  1. The most promising of these alternate design paradigms in the near term is text. In the medium term: voice. In the longer term, the most promising I've seen is virtual reality. All of those can map to behaviors humans learn from a very early age and thus the learning curve is dramatically flattened. Furthermore, screen real estate is less of a limiting factor.

Design theater

You've heard of security theater. As Bruce Schneier, who coined the term, defines it:

Security theater refers to security measures that make people feel more secure without doing anything to actually improve their security. An example: the photo ID checks that have sprung up in office buildings. No-one has ever explained why verifying that someone has a photo ID provides any actual security, but it looks like security to have a uniformed guard-for-hire looking at ID cards. Airport-security examples include the National Guard troops stationed at US airports in the months after 9/11 -- their guns had no bullets. The US colour-coded system of threat levels, the pervasive harassment of photographers, and the metal detectors that are increasingly common in hotels and office buildings since the Mumbai terrorist attacks, are additional examples.
 

Karen Levy and Tim Hwang argue we're going to see the rise of design theater.

Here’s a speculation of science fiction that is rapidly manifesting into a real nuts-and-bolts design debate with wide-ranging implications: should self-driving cars have steering wheels?
 
The corporate battle lines are already being drawn on this particular issue. Google announced its autonomous car prototype last year, drawing much attention for its complete absence of a steering wheel. The reason for this radical departure? The car simply “didn’t need them.”
 
...
 
Take a step back: a steering wheel implies a need to steer, something that the autonomous car is designed specifically to eliminate. In a near future of safe autonomous driving technologies, the purpose of the steering wheel is largely talismanic. More than actually serving any practical function, the steering wheel seems bound to become a mere comfort blanket to assuage the fears of the driver.
 
This is a classic problem. Consumers refuse to adopt a new technology if it visibly disempowers them or departs radically from trusted patterns of practice. This is the case even when the system is better at a task than a human operator — as in the case of the self-driving car, which is safer than a human driver.
 

I'm not so sure a steering wheel is superfluous given what I know of self-driving cars today. Many situations can't be handled by those cars now, and may not be handled easily for many, many years, so I suspect most self-driving cars will need a steering wheel to allow manual takeover in such situations.

That aside, the piece is a fantastic read. I loved the link to an article about a car proposal from 1899 that included a giant wooden horse head stuck on the front of the vehicle. Remember, if you ask users what they want, they'll say they want a faster vehicle with a wooden horse head in front.

The first hood ornament: a giant horse head? Maybe in The Godfather someone was just trying to tell Jack Woltz they stole his car?

Not all design theater is nefarious. The authors elaborate:

How should we think about the ethics of design theater? Our initial reaction might be that misleading consumers about the nature of a technology is always wrong. In lots of areas, we enforce the idea that people have a right to know what they’re buying (consider rules about honest packaging and labeling, from knowing what ingredients are in our food to being informed about the possible health consequences of exposure to certain substances). But just as humans’ front stage performances are necessary for social life to function, it’s important for technologies to integrate into social life in ways that make them usable and understandable. Though some designers find skeuomorphism ugly or aesthetically inauthentic, it’s tough to find a serious ethical problem with a design feature that’s genuinely intended to guide usability.
 
There also doesn’t seem to be a tremendous ethical problem with theaters designed for certain laudable social purposes, like safety and protection. Nothing makes this clearer than artificial engine noise. Because modern electric cars are so much quieter than their internal-combustion predecessors, it’s much harder for pedestrians to hear them approaching. Since we’re used to listening for engine noise as a safety cue, a silent vehicle can more readily “sneak up” on us and cause accidents. Over time, if all vehicles become silent, many of us would no doubt lose this subconscious reliance — but the consequences of losing the cue altogether can be very dangerous in the shorter term, especially for pedestrians with visual impairments.