Ants

Another question I hear a lot is, “What can we learn of moral value from the ants?” Here again I will answer definitively: nothing. Nothing at all can be learned from ants that our species should even consider imitating. For one thing, all working ants are female. Males are bred and appear in the nest only once a year, and then only briefly. They are pitiful creatures with wings, huge eyes, small brains and genitalia that make up a large portion of their rear body segment. They have only one function in life: to inseminate the virgin queens during the nuptial season. They are built to be robot flying sexual missiles. Upon mating or doing their best to mate, they are programmed to die within hours, usually as victims of predators.

Many kinds of ants eat their dead -- and their injured, too. You may have seen ant workers retrieve nestmates that you have mangled or killed underfoot (accidentally, I hope), thinking it battlefield heroism. The purpose, alas, is more sinister.

As ants grow older, they spend more time in the outermost chambers and tunnels of the nest, and are more prone to undertake dangerous foraging trips. They also are the first to attack enemy ants and other intruders. Here indeed is a major difference between people and ants: While we send our young men to war, ants send their old ladies.

Edward O. Wilson on the marvel that is the ant.

While reading the article, I had a thought. Are human societies also like superorganisms? As if he were reading my mind, Wilson answered that exact question three paragraphs later.

You may occasionally hear human societies described as superorganisms. This is a bit of a stretch. It is true that we form societies dependent on cooperation, labor specialization and frequent acts of altruism. But where social insects are ruled almost entirely by instinct, we base labor division on transmission of culture. Also, unlike social insects, we are too selfish to behave like cells in an organism. Human beings seek their own destiny. They will always revolt against slavery, and refuse to be treated like worker ants.

Presenter's paradox

The problem, in a nutshell, is this: We assume when we present someone with a list of our accomplishments (or with a bundle of services or products), that they will see what we’re offering additively. If going to Harvard, a prestigious internship, and mad statistical skills are all a “10” on the scale of impressiveness, and two semesters of Spanish is a “2,” then we reason that added together, this is a 10 + 10 + 10 + 2, or a “32” in impressiveness. So it makes sense to mention your minimal Spanish skills — they add to the overall picture. More is better.

Only more is not in fact better to the interviewer (or the client or buyer), because this is not how other people see what we’re offering. They don’t add up the impressiveness, they average it. They see the Big Picture — looking at the package as a whole, rather than focusing on the individual parts.

To them, this is a (10 + 10 + 10 + 2)/4 package, or an “8” in impressiveness. And if you had left off the bit about Spanish, you would have had a (10 + 10 + 10)/3, or a “10” in impressiveness. So even though logically it seems like a little Spanish is better than none, mentioning it makes you a less attractive candidate than if you’d said nothing at all.

More is actually not better, if what you are adding is of lesser quality than the rest of your offerings. Highly favorable or positive things are diminished or diluted in the eye of the beholder when they are presented in the company of only moderately favorable or positive things.

And that is the presenter's paradox. Based on the PR blowback, the free U2 album at the latest Apple keynote looks like an example of that kind of subtraction by addition.
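If you want the arithmetic spelled out, here's a quick sketch in Python. The scores and the simple sum-versus-average comparison are just the toy numbers from the excerpt above, not anything from the underlying research.

```python
# Toy model of the presenter's paradox: we assume evaluators add, but they tend to average.
credentials_with_spanish = [10, 10, 10, 2]   # Harvard, internship, stats skills, a little Spanish
credentials_without_spanish = [10, 10, 10]   # same list, Spanish left off

def additive_impressiveness(scores):
    """How we assume we'll be judged: every extra item adds something."""
    return sum(scores)

def averaged_impressiveness(scores):
    """How evaluators actually tend to judge: the big-picture average."""
    return sum(scores) / len(scores)

print(additive_impressiveness(credentials_with_spanish))    # 32  -- "more is better"
print(averaged_impressiveness(credentials_with_spanish))    # 8.0
print(averaged_impressiveness(credentials_without_spanish)) # 10.0 -- stronger by saying less
```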

Optimal robot personality

The researchers gave two robots, a nurse and a security guard, distinct personalities. It was an experiment: would humans react to the robots differently based on how they carried themselves? They tried out two different personalities on each robot. One version was extraverted; the robot would speak loudly and quickly, use more animated hand gestures, and start conversations instead of waiting to be spoken to. The other personality was more reserved, speaking much more slowly and quietly, moving around less, and letting the user initiate communication.

What the researchers found, as they described in a recently published paper, was a striking difference between the two. When it came to the nurse robot, people preferred and trusted it more when its personality was outgoing and assertive. What people wanted in a security guard was exactly the opposite: the livelier, extraverted version clearly rubbed people the wrong way. Not only were they less confident in its abilities, and dubious that it would keep them away from danger, they simply liked it less overall.

...

What researchers are finding is that it’s not enough for a machine to have an agreeable personality—it needs the right personality. A robot designed to serve as a motivational exercise coach, for instance, might benefit from being more intense than a teacher-robot that plays chess with kids. A museum tour guide robot might need to be less indulgent than a personal assistant robot that’s supposed to help out around the house.

A growing body of research is starting to reveal what works and what doesn’t. And although building truly human-like robots will probably remain technologically impossible for a long time to come, researchers say that imbuing machines with personalities we can understand doesn’t require them to be “human-like” at all. To hear them describe the future is to imagine a world—one coming soon—in which we interact and even form long-term relationships with socially gifted devices that are designed to communicate with us on our terms. And what the ideal machine personalities turn out to be may expose needs and prejudices that we’re not even aware we have.

More here. How many of our personality preferences for robots will we inherit from the human analogues we're most familiar with? We may want a robot personal trainer to be tough and forceful, while preferring a calm, almost flat affect from a robot therapist.

Regardless, I'm excited to see the first generation of robots or AIs with personality roll out to the world. It feels like one of the most likely vectors of delight in user experience design when it comes to AI.

Young blood

A researcher named Villeda has discovered that the blood of the young may have regenerative powers for the elderly, a finding that has his own mother wondering what it could mean.

Her son’s research has found that blood from young mice can improve the learning and memory of old ones, and she’s certainly not the only one to wonder what this could mean for humans. In his lab at UCSF and his postdoc lab at Stanford, Villeda and colleagues injected old mice with blood plasma from young mice, and vice versa. They found that the senescent rodents learned quicker and grew more neurons after infusions from young blood, while the juvenile mice got correspondingly worse at learning new tricks.

Perhaps this is what turns us to vampirism. Or not. As the article mentions later, any treatment developed based on such studies would depend on synthetic proteins, not blood from live humans.

At best, it provides a new metaphoric through line for the vampire genre, which has become so prevalent it has grown stale.

As Gosselink points out, old age was so rare in less-developed societies that people who achieved it were granted a certain amount of status and even a mystical cachet. Later, the elderly might have been mocked or isolated, but age was still not seen as an illness. It’s only in recent centuries, as old age has become more and more commonplace, that we have started to venerate youth; ageing is now associated not with fortunate longevity but with decrepitude and disease. And accordingly, our magical thinking has expanded to find mystical cures for loss of vitality. That’s why a strange light appears in people’s eyes when they hear about the mouse blood experiment. We are culturally primed to look to sympathetic magic as a means for curing what ails us, and old age is now regarded as a disease to be fought.