10 browser tabs

1. Love in the Time of Robots

“Is it difficult to play with her?” the father asks. His daughter looks to him, then back at the android. Its mouth begins to open and close slightly, like a dying fish. He laughs. “Is she eating something?”
 
The girl does not respond. She is patient and obedient and listens closely. But something inside is telling her to resist. 
 
“Do you feel strange?” her father asks. Even he must admit that the robot is not entirely believable.
 
Eventually, after a few long minutes, the girl’s breathing grows heavier, and she announces, “I am so tired.” Then she bursts into tears.
 
That night, in a house in the suburbs, her father uploads the footage to his laptop for posterity. His name is Hiroshi Ishiguro, and he believes this is the first record of a modern-day android.
 

Reads like the treatment for a science fiction film, some mashup of Frankenstein, Pygmalion, and Narcissus. One incredible moment after another, and I'll grab just a few excerpts, but the whole thing is worth reading.

But he now wants something more. Twice he has witnessed others have the opportunity, however confusing, to encounter their robot self, and he covets that experience. Besides, his daughter was too young, and the newscaster, though an adult, was, in his words, merely an “ordinary” person: Neither was able to analyze their android encounter like a trained scientist. A true researcher should have his own double. Flashing back to his previous life as a painter, Ishiguro thinks: This will be another form of self-portrait. He gives the project his initials: Geminoid HI. His mechanical twin.
 

Warren Ellis, in a recent commencement speech delivered at the University of Essex, said:

Nobody predicted how weird it’s gotten out here.  And I’m a science fiction writer telling you that.  And the other science fiction writers feel the same.  I know some people who specialized in near-future science fiction who’ve just thrown their hands up and gone off to write stories about dragons because nobody can keep up with how quickly everything’s going insane.  It’s always going to feel like being thrown in the deep end, but it’s not always this deep, and I’m sorry for that.
 

The thing is, far-future sci-fi is likely to be even more off base now, given how humans are evolving in lockstep with the technology around them. So we need more near-future sci-fi, of a variety smarter than Black Mirror, to grapple with the implications.

Soon his students begin comparing him to the Geminoid—“Oh, professor, you are getting old,” they tease—and Ishiguro finds little humor in it. A few years later, at 46, he has another cast of his face made, to reflect his aging, producing a second version of HI. But to repeat this process every few years would be costly and hard on his vanity. Instead, Ishiguro embraces the logical alternative: to alter his human form to match that of his copy. He opts for a range of cosmetic procedures—laser treatments and the injection of his own blood cells into his face. He also begins watching his diet and lifting weights; he loses about 20 pounds. “I decided not to get old anymore,” says Ishiguro, whose English is excellent but syntactically imperfect. “Always I am getting younger.”
 
Remaining twinned with his creation has become a compulsion. “Android has my identity,” he says. “I need to be identical with my android, otherwise I’m going to lose my identity.” I think back to another photo of his first double’s construction: Its robot skull, exposed, is a sickly yellow plastic shell with openings for glassy teeth and eyeballs. When I ask what he was thinking as he watched this replica of his own head being assembled, Ishiguro says, perhaps only half-joking, “I thought I might have this kind of skull if I removed my face.”
 
Now he points at me. “Why are you coming here? Because I have created my copy. The work is important; android is important. But you are not interested in myself.”
 

This should be some science fiction film, only I'm not sure who our great science fiction director is. The best candidates may be too old to want to treat such a story as anything other than grotesque and horrific.

2. Something is wrong on the internet by James Bridle

Of course, some of what's on the internet really is grotesque and horrific. 

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level. 
 

Given how much my nieces love watching product unwrapping and Peppa the Pig videos on YouTube, this story induced a sense of dread I haven't felt since the last good horror film I watched, which I can't remember anymore since the world has run a DDoS on my emotions.

We often think of a market operating at peak efficiency as sending information back and forth between supply and demand, allowing the creation of goods that satisfy both parties. In the tech industry, the wink-wink version of that is saying that pornography leads the market for any new technology, solving, as it does, the two problems the internet is said to solve better, at scale, than any medium before it: loneliness and boredom.

Bridle's piece, however, finds the dark cul-de-sacs and infected runaway processes that have branched out from the massive marketplace that is YouTube. I decided to follow a Peppa the Pig video on the service and started tapping on Related Videos, as I imagine one of my nieces might, and quickly wandered into a dark alleyway and found videos I would not want any of them watching. As Bridle did, I won't link to what I found; suffice it to say it won't take you long to stumble on some of it if you go looking, or perhaps even if you don't.

What's particularly disturbing is the somewhat bizarre, inexplicably grotesque nature of some of these video remixes. David Cronenberg is known for his body horror films; these YouTube videos are like some perverse variant of that, playing with popular children's iconography.

Facebook and now Twitter are taking heat for disseminating fake news, and that is certainly a problem worth debating, but with that problem we're talking about adults. Children don't have the capacity to comprehend what they're seeing, and given my belief in the greater effect of sight, sound, and motion, I am even more disturbed by this phenomenon.

A system where hosting videos for a global audience is free, where this type of trademark infringement weaponizes brand signifiers with seeming impunity, and where content production and remixing scale ever more cheaply with technology, allows for a type of problem we haven't seen before.

The internet has enabled all types of wonderful things at scale; we should not be surprised that it would foster the opposite. But we can, and should, be shocked.

3. FDA approves first blood sugar monitor without finger pricks

This is exciting. One view which seems to be common wisdom these days when it comes to health is that it's easier to lose weight and impact your health through diet than exercise. But one of the problems of the feedback loop in diet (and exercise, actually) is how slow it is. You sneak a few snacks here and there walking by the company cafeteria every day, and a month later you hop on the scale and emit a bloodcurdling scream as you realize you've gained 8 pounds.

A friend of mine had gestational diabetes during one of her pregnancies and got a home blood glucose monitor. You had to prick your finger and draw blood to get your blood glucose reading, but curious, I tried it before and after a BBQ.

Seeing what various foods did to my blood sugar in near real time was an eye-opener. Imagine a future in which you could see what a few french fries and gummy bears did to your blood sugar, or in which the reading was built into something like an Apple Watch, without having to draw blood each time. I don't mind the sight of blood, but I'd prefer not to turn my fingertips into war zones.

Faster feedback might transform dieting into something more akin to deliberate practice. Given that another popular theory of obesity holds that it's an insulin phenomenon, tools like this, built for diabetes, might have broad mass-market impact.

4. Ingestible ketones

Ingestible ketones have been something of a holy grail for endurance athletes lately, and now HVMN is bringing one to market. Ketogenic diets are all the rage right now, but for an endurance athlete, adapting to fuel oneself on ketones has always sounded long and miserable.

The body generates ketones from fat when low on carbs or from fasting. The theory is that endurance athletes using ketones rather than glycogen from carbs require less oxygen and thus can work out longer.

I first heard about the possibility of exogenous ketones for athletes from Peter Attia. As he said then, perhaps the hardest thing about ingesting exogenous ketones is the horrible taste, which caused him to gag and nearly vomit in his kitchen. It doesn't sound like the taste problem has been solved.

Until we get the pill that renders exercise obsolete, however, I'm curious to give this a try. If you decide to pre-order, you can use my referral code to get $15 off.

5. We Are Nowhere Close to the Limits of Athletic Performance

By comparison, the potential improvements achievable by doping effort are relatively modest. In weightlifting, for example, Mike Israetel, a professor of exercise science at Temple University, has estimated that doping increases weightlifting scores by about 5 to 10 percent. Compare that to the progression in world record bench press weights: 361 pounds in 1898, 363 pounds in 1916, 500 pounds in 1953, 600 pounds in 1967, 667 pounds in 1984, and 730 pounds in 2015. Doping is enough to win any given competition, but it does not stand up against the long-term trend of improving performance that is driven, in part, by genetic outliers. As the population base of weightlifting competitors has increased, outliers further and further out on the tail of the distribution have appeared, driving up world records.
 
Similarly, Lance Armstrong’s drug-fuelled victory of the 1999 Tour de France gave him a margin of victory over second-place finisher Alex Zulle of 7 minutes, 37 seconds, or about 0.1 percent. That pales in comparison to the dramatic secular increase in speeds the Tour has seen over the past half century: Eddy Merckx won the 1971 tour, which was about the same distance as the 1999 tour, in a time 5 percent worse than Zulle’s. Certainly, some of this improvement is due to training methods and better equipment. But much of it is simply due to the sport’s ability to find competitors of ever more exceptional natural ability, further and further out along the tail of what’s possible.
 

In the Olympics, to take the most celebrated athletic competition, victors are feted with videos showing them swimming laps, tossing logs in the Siberian tundra, running through a Kenyan desert. We celebrate the work, the training. Good genes are given narrative short shrift. Perhaps we should show a picture of their DNA, just to give credit where much credit is due?

If I live a normal human lifespan, I expect to see special sports leagues and divisions created for athletes who've undergone genetic modification. It will be the return of the freak show at the circus, but this time for real. I've sat courtside and seen people like LeBron James, Giannis Antetokounmpo, Kevin Durant, and Joel Embiid walk by me. They are freaks, but genetic engineering might produce athletes who stretch our definition of outlier even further.

In other words, it is highly unlikely that we have come anywhere close to maximum performance among all the 100 billion humans who have ever lived. (A completely random search process might require the production of something like a googol different individuals!)
 
But we should be able to accelerate this search greatly through engineering. After all, the agricultural breeding of animals like chickens and cows, which is a kind of directed selection, has easily produced animals that would have been one in a billion among the wild population. Selective breeding of corn plants for oil content of kernels has moved the population by 30 standard deviations in roughly just 100 generations. That feat is comparable to finding a maximal human type for a specific athletic event. But direct editing techniques like CRISPR could get us there even faster, producing Bolts beyond Bolt and Shaqs beyond Shaq.
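The "further and further out on the tail" argument can be sketched numerically. Under the toy assumption that athletic ability is normally distributed, the best performer among n people grows roughly like the square root of 2 ln n, so records keep inching up as the talent pool expands (the simulation below is my illustration, not the article's):

```python
import math
import random

random.seed(42)

def best_of(n):
    """Best ability among n athletes, with ability drawn from N(0, 1)."""
    return max(random.gauss(0, 1) for _ in range(n))

# As the competitor pool grows 100x at a time, the best observed
# ability keeps climbing, tracking roughly sqrt(2 * ln(n)).
for n in (10**2, 10**4, 10**6):
    approx = math.sqrt(2 * math.log(n))
    print(f"n={n:>9,}  best={best_of(n):5.2f}  ~sqrt(2 ln n)={approx:4.2f}")
```

Doping's 5 to 10 percent is a fixed bump; the expected maximum keeps growing with the population, which is the article's point about genetic outliers.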
 

6. Let's set half a percent as the standard for statistical significance

My many-times-over coauthor Dan Benjamin is the lead author on a very interesting short paper, "Redefine Statistical Significance." He gathered luminaries from many disciplines to jointly advocate a tightening of the standards for using the words "statistically significant": the phrase would be reserved for results that have less than a half-percent probability of occurring by chance when nothing is really there, rather than all results that—on their face—have less than a 5% probability of occurring by chance. Results with more than a half-percent probability of occurring by chance could only be called "statistically suggestive" at most.
 
In my view, this is a marvelous idea. It could (a) help enormously and (b) can really happen. It can really happen because it is at heart a linguistic rule. Even if rigorously enforced, it just means that editors would force people in papers to say "statistically suggestive" for a p a little less than .05, and only allow the phrase "statistically significant" in a paper if the p value is .005 or less. As a well-defined policy, it is nothing more than that. Everything else is general equilibrium effects.
 

Given the replication crisis has me doubting almost every piece of conventional wisdom I've inherited in my life, I'm okay with this.
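As Kimball says, the proposed rule is at heart linguistic, so it fits in a few lines of code. This is a sketch using a hypothetical `significance_label` helper of my own; the paper proposes the terminology, not any implementation:

```python
def significance_label(p):
    """Label a p-value under the proposed 0.005 standard."""
    if p <= 0.005:
        return "statistically significant"
    elif p < 0.05:
        return "statistically suggestive"
    return "not significant"

# A result at p = .03 counted as "significant" under the old 5%
# convention, but would only be "suggestive" under the new rule.
print(significance_label(0.003))  # statistically significant
print(significance_label(0.03))   # statistically suggestive
print(significance_label(0.2))    # not significant
```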

7. We're surprisingly unaware of when our own beliefs change

If you read an article about a controversial issue, do you think you’d realise if it had changed your beliefs? No one knows your own mind like you do – it seems obvious that you would know if your beliefs had shifted. And yet a new paper in The Quarterly Journal of Experimental Psychology suggests that we actually have very poor “metacognitive awareness” of our own belief change, meaning that we will tend to underestimate how much we’ve been swayed by a convincing article.
 
The researchers Michael Wolfe and Todd Williams at Grand Valley State University said their findings could have implications for the public communication of science. “People may be less willing to meaningfully consider belief inconsistent material if they feel that their beliefs are unlikely to change as a consequence,” they wrote.
 

Beyond being an interesting result, I link to this as an example of a human-readable summary of a research paper. This is how the article summarizes the research study and its results:

The researchers recruited over two hundred undergrads across two studies and focused on their beliefs about whether the spanking/smacking of kids is an effective form of discipline. The researchers chose this topic deliberately in the hope the students would be mostly unaware of the relevant research literature, and that they would express a varied range of relatively uncommitted initial beliefs.
 
The students reported their initial beliefs about whether spanking is an effective way to discipline a child on a scale from “1” completely disbelieve to “9” completely believe. Several weeks later they were given one of two research-based texts to read: each was several pages long and either presented the arguments and data in favour of spanking or against spanking. After this, the students answered some questions to test their comprehension and memory of the text (these measures varied across the two studies). Then the students again scored their belief in whether spanking is effective or not (using the same 9-point scale as before). Finally, the researchers asked them to recall what their belief had been at the start of the study.
 
The students’ belief about spanking changed when they read a text that argued against their own initial position. Crucially, their memory of their initial belief was shifted in the direction of their new belief – in fact, their memory was closer to their current belief than their original belief. The more their belief had changed, the larger this memory bias tended to be, suggesting the students were relying on their current belief to deduce their initial belief. The memory bias was unrelated to the measures of how well they’d understood or recalled the text, suggesting these factors didn’t play a role in memory of initial belief or awareness of belief change.
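One hedged way to picture that last finding is a toy anchoring model (my construction, not the researchers'): suppose recalled initial belief is a weighted average of the true initial belief and the current belief. The memory bias then grows in proportion to the belief change, which is the relationship the study reports:

```python
def recalled_initial(initial, current, w=0.4):
    """Toy model: recollection anchors on the current belief with weight w."""
    return (1 - w) * initial + w * current

# A student who started at 7 on the 9-point scale (pro-spanking) and
# moved to 3 after reading the anti-spanking text "remembers" starting
# near 5.4 -- shifted toward the new belief by w times the change.
memory = recalled_initial(7, 3)
print(round(memory, 1))       # 5.4
print(round(memory - 7, 1))   # -1.6, a bias proportional to the -4 change
```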
 

Compare the summary above with the abstract of the paper itself:

When people change beliefs as a result of reading a text, are they aware of these changes? This question was examined for beliefs about spanking as an effective means of discipline. In two experiments, subjects reported beliefs about spanking effectiveness during a prescreening session. In a subsequent experimental session, subjects read a one-sided text that advocated a belief consistent or inconsistent position on the topic. After reading, subjects reported their current beliefs and attempted to recollect their initial beliefs. Subjects reading a belief inconsistent text were more likely to change their beliefs than those who read a belief consistent text. Recollections of initial beliefs tended to be biased in the direction of subjects’ current beliefs. In addition, the relationship between the belief consistency of the text read and accuracy of belief recollections was mediated by belief change. This belief memory bias was independent of on-line text processing and comprehension measures, and indicates poor metacognitive awareness of belief change.
 

That's actually one of the better research abstracts you'll read, and still it reflects the general opacity of the average research abstract. I'd argue that some of the most important knowledge in the world is locked behind abstruse abstracts.

Why do researchers write this way? Most tell me that researchers write for other researchers, and incomprehensible prose like this impresses their peers. What a tragedy. As my longtime readers know, I'm a firm believer in the power of the form of a message. We continue to underrate that in all aspects of life, from the corporate world to our personal lives, and here, in academia.

Then again, such poor writing keeps people like Malcolm Gladwell busy transforming such insight into breezy reads in The New Yorker and his bestselling books.

8. Social disappointment explains chimpanzees' behaviour in the inequity aversion task

As an example of the above phenomenon, this paper contains an interesting conclusion, but try to parse this abstract:

Chimpanzees’ refusal of less-preferred food when an experimenter has previously provided preferred food to a conspecific has been taken as evidence for a sense of fairness. Here, we present a novel hypothesis—the social disappointment hypothesis—according to which food refusals express chimpanzees' disappointment in the human experimenter for not rewarding them as well as they could have. We tested this hypothesis using a two-by-two design in which food was either distributed by an experimenter or a machine and with a partner present or absent. We found that chimpanzees were more likely to reject food when it was distributed by an experimenter rather than by a machine and that they were not more likely to do so when a partner was present. These results suggest that chimpanzees’ refusal of less-preferred food stems from social disappointment in the experimenter and not from a sense of fairness.
 

Your average grade school English teacher would slap a failing grade on this butchery of the English language.

9. Metacompetition: Competing Over the Game to be Played

When CDMA-based technologies took off in the US, companies like Qualcomm that worked on that standard prospered; metacompetitions between standards decide the fates of the firms that adopt (or reject) those standards.

When an oil spill raises concerns about the environment, consumers favor businesses with good environmental records; metacompetitions between beliefs determine the criteria we use to evaluate whether a firm is “good.”

If a particular organic foods certification becomes important to consumers, companies with that certification are favored; metacompetitions between certifications determine how the quality of firms is measured.
 
In all these examples, you could be the very best at what you do, but lose in the metacompetition over what criteria will matter. On the other hand, you may win due to a metacompetition that protects you from fierce rivals who play a different game.
 
Great leaders pay attention to metacompetition. They advocate the game they play well, promoting criteria on which they measure up. By contrast, many failed leaders work hard at being the best at what they do, only to throw up their hands in dismay when they are not even allowed to compete. These losers cannot understand why they lost, but they have neglected a fundamental responsibility of leadership. It is not enough to play your game well. In every market in every country, alternative “logics” vie for prominence. Before you can win in competition, you must first win the metacompetition over the game being played.
 

In sports negotiations between owners and players, the owners almost always win the metacompetition game. In the Hollywood writers' strike of 2007, the writers' guild didn't realize it was losing the metacompetition and thus ended up worse off than before. Amazon surpassed eBay by winning the retail metacompetition (most consumers prefer paying a fixed price for a good of predefined quality to navigating the multiple axes of complexity of an auction) after first failing to take on eBay on its direct turf of auctions.

Winning the metacompetition means first being aware of what it is. It's not so easy in a space like, say, social networking, where even some of the winners don't understand what game they're playing.

10. How to be a Stoic

Much of Epictetus’ advice is about not getting angry at slaves. At first, I thought I could skip those parts. But I soon realized that I had the same self-recriminatory and illogical thoughts in my interactions with small-business owners and service professionals. When a cabdriver lied about a route, or a shopkeeper shortchanged me, I felt that it was my fault, for speaking Turkish with an accent, or for being part of an élite. And, if I pretended not to notice these slights, wasn’t I proving that I really was a disengaged, privileged oppressor? Epictetus shook me from these thoughts with this simple exercise: “Starting with things of little value—a bit of spilled oil, a little stolen wine—repeat to yourself: ‘For such a small price, I buy tranquillity.’ ”
 
Born nearly two thousand years before Darwin and Freud, Epictetus seems to have anticipated a way out of their prisons. The sense of doom and delight that is programmed into the human body? It can be overridden by the mind. The eternal war between subconscious desires and the demands of civilization? It can be won. In the nineteen-fifties, the American psychotherapist Albert Ellis came up with an early form of cognitive-behavioral therapy, based largely on Epictetus’ claim that “it is not events that disturb people, it is their judgments concerning them.” If you practice Stoic philosophy long enough, Epictetus says, you stop being mistaken about what’s good even in your dreams.
 

The trendiness of stoicism has been around for quite some time now. I found this tab left over from 2016, I'm sure Tim Ferriss was espousing it long before then, and that's to say nothing of the enduring trend that is Buddhism. That meditation and stoicism are so popular in Silicon Valley may be a measure of the complacency of the region; they seem direct antidotes to the most first-world of problems. People everywhere complain of the stresses on their minds from the deluge of information they receive for free, via apps on smartphones whose processing power would put previous supercomputers to shame.

Still, given that stoicism was in vogue in Roman times, it seems to have stood the test of time. Since social media has increased the surface area of our social fabric and our exposure to said fabric, perhaps we could all use a bit more stoicism in our lives. I suspect one reason Curb Your Enthusiasm curdles in the mouth more than before is not just that Larry David's rich white man's complaints seem particularly ill-timed in the current environment, but that he is out of touch with the real nature of most people's psychological stressors now. A guy of his age and wealth probably doesn't spend much time on social media, but if he did, he might realize his grievances no longer match those of the average person in either pettiness or peculiarity.

Lesser-known trolley problem variations

Putting a stake through the popular philosophy thought experiment.

The Time Traveler
 
There’s an out of control trolley speeding towards a worker. You have the ability to pull a lever and change the trolley’s path so it hits a different worker. The different worker is actually the first worker ten minutes from now.
 
The Cancer Caper
 
There’s an out of control trolley speeding towards four workers. Three of them are cannibalistic serial killers. One of them is a brilliant cancer researcher. You have the ability to pull a lever and change the trolley’s path so it hits just one person. She is a brilliant cannibalistic serial killing cancer researcher who only kills lesser cancer researchers. 14% of these researchers are Nazi-sympathizers, and 25% don’t use turning signals when they drive. Speaking of which, in this world, Hitler is still alive, but he’s dying of cancer.
 
The Suicide Note
 
There’s an out of control trolley speeding towards a worker. You have the ability to pull a lever and change the trolley’s path so it hits a different worker. The first worker has an intended suicide note in his back pocket but it’s in the handwriting of the second worker. The second worker wears a T-shirt that says PLEASE HIT ME WITH A TROLLEY, but the shirt is borrowed from the first worker.
 

And so on. I'm not really sure what I can learn from the trolley problem, but I'm uncomfortable that the most common version always involves a fat guy. In fact, it's just referred to as the Fat Man problem!

Theory of moral standing of virtual life forms

I have killed three dogs in Minecraft. The way to get a dog is to find a wolf, and then feed bones to the wolf until red Valentine’s hearts blossom forth from the wolf, and then it is your dog. It will do its best to follow you wherever you go, and (like a real dog) it will invariably get in your way when you are trying to build something. Apart from that, they are just fun to have around, and they will even help you fight monsters. If they become too much of a nuisance, you can click on them and they will sit and wait patiently forever until you click on them again.
 
I drowned my first two dogs. The first time, I was building a bridge over a lake, but a bridge that left no space between it and the water. The dog did its best to follow me around, but it soon found itself trapped beneath the water’s surface by my bridge. Not being smart enough to swim out from under the bridge, it let out a single plaintive yelp before dying and sinking. Exactly the same thing happened to my second dog, and it was this second episode that made this particular feature of dogs clear to me. I know now to make dogs sit if I’m building bridges. I’m not sure what happened to the third dog, but I think it fell into some lava. There was, again, the single yelp, followed by a sizzle. No more dog.
 
I felt bad each time, while of course fully realizing that only virtual entities were being killed. 
 

From an excerpt of Charlie Huenemann's How You Play the Game: A Philosopher Plays Minecraft. It's true: we humans are wired to form attachments, even to inanimate objects or virtual life forms that have no feelings. We visit the places of our youth and experience almost nauseating waves of nostalgia. At an objective level, these feelings seem irrational, even as we recognize the tendency in ourselves. What is the value of these feelings? Is there an evolutionary purpose? Are they a form of rehearsal for our attachments to living things, or maybe just a reflex we can't turn off selectively?

The point is that we form attachments to things that may have no feelings or rights whatsoever, but by forming attachments to them, they gain some moral standing. If you really care about something, then I have at least some initial reason to be mindful of your concern. (Yes, lots of complications can come in here – “What if I really care for the fire that is now engulfing your home?” – but the basic point stands: there is some initial reason, though not necessarily a final or decisive one.) I had some attachment to my Minecraft dogs, which is why I felt sorry when they died. Had you come along in a multiplayer setting and chopped them to death for the sheer malicious pleasure of doing so, I could rightly claim that you did something wrong.
 
Moreover, we can also speak of attachments – even to virtual objects – that we should form, just as part of being good people. Imagine if I were to gain a Minecraft dog that accompanied me on many adventures. I even offer it rotten zombie flesh to eat on several occasions. But then one day I tire of it and chop it into nonexistence. I think most of us would be surprised: “Why did you do that? You had it a long time, and even took care of it. Didn’t you feel attached to it?” Suppose I say, “No, no attachment at all”. “Well, you should have”, we would mumble. It just doesn’t seem right not to have felt some attachment, even if it was overcome by some other concern. “Yes, I was attached to it, but it was getting in the way too much”, would have been at least more acceptable as a reply. (“Still, you didn’t have to kill it. You could have just clicked on it to sit forever….”)

Facebook and Plato's cave

Plato is a great philosopher of information without the word being there. When it comes to the classic image of the myth of the cave, you can reinterpret the whole thing today in terms of the channel of communication and information theory: who gets access to which information. The people chained in front of the wall are effectively watching television, or glued to some social media. You can read it that way without doing any violence to the text.

That shows two things. First, why it is a classic. A classic can be read and re-read, and re-interpreted. It never gets old, it just gets richer in consequences. It’s like old wine, it gets better with time. You can also see what I mean when I say we’ve been doing the philosophy of information since day one, because really the whole discussion of the cave is just a specific chapter in the philosophy of information.

The point I try to glean from that particular feature in the great architecture of the Republic is the following: some people have their attention captured constantly by social media – it could be by cats on Facebook. They are chained to that particular social media – television yesterday, digital technology today. Some of these people can actually unchain themselves and acquire a better sense of what reality is, what the world really is about. What is the responsibility of those who have, as it were, unchained themselves from the constant flow, the constant grab of attention of everyday media, and are able to step back, literally step out of the cave? Are they supposed to go back and violently force the people inside to get away, as the text says? Updated, that would mean, for example, implementing legislation. We would have to ban social media, we could forbid people from having mobile phones, we’d put some kind of back doors into social media because we want control. Or do we have to exercise toleration? If so, it would be a matter of education. We’d have to go back and talk to them.

In essence here Plato, by addressing these questions, is giving us a lesson in the philosophy of information.

From an interview with Luciano Floridi, Oxford Professor of Philosophy and Ethics of Information and a member of Google's advisory council for the “right to be forgotten” court case in Europe.

People addicted to social media as the prisoners chained in front of the wall in Plato's cave allegory. I wish I'd thought of that.

Positive versus negative rights

 

Are you skeptical of the idea of universal human rights?

No, I’m not skeptical about the idea of universal human rights. I’m skeptical about what I call positive rights. You see, if you look at the logical structure of rights, every right implies an obligation on someone else’s part. A right is always a right against somebody. If I have a right to park my car in your driveway, then you have an obligation not to interfere with my parking my car in your driveway. Now the idea of universal human rights is a remarkable idea because if there are such things, then all human beings are under an obligation to do—what? Well, I want to say that with things like the right to free speech it just means not to interfere. It’s a negative right. My right to free speech means I have a right to exercise my free speech without being interfered with. And that means that other people are under an obligation not to interfere with me.

Now, when I look at the literature, I discover that there is a tradition going back to the UN Universal Declaration of Human Rights, where not all of the rights listed are negative rights like the right to free speech, or the right to freedom of religion, or the right to freedom of association. I think all those negative rights are perfectly legitimate. But there are supposed to be such rights as “every human being has a right to adequate housing.” Now I don’t think that can be made into a meaningful claim.

The claim that “every human being has a right to seek adequate housing,” or that there are particular jurisdictions where the British government, or the government of the State of California, can decide “we’re going to guarantee or give that right to all of our citizens”—that seems to me OK. But the idea that every human being, just in virtue of being a human being, has a right to adequate housing in a way that would impose an obligation on every other human being to provide that housing, that seems to me nonsense. So I say that you can make a good case for universal human rights of a negative kind, but that you cannot make the comparable case for universal human rights of a positive kind.
 

My favorite bit from an interview with philosopher John Searle.

The trolley problem and self-driving cars

The trolley problem is a famous thought experiment in philosophy.

You are walking near a trolley-car track when you notice five people tied to it in a row. The next instant, you see a trolley hurtling toward them, out of control. A signal lever is within your reach; if you pull it, you can divert the runaway trolley down a side track, saving the five — but killing another person, who is tied to that spur. What do you do? Most people say they would pull the lever: Better that one person should die instead of five.
 
Now, a different scenario. You are on a footbridge overlooking the track, where five people are tied down and the trolley is rushing toward them. There is no spur this time, but near you on the bridge is a chubby man. If you heave him over the side, he will fall on the track and his bulk will stop the trolley. He will die in the process. What do you do? (We presume your own body is too svelte to stop the trolley, should you be considering noble self-sacrifice.)

In numerical terms, the two situations are identical. A strict utilitarian, concerned only with the greatest happiness of the greatest number, would see no difference: In each case, one person dies to save five. Yet people seem to feel differently about the “Fat Man” case. The thought of seizing a random bystander, ignoring his screams, wrestling him to the railing and tumbling him over is too much. Surveys suggest that up to 90 percent of us would throw the lever in “Spur,” while a similar percentage think the Fat Man should not be thrown off the bridge. Yet, if asked, people find it hard to give logical reasons for this choice. Assaulting the Fat Man just feels wrong; our instincts cry out against it.

Nothing intrigues philosophers more than a phenomenon that seems simultaneously self-evident and inexplicable. Thus, ever since the moral philosopher Philippa Foot set out Spur as a thought experiment in 1967, a whole enterprise of “trolleyology” has unfolded, with trolleyologists generating ever more fiendish variants.
 

There are entire books devoted to the subject, including the humorously titled The Trolley Problem, or Would You Throw the Fat Guy Off the Bridge: A Philosophical Conundrum and the similarly named Would You Kill the Fat Man?: The Trolley Problem and What Your Answer Tells Us about Right and Wrong. As if the obese didn't have enough problems, they also stumble into philosophical quandaries merely by walking across bridges at inopportune moments.

In the abstract, the trolley problem can seem frivolous. In the real world, however, such dilemmas can prove very real and complex. Just around the corner lurks a technological breakthrough which will force us to confront the trolley problem once again: the self-driving car.

Say you're sitting by yourself in your self-driving car, just playing aimlessly on your phone while your car handles the driving duties, when suddenly a mother and child step out in front of the car from between two parked cars on the side of the road. The self-driving car doesn't have enough time to brake, and if it swerves to avoid the mother and child, the car will fly off a bridge and throw you to certain death. What should the car's driving software be programmed to do in that situation?

That problem is the subject of an article in Aeon on automated ethics.

A similar computer program to the one driving our first tram would have no problem resolving this. Indeed, it would see no distinction between the cases. Where there are no alternatives, one life should be sacrificed to save five; two lives to save three; and so on. The fat man should always die – a form of ethical reasoning called consequentialism, meaning conduct should be judged in terms of its consequences.

When presented with Thomson’s trolley problem, however, many people feel that it would be wrong to push the fat man to his death. Premeditated murder is inherently wrong, they argue, no matter what its results – a form of ethical reasoning called deontology, meaning conduct should be judged by the nature of an action rather than by its consequences.

The friction between deontology and consequentialism is at the heart of every version of the trolley problem. Yet perhaps the problem’s most unsettling implication is not the existence of this friction, but the fact that – depending on how the story is told – people tend to hold wildly different opinions about what is right and wrong.

Pushing someone to their death with your bare hands is deeply problematic psychologically, even if you accept that it’s theoretically no better or worse than killing them from 10 miles away. Meanwhile, allowing someone at a distance – a starving child in another country for example – to die through one’s inaction seems barely to register a qualm. As philosophers such as Peter Singer have persuasively argued, it’s hard to see why we should accept this.
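The consequentialist rule the excerpt describes is simple enough to write down. Here's a minimal, purely illustrative sketch in Python (the function name and data are hypothetical, not from the article): a program that judges conduct only by its consequences picks whichever action costs the fewest lives, and so sees no difference between the Spur and Fat Man cases.

```python
def choose_action(outcomes):
    """Naive consequentialist policy: pick the action whose
    outcome costs the fewest lives.

    `outcomes` maps an action name to the number of lives lost
    if that action is taken.
    """
    return min(outcomes, key=outcomes.get)

# The "Spur" case: pull the lever (1 dies) or do nothing (5 die).
spur = {"pull_lever": 1, "do_nothing": 5}

# The "Fat Man" case is numerically identical, which is exactly the
# excerpt's point: this program cannot tell the two scenarios apart.
fat_man = {"push_man": 1, "do_nothing": 5}

print(choose_action(spur))     # "pull_lever"
print(choose_action(fat_man))  # "push_man"
```

A deontological policy, by contrast, couldn't be written as a function of outcomes alone; it would need to inspect the nature of the action itself, which is where the programming (and the philosophy) gets hard.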
 

If a robot programmed with Asimov's Three Laws of Robotics were confronted with the trolley problem, what would the robot do? There are long threads dedicated to just this question.

Lots of people have already foreseen this core ethical problem with self-driving cars. I haven't seen any consensus on a solution, though. Not an easy problem, but one that we now have to wrestle with as a society.

Or, at least, some people will have to wrestle with the problem. Frankly, I'm happy today when my Roomba doesn't get itself stuck during one of its cleaning sessions.

Earning to give

From "Join Wall Street. Save the World."

Jason Trigg went into finance because he is after money — as much as he can earn.

The 25-year-old certainly had other career options. An MIT computer science graduate, he could be writing software for the next tech giant. Or he might have gone into academia in computing or applied math or even biology. He could literally be working to cure cancer.

Instead, he goes to work each morning for a high-frequency trading firm. It’s a hedge fund on steroids. He writes software that turns a lot of money into even more money. For his labors, he reaps an uptown salary — and over time his earning potential is unbounded. It’s all part of the plan.

Why this compulsion? It’s not for fast cars or fancy houses. Trigg makes money just to give it away. His logic is simple: The more he makes, the more good he can do.

He’s figured out just how to take measure of his contribution. His outlet of choice is the Against Malaria Foundation, considered one of the world’s most effective charities. It estimates that a $2,500 donation can save one life. A quantitative analyst at Trigg’s hedge fund can earn well more than $100,000 a year. By giving away half of a high finance salary, Trigg says, he can save many more lives than he could on an academic’s salary.
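The arithmetic behind Trigg's reasoning is easy to check against the article's own figures (a $2,500-per-life estimate from the Against Malaria Foundation, and a salary of "well more than $100,000," half of it donated). A back-of-the-envelope sketch, using the conservative lower bound:

```python
# Figures quoted in the article; the variable names are mine.
COST_PER_LIFE = 2_500    # AMF's estimated cost to save one life
salary = 100_000         # conservative lower bound on a quant salary
donation = salary / 2    # Trigg gives away half

lives_saved_per_year = donation / COST_PER_LIFE
print(lives_saved_per_year)  # 20.0
```

Twenty lives a year at the low end, before any raises, which is the whole argument for earning to give in two lines of division.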

Fascinating article.

Among other tidbits of note: GiveWell, which analyzes charities for actual effectiveness in changing lives with each of your donated dollars, rates the Against Malaria Foundation as the single most worthy charity. Their second-ranked charity is GiveDirectly, which essentially just transfers your donation to a poor household in Kenya. They have only 2 employees and 8% overhead.

As for Peter Singer, he is fully supportive of earning to give. 

And [Singer] embraces earning-to-give as among the most ethical career choices one can make, more moral than his own, even. “There is a relatively small group of philosophers who actually have a big influence,” he says from his home in Australia. “But otherwise, the marginal difference that you’re going to make as a professor of philosophy compared to somebody else is not all that great.”

Some further discussion of this article here.

The veil of opulence

At some point in the past decade, the explosion of fantasy leagues and the increasingly hostile and combative tone of sports talk has pushed the discussion toward something ugly, irrelevant, and ultimately boring. "Joey Votto sucks/rules" (a fun argument based in observation and statistics) has become "Joey Votto is/isn't worth $251.5 million" (an uninformed argument about money). The core structure of sabermetric analysis — which was always based on how a team should act if they wanted to win more games, not on how a billionaire should spend his billions — got co-opted into an incomprehensible language of contracts, dollar amounts, and hastily Googled advanced statistics. The veil of opulence seems to be at the core of this shift from entertainment to imaginary business. The accounting practices of corporate leaders should not be at the center of any discussion about baseball, basketball, football, or hockey. And yet we cannot have a conversation about whether Prince Fielder is a better player than Joey Votto without heavily factoring in how much money they make. This sort of opulent reasoning, of course, only helps the owner when he makes decisions that run counter to the best interests of his customers.

That's Jay Caspian Kang writing in Grantland (Kang being one of my favorite writers on the rich Grantland roster). I was thinking the same thing recently when talking with some fellow Cubs fans about the Cubs payroll. Why should I care what the Ricketts family has to spend on the Cubs to field a winning team? I just want the Cubs to win a World Series in my lifetime. It's not my money, really. Even when the Tribune Company was the owner, I didn't care when stories about its financial distress became public.

The only reason I've cared in the past when the Cubs overspent is that I believed they wouldn't continue to spend their way out of those problems in the future. Then it distressed me that they didn't spend wisely because they would end up in quandaries like the one they're in with Alfonso Soriano, unable to move his big contract and unwilling to just write it off. I'd be fine with the Cubs trading Soriano and eating most of the remainder of his contract, but their desire to run the Cubs as a large profit center impedes the speed of their rebuilding. While I'm elated that the Cubs have hired Theo Epstein and Jed Hoyer and an ownership that has a history of spending smarter, I'd be just as happy if the Cubs spent like the Yankees and fielded a superteam.

By the way, "veil of opulence" is a gem of a saying that Kang borrows from this op-ed by Benjamin Hale. It's well worth reading.

Where the veil of ignorance offers a test for fairness from an impersonal, universal point of view — “What system would I want if I had no idea who I was going to be, or what talents and resources I was going to have?” — the veil of opulence offers a test for fairness from the first-person, partial point of view: “What system would I want if I were so-and-so?” These two doctrines of fairness — the universal view and the first-person view — are both compelling in their own way, but only one of them offers moral clarity impartial enough to guide our policy decisions.

Hale goes on to explain why the veil of opulence fails to offer that moral clarity:

But the veil of opulence operates only under the guise of fairness. It is rather a distortion of fairness, by virtue of the partiality that it smuggles in. It asks not whether a policy is fair given the huge range of advantages or hardships the universe might throw at a person but rather whether it is fair that a very fortunate person should shoulder the burdens of others. That is, the veil of opulence insists that people imagine that resources and opportunities and talents are freely available to all, that such goods are widely abundant, that there is no element of randomness or chance that may negatively impact those who struggle to succeed but sadly fail through no fault of their own. It blankets off the obstacles that impede the road to success. It turns a blind eye to the adversity that some people, let’s face it, are born into. By insisting that we consider public policy from the perspective of the most-advantaged, the veil of opulence obscures the vagaries of brute luck.

It's a useful concept, a corrective to my tendency toward excessive empathy in an effort to be fair. As Hale notes, the irony is that that impulse can lead to the exact opposite result.

It seems, however, that the idea of a tight correlation between individual effort and success is core to the American dream, and a huge motivational force. Spend enough time in the tech sector here in the Bay Area, and you couldn't be blamed for arguing that results in the startup and entrepreneurial space are largely probabilistic. Still, continued innovation in tech may depend on individuals believing that their entrepreneurial success is largely deterministic, a matter of effort within their control rather than luck.