The hidden value of the NBA steal

This post by Benjamin Morris at FiveThirtyEight is one of the more interesting pieces the site has published so far.

In fact, if you had to pick one statistic from the common box score to tell you as much as possible about whether a player helps or hurts his team, it isn’t how many points he scores. Nor how many rebounds he grabs. Nor how many assists he dishes out.

It’s how many steals he gets.

...

Steals have considerable intrinsic value. Not only do they kill an opponent’s possession, but a team’s ensuing possession — the one that started with the steal — often leads to fast-break scoring opportunities. But though this explains how a steal can be more valuable than a two-point basket, it doesn’t come close to explaining how we get from that to nine points.

I’ve heard a lot of different theories about how steals can be so much more predictively valuable than they seem: Steals “cost” less than other stats, or players who get more steals might also play better defense, or maybe steals are just a product of, as pundits like to call it, high basketball IQ. These are all worth considering and may be true to various degrees, but I think there’s a subtler — yet extremely important — explanation.

Think about all that occurs in a basketball game — no matter who is playing, there will be plenty of points, rebounds and assists to go around. But some things only happen because somebody makes them happen. If you replaced a player with someone less skilled at that particular thing, it wouldn’t just go to somebody else. It wouldn’t occur at all. Steals are disproportionately those kinds of things.

I haven't visited FiveThirtyEight as regularly as I thought I would. To some extent it feels a bit like a solution still in search of a problem. That is, analytic rigor with data is great, but it felt more essential as an antidote to hysteria during the elections. When it doesn't feel like you're sick, taking medicine regularly isn't as appealing.

It's still early, though. If nothing else, they must certainly be analyzing the data on their traffic and engagement carefully. I personally would love to see more voice from their writers (which need not be mutually exclusive with analytical rigor) and a higher incidence of longer pieces.

We should take Skynet seriously

Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it may play out differently than in the movie: as Irving Good realized in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity" and Johnny Depp's movie character calls "transcendence." One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a text message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here -- we'll leave the lights on"? Probably not -- but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues outside small non-profit institutes such as the Cambridge Center for Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us -- not only scientists, industrialists and generals -- should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

Stephen Hawking, Max Tegmark, Stuart Russell, and Frank Wilczek with a warning about what lies at the end of an extrapolation of artificial intelligence.

New Girl Talk album

Girl Talk's new album Broken Ankles, a collab with the rapper Freeway, can be downloaded at DatPiff.

An early review from Stereogum:

Free always sounded too frantic to really be having fun. And party music is all Girl Talk does. So how was this going to work?

Really well, it turns out! At six tracks, Broken Ankles lasts just under half an hour, which turns out to be just the right amount of time for both of these guys, whose respective styles can wear thin at album length. And Gillis tweaks his style just enough to bring Freeway into his world without forcing Free to become an entirely different sort of rapper. Gillis certainly piles on the samples on Broken Ankles, as you can see from this partial list of artists sampled. But the whole shock-of-recognition thing that drives his solo work is way less of a factor here; if you can pick out the Add N to (X) or Iannis Xenakis samples, go ahead and award yourself a best-listener medal. Instead, he’s taken these pieces of music and crafted them into actual songs, slowing his sound down to midtempo and stripping it back enough that it won’t get in Free’s way without really losing intensity. On a track like “I Can Hear Sweat,” there’s a ton going on: Busy drums, Nine Inch Nails synth riff, buzzing bass, great little sonar blips on the hook. But all that activity never becomes the center of attention, except maybe on the non-rapped chorus. It’s a cluttered but effective rap beat, and it never distracts from Jadakiss’s absolutely cold-blooded guest verse or from the oddly endearing and goofy moment where Free says something about “reading books by Tom Sawyer.”

Under the Skin

Two movies from the Toronto Film Festival last year really stuck in my memory. One was Gravity, for its almost minimalist, allegorical story structure. Everyone leaving the theater after the premiere knew it would be a huge hit; it was accessible, and it recalled what movies on the big screen can do that no other medium can match. In the moment, I thought it had a very good chance to be the next Best Picture winner (that award went to another movie that played the festival, 12 Years a Slave, which I couldn't get a ticket for).

However, while I enjoyed Gravity, I was haunted by Under the Skin. It was the final movie I saw at the festival, and it's the type of movie that stuns you there, because a catalog description can't really do it justice; so much of its effect comes not from plot but from the cinematography, score, editing, and other elements unique to film. The hook that got me to pick the movie out of the catalog was not Scarlett Johansson as the lead, though I enjoyed her work in Ghost World and Lost in Translation, but the director, Jonathan Glazer.

As a film director, Glazer isn't prolific. Most of his output has been music videos (e.g. the music video for Karma Police by Radiohead) and commercials. Until Under the Skin, he had directed just two narrative films, Sexy Beast and Birth, and the latter came out ten years ago. Birth, in particular, contained a sequence with such a beautiful, seamless melding of music and onscreen action that I analyzed it as a dance.

I said the movie couldn't be understood from its plot description, but some exposition helps explain why the movie's formal elements are so successful.

[MINOR SPOILERS AHEAD. Though the plot of the movie can't begin to describe the sensation of watching it (imagine describing the plot of 2001), if you plan to see the movie, I recommend reading as little about it as possible.]

Johansson plays an alien wearing the body of a human. In the most fascinating performance of her career, Johansson deadens all evidence of recognizable inner life, in both her facial expressions and her physical movements. As the movie progresses, she lets a few recognizable glimmers of humanity emerge, cluing us in to the alien's growing sympathy for her human subjects.

Part of the horror of the movie comes from the secret live footage mixed into it. To capture scenes of the alien seducing human men, Glazer put Johansson in a wig and behind the wheel of a truck and had her drive around Glasgow picking up strange men off the street. Glazer hid in the back of the truck, listening to the audio from a hidden mic while concealed cameras recorded the interactions.

One could argue that if aliens came to Earth in the form of Scarlett Johansson in bright red lipstick, we'd likely be done for (well, at least the straight men of the world would be), but Glazer and cinematographer Daniel Landin manage to render the seduction scenes terrifying. The moments in the truck were shot with concealed cameras that couldn't be moved, and the odd, fixed camera angles are different enough from the conventional Hollywood angles used to film conversations in automobiles that we feel we're watching found documentary footage. The subsequent seductions take place in a seemingly black void, choreographed like some avant-garde dance performance. If it's possible to make Scarlett Johansson disrobing seem nonerotic, Glazer and team come as close as one could imagine.

Much of the movie's cinematography sits in a strange space between a video or documentary look and something more cinematic. Most of the lighting is natural, and many scenes are dark, even murky. To be able to shoot undetected, and with maximum freedom of movement for the actors, the cinematographer worked with a London studio to develop a custom camera that was used to shoot some of the movie's scenes. The resulting look is familiar yet strange, contributing to the movie's otherworldly feel.

The score by Mica Levi is a masterpiece. You can hear snippets from it in the trailer. It's the perfect music for the next time you plan to host an Eyes Wide Shut-inspired orgy. When the score isn't playing, the movie is largely dialogue-free, almost like a silent movie or a voiceover-free documentary like Leviathan. At times the score evokes Ligeti, but like the other elements of the movie, it is like nothing else I've heard before.

Under the Skin is a masterpiece, but plenty of people walked out of the Toronto premiere befuddled or disturbed. It's not a conventional narrative; it unfolds at its own pace and cares little for evoking any familiar human emotions, so seriously does it take its mission of showing us our world through alien eyes. On that point it succeeds, too well for some of the people I spoke to afterwards. More than any movie I can recall, it clinically observes and presents so many of the oddities of humanity—desire, compassion, lust, and death—and its success is in making us see them for their bizarre and wondrous qualities.

Fitts's Law, the Tesla Model S, and touchscreen car interfaces

Fitts’s Law can accurately predict the time it will take a person to move their pointer, be it a glowing arrow on a screen or a finger tip attached to their hand, from its current position to the target they have chosen to hit.
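
For the curious, the law itself is a simple logarithmic model of movement time. Here's a minimal sketch of the common Shannon formulation in Python (the function name is mine, and the constants a and b are made-up placeholders; in practice they're fit empirically to a particular device and user):

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to hit a target of a given width at a
    given distance, per the Shannon formulation of Fitts's Law.
    a and b are illustrative placeholders, not real fitted values."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Big, close targets are quick; small, distant targets are slow.
print(fitts_movement_time(distance=50, width=100))  # ~0.19 s
print(fitts_movement_time(distance=300, width=20))  # ~0.70 s
```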

Much more about Fitts's Law here from Tog. This bit was instructive:

Paul Fitts was not a computer guy. He was working on military cockpit design when he discovered his famous Law. Paul Fitts never had to deal with the issue of stability because stuff inside aircraft cockpits is inherently stable. The few things that do move only do so because the pilot moved them, as when he or she pushes a control stick to the side or advances the throttle forward. The rest of the targets the pilot must acquire—the pressure adjustment on the altitude indicator, the Gatling gun arm switch, the frequency dial on the radio, and fuel pump kill switch—stay exactly where they were originally installed. Everything in that cockpit is a predictable target, either always in the same place or, in the case of things like the throttle, within a fixed area and exactly where you left it. Once you become familiar with the cockpit and settle into flying the same plane hour after hour after hour, you hardly look at your intended targets at all. Your motor memory carries your hand right to the target, with touch zeroing you in.

I had heard of Fitts's Law but didn't know its history, and it came to mind as I was driving my Tesla Model S recently.

In almost every respect, I really love the car. I took ownership of my Model S in December 2012, after having put down a deposit over 3.5 years earlier, and I long ago stopped thinking of it as anything other than a car, perhaps the most critical leading indicator of whether it can cross the chasm as a technology. I've forgotten what it's like to stop and pump gas (what are gas prices these days, anyway?); it's roomy enough that I can throw my snowboard, road bike, and other things in the back with room to spare; and I still haven't tired of occasionally flooring it and getting compressed back into my seat like I'm being propelled by a giant rubber band released after being stretched to its limit. Most of all, it's still a thrill to fire the car up and find a new software update ready to install, almost as if the Model S were a driveable iPad.

It's the ability to update the interface via software that gives me hope that a few things in the interface might be adjusted.* In a Model S, most of the controls are accessible via a giant touchscreen in the center of the console. There aren't many buttons or switches except on the steering wheel, which handles some of the more common actions: changing the thermostat, adjusting the volume of the sound system, skipping ahead a track, and making phone calls.

When the car first came out, one of the early complaints was the lack of physical controls. I was concerned as well. Physical controls are useful because, without taking my eyes off the road, I can run my fingers across a bank of controls to locate the one I want without activating the wrong ones by mistake as I search. With a touchscreen, there is no physical contour differentiating controls; you have to actually look at the screen to hit the appropriate control, taking your eyes off the road.

[I also confess to some nostalgia for physical controls for their aesthetics: controls give physical manifestation to the functionality of a car. The more controls a car has, the more it appeals to geeks who love functionality, and physical controls also give car designers a chance to show off their skills. I find many old-school car dashboards quite sexy with all their knobs and switches and levers. Touchscreens hide all of that, which has a more minimalist, arguably more modern, appeal.]

In practice, I have not missed them as much as I thought I would because a lot can be operated by physical controls on the steering wheel.

However, one task a touchscreen makes difficult in practice is hitting a button while the car is in motion. It turns out road vibration makes it very hard to keep your arm and hand steady and to hit touch targets with precision. That's why I rely so much on the steering wheel controls in the Model S to do things like adjust the volume or change the temperature. Not only are those controls accessible without my having to move my hands or look at the touchscreen, but the steering wheel acts as an anchor for my hand, taking road vibration out of the equation.

Maybe there is an analogue to Fitts's Law for touchscreen interfaces in cars, or anywhere else your body is being jostled or in motion. What you'd want in such cases is maximum forgiveness in the UI, because it's hard to accurately hit a specific spot on the screen.

Matthaeus Krenn recently published a proposal for touch screen car interfaces that takes this idea to the logical extreme. You can read about it and watch a video demo as well. Essentially Krenn transforms the entire touchscreen in the Tesla into one single control with maximum forgiveness for your fingers to be jostled horizontally since only the vertical movement of your hand matters. By using the entire screen and spreading the input across a larger vertical distance, you can have a much larger margin of error to get the desired change. Krenn also tracks the number of fingers on the screen to allow access to different settings.
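
A rough sketch of how that might be wired up (this is my own toy approximation of the idea, not Krenn's code, and the setting names are invented):

```python
class WholeScreenControl:
    """Toy model of a Krenn-style whole-screen control: the number of
    fingers down selects a setting, and only vertical drag distance
    adjusts it, so horizontal jitter from road vibration is ignored."""

    # Hypothetical mapping; Krenn's demo defines its own set of functions.
    SETTINGS_BY_FINGER_COUNT = {1: "volume", 2: "temperature", 3: "fan_speed"}

    def __init__(self, screen_height_px, value_range=100.0):
        # Spread a setting's full range across the whole screen height,
        # so a few pixels of error translate to a tiny change in value.
        self.px_per_unit = screen_height_px / value_range
        self.values = {name: 50.0 for name in self.SETTINGS_BY_FINGER_COUNT.values()}

    def on_drag(self, finger_count, delta_y_px):
        setting = self.SETTINGS_BY_FINGER_COUNT.get(finger_count)
        if setting is None:
            return  # unrecognized gesture; do nothing
        # Note that delta_x is never read: horizontal movement is forgiven.
        new_value = self.values[setting] + delta_y_px / self.px_per_unit
        self.values[setting] = max(0.0, min(100.0, new_value))
```

Spreading each setting's range across the full screen height is what buys the forgiveness: vertical error costs little, and horizontal error costs nothing at all.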

This is an interesting proposal, but for some of the most accessed functions of the car, controls on the steering wheel are still superior. The left scrollwheel on the Model S steering wheel is more convenient for changing the volume of the stereo and toggling between play and stop (you can press the scrollwheel) than the touchscreen. The right scrollwheel is more convenient for changing the car temperature and turning the climate control on and off than the touchscreen. Both scrollwheels allow you to keep both hands on the steering wheel rather than having to take the right hand off to access the touchscreen.

Actually, the ideal solution to almost all of these problems is a combination of the steering wheel controls and another interface that already exists in the car: voice. The ideal car interface, from a safety standpoint, would allow you to keep your eyes on the road and both hands on the steering wheel at all times. The scrollwheels, the steering wheel buttons, and voice commands all satisfy both conditions.

In the Model S, to issue a voice command, you press and hold the upper right button on the steering wheel and issue your voice command, after which there is a delay while the car processes your command.

Unfortunately, for now, the number of voice commands available in the Tesla Model S is quite limited:

  • Navigation — you can say "navigate to" or "drive to" or "where is" followed by an address or destination
  • Audio — you can say "play" or "listen to" followed by an artist name, or a song title plus artist name, and it will try to set up the right playlist or find the specific track using Slacker Radio (one of the bundled audio services for Model S's sold in the U.S.)
  • Phone — if you connect a phone via Bluetooth, you can say "call" or "dial" followed by the name of a contact in your phone contact book

I'm not sure why the command list is so limited. When I first got the car, I tried saying things like "Open the sunroof" or "Turn on the air conditioning" to no avail.
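
In effect, the car behaves like a dispatcher with three hard-coded command families. A toy illustration of that behavior (the regex patterns and action names are my guesses, not Tesla's actual implementation):

```python
import re

# One (pattern, action) pair per supported command family.
COMMAND_PATTERNS = [
    (r"(navigate to|drive to|where is)\s+(?P<arg>.+)", "navigate"),
    (r"(play|listen to)\s+(?P<arg>.+)", "play_audio"),
    (r"(call|dial)\s+(?P<arg>.+)", "call_contact"),
]

def dispatch(utterance):
    """Return (action, argument) for a recognized command, else (None, None)."""
    for pattern, action in COMMAND_PATTERNS:
        match = re.match(pattern, utterance.strip(), re.IGNORECASE)
        if match:
            return action, match.group("arg")
    return None, None

print(dispatch("Navigate to 123 Main Street"))  # ('navigate', '123 Main Street')
print(dispatch("Open the sunroof"))             # (None, None): unsupported
```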

Perhaps the hardware and software for voice processing in the car aren't powerful enough to handle more sophisticated commands? Perhaps, though it seems voice commands are sent to the cloud for processing, which should enable more sophisticated handling whenever you have cellular connectivity. Or perhaps the car could offload voice processing to select cell phones with more onboard computing power.

In time, I hope more and more controls become accessible by voice. I'd love to have voice controls passed through to my phone via Bluetooth, too: ask my phone to play my voicemails through the car's audio system, say, or to read my latest text message. For safety reasons, it's better not to fiddle with any controls while driving, analog or touchscreen-based.

Perhaps this is a problem with a closing window given the possibility of self-driving cars in the future, but that is still a technology whose arrival date is uncertain. In the meantime, with more and more companies like Apple and Google moving into the car operating system space, I hope voice controls are given greater emphasis as a primary mode of interaction between driver and car.

* One other thing I'd love to see in a software refresh is less 3D in the digital instrument cluster above the steering wheel. I have some usability concerns with the flat interfaces currently in vogue in mobile phone UIs, but the digital instrument cluster in a car is not meant to be touched, and the strange lighting reflections and shadow effects used there in the Tesla feel oddly old-fashioned. It's one interface where flat design seems more befitting such a modern marvel.