How Apple deploys its cash

It's often said that Apple is sitting on too large a hoard of cash, and that if it can't come up with ways to deploy that cash that exceed its own internal rate of return or other such hurdles (to use finance speak), it should return the cash to shareholders.

On Quora, an anonymous account posted an interesting take on why Apple accumulates so much cash and how it deploys it as a strategic weapon.

When new component technologies (touchscreens, chips, LED displays) first come out, they are very expensive to produce, and building a factory that can produce them in mass quantities is even more expensive. Oftentimes the upfront capital expenditure is so huge, and the margins small enough (shrinking over time as the component is rapidly commoditized), that the companies who would build these factories cannot raise sufficient investment capital to cover the costs.

What Apple does is use its cash hoard to pay the construction cost (or a significant fraction of it) of the factory in exchange for exclusive rights to the factory's output for a set period of time (maybe 6-36 months), and a discounted rate afterwards. This yields two advantages:

  1. Apple has access to new component technology months or years before its rivals.  This allows it to release groundbreaking products that are actually impossible to duplicate.  Remember how, for up to a year or so after the introduction of the iPhone, none of the would-be iPhone clones could even get a capacitive touchscreen to work as well as the iPhone's?  It wasn't just the software - Apple simply had access to the new components earlier, before anyone else in the world could gain access to them in mass quantities to make a consumer device.  One extraordinary example of this is the aluminum machining technology used to make Apple's laptops - it remains a trade secret to which Apple has exclusive access, allowing it to make laptops with (for now) unsurpassed strength and lightness.
  2. Eventually its competitors catch up in component production technology, but by then Apple has an arrangement in place whereby it can source those parts at lower cost, thanks to the discounted rate it negotiated with what is now the most experienced and skilled provider of those parts - a provider who has probably brought its own production costs down too.  This discount is also potentially subsidized by Apple's competitors buying those same parts from that provider - the part is now commoditized, so the factory is free to produce it for all buyers, but Apple gets special pricing.
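
As a back-of-the-envelope sketch of why the second advantage works (every number below is invented for illustration; none come from Apple or its suppliers), compare the per-unit cost for a buyer who prepaid a slice of the factory's capex and locked in a discount against a rival paying the open-market price:

```python
# Toy model of the prepay-for-exclusivity deal described above.
# All figures are invented for illustration; none come from Apple.

capex_share = 500e6     # cash paid up front toward factory construction
units = 250e6           # components purchased over the product's life
spot_price = 12.00      # per-unit price rivals pay once the part is open
discount = 0.25         # negotiated post-exclusivity discount

apple_cost = spot_price * (1 - discount) + capex_share / units
rival_cost = spot_price

print(f"Prepaying buyer:  ${apple_cost:.2f}/unit (capex amortized in)")
print(f"Spot-price rival: ${rival_cost:.2f}/unit")
# -> $11.00 vs $12.00 per unit, plus months of exclusive access
#    that this simple model doesn't even price in.
```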

For me this recalls sushi restaurants. When I first graduated college and finally had enough money to eat sushi regularly, I'd sit at the bar at a sushi restaurant and watch the sushi chef cutting the fish and wonder what was so difficult about what they were doing. After watching a few of them, I felt like I could climb over the counter and assemble a reasonably good piece of sushi myself.

After watching Jiro Dreams of Sushi, I realized that watching a chef prepare a piece of sushi was just the tip of the iceberg, that much of what set one sushi restaurant apart from another was the supply chain, the relationships with the right buyers and suppliers and fishmongers that helped secure the best ingredients.

This strikes me as the way most people underestimate Apple. They see the aesthetics of the final product, the software or hardware design of an iPhone or a MacBook Air, and they don't see any sustainable competitive advantage. All of that can be copied, they think.

Leaving aside the fact that in hardware design if you have to copy someone else in technology you're already one generation behind, what people often fail to see (or can't, given Apple's secrecy) is the massive supply chain edifice below the water's surface. Scaling in software may be less of a problem for David than it once was, but in hardware it pays to be Goliath. 

Tech tidbits

  1. Researchers have developed a type of chemical iris that could enable photographers to select apertures on really tiny cameras (think camera phones) in the future. Maybe someday we'll get to shoot wider open on camera phones, enabling the type of shallow depth of field that is the one piece of a photographer's toolbox most noticeably absent from the most popular camera today, the smartphone (see the sketch after this list for why phone cameras struggle here).
  2. Netflix signs Chelsea Handler for a new talk show. They're going to keep re-investing their profits in original programming. Imagine if HBO didn't have one particular house style for original programming but instead tried to target even more segments of viewers with its programs. How many subscribers could it sign up? That's what Netflix is setting out to do.
  3. Viacom and about 60 small cable operators representing about 900,000 households went to war over carriage fees. In a notable first, the two sides decided to part ways rather than settle. Supposedly most households didn't care, and the cable operators lost less than 2% of subscribers, much lower than the 10% churn they were bracing for. Not entirely surprising, considering Viacom's target viewership is less represented in the flyover states, and most of the generation that watches Viacom programming can likely find that stuff online. This next generation of kids has never paid for cable and probably never will. There remains just one channel that every cable operator in the country would have to suck it up and pay just about any price for, and that's ESPN, which not surprisingly demands the highest carriage fee (by a wide, wide margin) in your cable channel lineup.
  4. The Oxford Mail is experimenting with letting WhatsApp users follow it to get occasional news alerts tailored to their interests. This is a small test but an important one, as it may signal WhatsApp is finally moving to become a platform like its Asian peers LINE, WeChat, and KakaoTalk. Those chat services have corporate and celebrity accounts you can follow to receive broadcast text messages, much like following such an account on Twitter.
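
On the depth-of-field point in item 1: a rough way to see why smartphones struggle is the thin-lens approximation, where total depth of field scales roughly as N·c·d²/f² (aperture number N, circle of confusion c, subject distance d, focal length f). The specific camera figures below are illustrative guesses, not measurements:

```python
def total_dof_mm(f_mm, N, c_mm, d_mm):
    """Approximate total depth of field for a subject well inside the
    hyperfocal distance: DoF ~ 2*N*c*d^2 / f^2 (thin-lens model)."""
    return 2 * N * c_mm * d_mm**2 / f_mm**2

# Illustrative figures only: a phone-ish camera vs. a full-frame
# portrait lens, both focused on a subject 2 m away.
phone = total_dof_mm(f_mm=4.2, N=2.2, c_mm=0.002, d_mm=2000)
full_frame = total_dof_mm(f_mm=50, N=1.8, c_mm=0.03, d_mm=2000)

print(f"Phone camera: ~{phone/1000:.1f} m in focus")       # ~2.0 m
print(f"Full frame:   ~{full_frame/1000:.2f} m in focus")  # ~0.17 m
# A wider aperture (smaller N) shrinks depth of field linearly, but
# the tiny focal length of a phone lens is the bigger obstacle -- a
# selectable aperture alone only gets you part of the way there.
```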

The Nash Equilibrium of Silicon Valley

A "Nash equilibrium" solution is one where, when both parties have considered all available moves, minimizes damage to themselves under the assumption the other player is a selfish dickhead. In a capitalist environment like America's, with no social controls or other factors besides ruthless logic, this is the default behavior. And it should be, if games are strictly competitive and humans are built like computers.

A "Pareto optimal" solution is one which, given all the possibilities of action, produces the best outcome for both parties (with some negotiable surplus).

The Prisoner's Dilemma demonstrates that a Nash equilibrium solution is not always Pareto optimal. The "confess-confess" solution is the Nash equilibrium. You will always be better off screwing the other person over, whether they are honest or dishonest. The "deny-deny" solution is Pareto optimal. If both parties can somehow trust each other, they will both be better off selecting this solution.

...
When someone like Elon Musk comes along, someone who is clearly working very hard toward Pareto optimal outcomes (watch or read about his personal history), we simply cannot fathom his actions, because they can't be explained within a traditional Nash-equilibrium, dog-eat-dog model of capitalism.

Of course, this applies way beyond Tesla. I believe the current skepticism around Silicon Valley's "Make The World a Better Place" mentality is deeply rooted in historical anxiety about institutional capitalism. I don't think this anxiety is misplaced. Rather, I think that technology, specifically the World Wide Web, presents a "way out" of this dilemma. It will take time, but ultimately the Web's power is that it can mimic the "accountability" aspect of local transactions, but on a global scale.

From Chris Johnson, whose blog I just stumbled across for the first time.
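
The payoff structure is easy to check mechanically. Here's a minimal sketch using the standard textbook Prisoner's Dilemma numbers (the excerpt doesn't specify any), confirming that confess-confess is the lone Nash equilibrium while deny-deny is Pareto optimal:

```python
from itertools import product

# Prisoner's Dilemma payoffs as years in prison (lower is better).
# Standard textbook numbers; the excerpt above doesn't specify any.
ACTIONS = ("deny", "confess")
YEARS = {
    ("deny", "deny"):       (1, 1),
    ("deny", "confess"):    (10, 0),
    ("confess", "deny"):    (0, 10),
    ("confess", "confess"): (5, 5),
}

def is_nash(a, b):
    """Neither player can do better by unilaterally switching."""
    return (all(YEARS[(a, b)][0] <= YEARS[(x, b)][0] for x in ACTIONS) and
            all(YEARS[(a, b)][1] <= YEARS[(a, y)][1] for y in ACTIONS))

def is_pareto(a, b):
    """No other outcome is at least as good for both and different."""
    ya, yb = YEARS[(a, b)]
    return not any(xa <= ya and xb <= yb and (xa, xb) != (ya, yb)
                   for xa, xb in YEARS.values())

for a, b in product(ACTIONS, repeat=2):
    print(f"{a}/{b}: Nash={is_nash(a, b)}, Pareto={is_pareto(a, b)}")
# Only confess/confess survives the unilateral-deviation check, while
# deny/deny is Pareto optimal but unstable -- exactly the gap between
# ruthless logic and mutual trust the excerpt describes.
```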

The unfortunate part of the Game of Thrones among the tech titans of Silicon Valley (Amazon, Apple, Google, Facebook, Twitter) is that the industry as a whole has tended toward Nash equilibria.

In general, healthy competition is good for consumers - I'm of that camp - especially when there is one dominant entity. In many areas, though, we have companies devoting precious resources to building out near clones of another company's services just for defensive purposes. It's not just an issue with patents. Is it critical that we have yet another streaming music service? Another mapping app for mobile devices? Is it good for consumers when services from one company are kept off another company's hardware/software ecosystem just for competitive reasons? So many of our best and brightest are duplicating work because of tribal (company) affiliation.

The instant-on computer

A long time ago, when I was at Amazon, someone asked Jeff Bezos during an employee meeting what he thought would be the single thing that would most transform Amazon's business.

Bezos replied, "An instant-on computer." He went on to explain that he meant a computer that would be instantly ready to use when you hit a button. Desktops and laptops in those days had a really long bootup process, and still do even today. Even when I try to wake my MacBook Pro from sleep, the delay is bothersome.

Bezos imagined that computers that were on with the snap of a finger would get used more frequently, and the more people were online, the more they'd shop from Amazon. It's like the oft-cited Google strategy of just getting more people online, since they'd likely run across an ad from Google somewhere given its vast reach.

We now live in that age, though it's not desktops and laptops but our tablets and smartphones that are the instant-on computers. Whether it's transformed Amazon's business, I can't say; they have plenty going for them. But it's certainly changed how we use computers generally. I only ever turn off my iPad or iPhone if something has gone wrong and I need to reboot, or if I'm low on battery power and need to speed up recharging.

In this next age, anything that cannot turn on instantly and isn't connected to the internet at all times will feel deficient.

How to become a speed reader, updated

Spritzing presents reading content with the ORP located at the specific place where you’re already looking, allowing you to read without having to move your eyes. With this approach, reading becomes more efficient because Spritzing increases the time your brain spends processing content without having to waste time searching for the next word’s ORP. Spritzing also enhances reading on small screens. Because the human eye can focus on about 13 characters at a time, Spritzing requires only 13 characters’ worth of space inside our redicle. No other reading method is designed to help you read all of your content when you’re away from a large screen. But don’t take our word. The following video compares traditional reading to Spritz and is a real eye-opener when it comes to the efficiencies that are gained by placing words exactly where your brain wants them to be located.

More here from Spritz Inc. on their speed reading technology. It's worth looking at a demo of the Spritz speed reading aid in action in this article. Spritz positions each word of the text you're reading so that the key letter of the word sits at the same fixed point, meaning your eye doesn't have to move across words on a page. It turns out that eye movement in traditional reading is inefficient. Letting your eye stay fixated on one spot increases your reading throughput (though it sounds lazy; don't make my eye move even a few millimeters, it's so taxing!).
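
As a rough sketch of the mechanics: an RSVP reader just flashes one word at a time with its pivot letter pinned to a fixed column. The ORP heuristic below is the length-based rule used by open-source imitations of Spritz; Spritz's actual placement rule is proprietary, so treat this as an assumption:

```python
import time

def orp_index(word: str) -> int:
    """Rough Optimal Recognition Point heuristic: slightly left of
    center, as in open-source Spritz clones. The real rule is an
    assumption here, not Spritz's published algorithm."""
    n = len(word)
    if n <= 1: return 0
    if n <= 5: return 1
    if n <= 9: return 2
    if n <= 13: return 3
    return 4

def spritz(text: str, wpm: int = 300, frame: int = 13) -> None:
    """Flash one word at a time with its ORP letter pinned to a
    fixed column, so the eye never has to move."""
    pivot = frame // 2              # fixed column for the ORP letter
    delay = 60.0 / wpm
    for word in text.split():
        pad = pivot - orp_index(word)   # left-pad so ORPs line up
        print("\r" + " " * max(pad, 0) + word + " " * 20,
              end="", flush=True)
        time.sleep(delay)
    print()

spritz("Spritzing presents reading content with the ORP located "
       "where you are already looking.", wpm=250)
```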

I took a speed reading course when I was in 6th grade, where I was taught that the key to speed reading was to consume blocks of words at a time and to stop yourself from subvocalizing (that is, sounding out the words silently in your head as you read). You can try a number of tricks to cure yourself of that habit; one is to hum to yourself while reading, which blocks your ability to subvocalize.

Spritz's approach to speed reading is a bit different. Rather than scanning groups of words at a time, you're reading one word at a time. I can't imagine reading that way, but everything new seems odd, and every time I find myself rejecting the new I feel like Grandpa Simpson, so I'm curious to try this out.

UPDATED: Professor John Henderson is skeptical of Spritz's claims.

So Spritz sounds great, and even somewhat scientific. But can you really read a novel in 90 minutes with full comprehension? Well, like most things that seem too good to be true, the answer unfortunately is no. The research in the 1970s showed convincingly that although people can read using RSVP at normal reading rates, comprehension and memory for text falls as RSVP speeds increase, and the problem gets worse for paragraphs compared to single sentences. One of the biggest problems is that there just isn’t enough time to put the meaning together and store it in memory (what psychologists call “consolidation”). The purported breakthrough use of the “ORP” doesn’t really help with this, and isn’t even novel. In the typical RSVP method, words are presented centered at fixation. The “slightly left of fixation” ORP used by Spritz is a minor tweak at best.

Two other points are worth noting. One is that reading at fast RSVP rates is tiring. It requires unwavering attention and vigilance. You can’t let your mind wander, ponder the nuances of what you’re reading, make a mental note to check on a related idea, or do any other mental activity that would normally be associated with reading for comprehension. If you try, you’ll miss some of the text that is relentlessly flying at you. The second point is that the difficulty of comprehension during reading changes over the course of a sentence, paragraph, and page. Our eyes engage in a choreographed dance through text that reflects this variation in the service of comprehension. RSVP makes every step in the dance the same. Or, to stretch an analogy, imagine hiking along a forest trail. Each step you take determines your overall hiking speed. Some steps require a longer pause to gain footing on loose stones, and others require a longer stride to step over a protruding root. Would it be effective to run on the trail? Worse, would it be a good idea to tie a piece of rope between your ankles so that each step was constrained to be exactly the same length? Surely this would lead to some stumbling, if not to a twisted ankle or catastrophic fall!

People no longer have to buy computers that overserve

A Mac or PC is a superior experience for traditional computing activities, at least according to traditional measurements like speed or efficiency, but an iPad is simpler and more approachable, and it does other things as well.

(This, of course, is why Macs aren’t going away. In fact, as Phil Schiller noted at the end of this great Macworld piece marking the Mac’s 30-year anniversary, the iPad has freed the Mac to focus even more on power users going forward.)

Ultimately, it is the iPad that is in fact general purpose. It does lots of things in an approachable way, albeit not as well as something that is built specifically for the task at hand. The Mac or PC, on the other hand, is a specialized device, best compared to the grand piano in the living room: unrivaled in the hands of a master, and increasingly ignored by everyone else.

So writes Ben Thompson in The General-Purpose iPad and the Specialist Mac. I agree. For a long time, one of the debates was whether the iPad was just a consumption device. While I think it's silly to argue that you can't create on an iPad, I do largely use mine for consumption. I'd much rather do many things on my desktop or laptop than on my iPad: write, build spreadsheets, wireframe, create presentations, edit video.

But there are plenty of activities for which the iPad and iPhone are far better suited because they are portable, light, sensitive to touch, and, not to be underestimated, always on (while I leave my laptop on most of the time, it still takes longer to wake it up and get going than my iPad or iPhone). Browsing web pages. Reading books. Reading my email, Twitter, Instagram, Facebook. Messaging.

For some activities, the interaction method of finger on screen is both more intimate and simpler. For example, dragging my finger across the screen to adjust the brightness of photos in Snapseed is more pleasurable than finding a tiny slider handle with my mouse cursor and then moving it in tiny increments. Double-tapping to have mobile Safari zoom in on a column of content on the web is wonderful; I wish I could do that on my laptop.

It's clear that for many years, my desktop and laptop have been too much computer for many jobs. For many people, all they needed a desktop or laptop for was reading email, surfing the web, listening to music, or watching streaming video. For those tasks, a desktop or laptop overserved their needs, but those were the only types of computers we had, so we used them as such.

Now that the world has more choices in computing devices for the job, many are choosing a tool that doesn't overserve, and that is more often than not an iPad or smartphone. For the average household, those are much cheaper to purchase than a laptop or desktop.

I still love sitting down in front of a giant monitor hooked up to my old Mac Pro in my office at home, but the sales figures don't lie: that setup is now the minority.

Some serious pivots

Startups in Silicon Valley get plaudits for pivoting, but a company that has had to make some real pivots with a capital P across many decades is none other than mobile phone goliath Samsung.

I had dinner tonight with a friend whose grandfather was one of several people brought to Samsung to help it make its first entry into technology hardware. At its founding in 1938, though, Samsung was a simple trading company that dealt in local produce and its own noodles. Later it shifted to processing sugar cane, then it moved into textiles. That was the first in a long line of transformations in its evolution from small family business to global conglomerate. From making your own noodles to making your own smartphones: that is survival and adaptation of the highest form.

There aren't many U.S. tech companies that have even been around that long, let alone having evolved so drastically. Off the top of my head, IBM and Xerox are the only two tech companies I can think of that were founded in the U.S. prior to Samsung in 1938 and that still exist. I'm going to venture that neither of those began as noodle makers.

DNA for data storage

Scientists were able to store 739 kB of data in DNA.

The study reported that the institute's team had stored all 154 Shakespeare sonnets, a photo, a PDF of a scientific paper, and a 26-second sound clip from US civil rights leader Martin Luther King Jnr's "I Have a Dream" speech in a barely visible bit of DNA in a test tube.

"We downloaded the files from the web and used them to synthesise hundreds of thousands of pieces of DNA. The result looks like a tiny piece of dust," said Emily Leproust of Agilent, a biotech company that took the digital data and used it to synthesise molecules of DNA in a laboratory in the United States.

Agilent then mailed the sample across the Atlantic to the EBI, where the researchers soaked it in water to reconstitute it and used standard sequencing machines to unravel the code. They recovered and read the files with 100 per cent accuracy. "It's also incredibly small, dense and does not need any power for storage, so shipping and keeping it is easy," Goldman added.

Not great for frequent retrieval, given the cost of synthesizing DNA to write the data and sequencing it to read it back, but as long-term backup, really robust.
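
To make the idea concrete: DNA has four bases, so each base can in principle carry two bits. The sketch below is the naive mapping, not the actual EBI/Goldman scheme (which used a base-3 code designed to avoid runs of the same nucleotide, plus overlapping fragments for error correction):

```python
# Toy illustration: the naive 2-bits-per-base mapping, NOT the actual
# EBI/Goldman encoding used in the study quoted above.
BASES = "ACGT"

def encode(data: bytes) -> str:
    """Map each byte to four nucleotides, two bits per base."""
    return "".join(BASES[(b >> s) & 0b11]
                   for b in data for s in (6, 4, 2, 0))

def decode(strand: str) -> bytes:
    """Invert the mapping: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for ch in strand[i:i + 4]:
            byte = (byte << 2) | BASES.index(ch)
        out.append(byte)
    return bytes(out)

msg = b"I have a dream"
strand = encode(msg)
print(strand)                  # 'CAGC...' -- 4 bases per byte
assert decode(strand) == msg   # round-trips with 100% accuracy
```

At two bits per base, the 739 kB in the study works out to roughly three million nucleotides, which is indeed a barely visible speck.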

The data stored in the test amounted to only 739 kilobytes, but the technique could be scaled up to store the three zettabytes, or 3,000 billion billion bytes, of stored data estimated to exist on earth, and the only limitation to wide implementation is the high cost of synthesising DNA, the researchers said. The world's data would theoretically fit in one hand and could be stored safely for many centuries, they said.

It feels like there's a sci-fi novel in this somewhere.