Uber for disembodied companionship

Kashmir Hill worked as an invisible boyfriend/girlfriend for a month. We are already living with one foot in the near future.

With each job, I would see the person’s first name, last initial, and hometown; “how we met”; and my own assigned name, age, and which of six personality types they’d given their Invisible. Now I’m adventurous and fun. Now I’m cheerful and outgoing.
 
There were three major rules:
1. I was always supposed to be upbeat in my messages.
2. I was never supposed to break character.
3. No sexting. (Photos are blocked on the service.)
 
I’d get the story of how we met and the last 10 messages we’d exchanged. This setup is designed to create the illusion of continuity; ideally, an Invisible Boyfriend would seem like a steady, stable presence in a user’s life, instead of what it really is: a rotating cast of men and women. And it is both: a woman who works for the service previously told me she prefers playing the role of boyfriend because she knows what a woman wants to hear.
 

Hill probably got paid more for this article than she did for her work as an invisible companion.

It’s hard to put a price on love. But Crowdsource did. It’s worth a whopping five cents. That’s how much I got paid to write each of these texts.
 
If I spent an hour answering texts, and took the full five minutes to write each one, I’d be making 60 cents an hour, far below the minimum wage. This is legal because all the workers on the platform are classified as independent contractors rather than employees. “Contributors have a tremendous amount of control over their decisions—for example, when to perform a task, when to complete it, and even if they want to complete it at all,” said Jeffrey H. Newhouse, an employment lawyer at Hirschler Fleischer, by email. “That means the contributor isn’t an employee and, as a result, employee protections like the minimum wage don’t apply.”
 

Not surprising, considering the required skill set is just the ability to write with decent grammar; that's reasonably commodified.

As with Uber, where drivers fear displacement by self-driving cars, the laborers here already fear being replaced by technology.

I assumed that, when artificial intelligence is good enough, Invisible would just cut the crowdsourced humans out of the equation and use chat bots, which you don’t have to pay per message, instead.
 
No, he said. “Having humans in the flow is the key to the service,” said Tabor. “There are things that only humans can respond to and understand, like inside jokes.”
 

Loneliness continues to be one of the great problems being tackled by technologists. See also: Her, The Diamond Age.

IoT

Daniel Miessler thinks we're underestimating the Internet of Things.

IoT isn’t about smart gadgets or connecting more things to the Internet. It’s about continuous two-way interaction between everything in the world. It’s about changing how humans and other objects interact with the world around them.
 
It will turn people and objects from static to dynamic, and make them machine-readable and fully interactive entities. Algorithms will continuously optimize the interactions between everyone and everything in the world, and make it so that the environment around humans constantly adjusts based on presence, preference, and desire.
 
The Internet of Things is not an Internet use case. Quite the opposite, the IoT represents the ultimate platform for human interaction with the physical world, and it will turn the Internet into a mere medium.
 

Great, succinct read on the key technical components that make up his vision for the IoT.

The Internet of Things is a terrible name, which doesn't help matters. Miessler suggests four alternatives, though none of them catches me on first read. I'd want something less tech-jargony, not terrifying (the phonetics of daemon aren't great, though it's a cool word), and shorter (a la the singularity). It sounds silly, but a name is all that gives a futuristic scenario like this a personality right now.

Light

The smartphone is the dominant camera in the world now, combining best-in-class portability with good-enough quality. I still own an SLR, though, because in some special situations the superior lens selection and larger sensor are worth the larger form factor (among other qualities).

Light is a company with a novel approach to bringing smartphone cameras up to SLR quality.

Rather than hewing to this one-to-one ratio, Light aims to put a bunch of small lenses, each paired with its own image sensor, into smartphones and other gadgets. They’ll fire simultaneously when you take a photo, and software will automatically combine the images. This way, Light believes, it can fit the quality and zoom of a bulky, expensive DSLR camera into much smaller, cheaper packages—even phones.
 
Light is still in the early stages, as it doesn’t yet have a prototype of a full product completed. For now it just has camera modules whose pictures can be combined with its software. But the startup says it expects the first Light cameras, with 52-megapixel resolution, to appear in smartphones in 2016.
 

Artist's rendering of what a Light camera array on a smartphone might look like.

I've been curious to see how smartphones continue to improve camera quality. This seems like one credible vector.
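
Light hasn't published the details of its fusion algorithm, but the core intuition behind combining many small captures is easy to sketch. Below is a toy Python illustration, assuming perfectly aligned frames and plain averaging; the simulated data is my own stand-in, not Light's actual pipeline, which would also need to align images and synthesize zoom. The point it demonstrates: averaging N independent noisy exposures cuts noise by roughly the square root of N, which is one way a cluster of small sensors can approach the quality of one big one.

```python
# Minimal sketch of multi-frame image fusion, assuming the frames are
# already perfectly aligned. NOT Light's actual algorithm, just an
# illustration of why N small-sensor exposures beat one:
# averaging N independent noisy samples cuts noise by about sqrt(N).
import numpy as np

def fuse_frames(frames):
    """Average a list of aligned HxWx3 float arrays into one image."""
    stack = np.stack(frames, axis=0).astype(np.float64)
    return stack.mean(axis=0)

# Simulated demo: one clean scene, 16 noisy captures of it.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 1, size=(480, 640, 3))        # stand-in "true" image
captures = [scene + rng.normal(0, 0.1, scene.shape)  # each small sensor is noisy
            for _ in range(16)]

fused = fuse_frames(captures)
print("single-frame noise:", np.std(captures[0] - scene))  # ~0.10
print("fused noise:       ", np.std(fused - scene))        # ~0.025, i.e. 0.10 / sqrt(16)
```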

Be the side(s) in your multi-sided market

Anyone who's tried to start a multi-sided market business (the most common being a two-sided marketplace, like eBay, with buyers and sellers) knows one of the first major challenges is the chicken-and-egg problem. In order to attract sellers, you need buyers, but in order to attract buyers, you need sellers. On a dating service, the two sides are obvious. And so on.

How to get out of this infinite loop of emptiness? Kickstart things by being one or more of the sides in your multi-sided market. In other words, fake it 'til you make it.

Seeding and Weeding
 
Dating services simulate initial traction by creating fake profiles and conversations. Users who come to the site see some activity and are incentivized to stay on. Marketplace sites may also show fake activity to attract buyers and sellers.
 
Seeding Demand
 
In the book The PayPal Wars, Eric M. Jackson talks about how PayPal grew a base of sellers who accepted PayPal by creating demand for the service among buyers. When PayPal figured out that eBay was its key distribution platform, it came up with an ingenious plan to simulate demand. It created a bot that bought goods on eBay and then insisted on paying for them using PayPal. Not only did sellers come to know about the service, they rushed onto it as it already seemed to be getting popular. The fact that it was way better than every other payment mechanism on eBay only helped repeat usage.
 

Other great war stories and tips within.

The end of routine work

In recessions of the 1960s and 1970s, routine jobs would fall during the recession but quickly snap back. But after the recession in 1990, something changed. Routine jobs fell and, as a share of the population, never recovered. In the recessions in 2001 and in 2007-09 they fell even further. The snapback never occurred, suggesting that many firms began coping with recessions by scrapping tasks that could be automated or more easily outsourced.
 
For his part, Mr. Siu thinks jobs have been taken away by automation, more than by outsourcing. While some manufacturing jobs have clearly gone overseas, “it’s hard to offshore a secretary.” These tasks more likely became unnecessary due to improving technology, he said.
 
In the late 1980s, routine cognitive jobs were held by about 17% of the population and routine manual jobs by about 16%. Today, that’s declined to about 13.5% and 12%. (The figures are not seasonally adjusted and so are displayed in the chart as 12-month moving averages, to remove seasonal fluctuations).

...

But they are not among the labor market’s pessimists who fear that robots will render humans obsolete. Their work shows the economy has continued to generate jobs, but with a focus on nonroutine work, especially cognitive. Since the late 1980s, such occupations have added more than 22 million workers.
 

By Josh Zumbrun. The idea that U.S. unemployment has jumped to a higher plateau because jobs have moved overseas or been replaced by technology is not a new one, but it's useful to see data and charts that support the claim.

Brad DeLong comments:

Note that these jobs are “routine” only in the sense that they involve using the human brain as a cybernetic control processor in a manner that was outside the capability of automatic physical machinery or software until a generation ago. In the words of Adam Smith (who probably garbled the story):
 
In the first fire-engines, a boy was constantly employed to open and shut alternately the communication between the boiler and the cylinder, according as the piston either ascended or descended. One of those boys, who loved to play with his companions, observed that, by tying a string from the handle of the valve which opened this communication to another part of the machine, the valve would open and shut without his assistance, and leave him at liberty to divert himself with his playfellows. One of the greatest improvements that has been made upon this machine, since it was first invented, was in this manner the discovery of a boy who wanted to save his own labour…
 
And Siu and Jaimovich seem to have gotten the classification wrong: A home-appliance repair technician is not doing a routine job; those jobs are disappearing precisely because they are not routine, require considerable expertise, are hence expensive, and so simply swapping out the defective appliance for a new one is becoming more and more attractive.
 

As any economist would prescribe, when thinking about one's career and education it has become more critical to focus on humans' comparative advantage versus computers. But I also recommend people focus on their own unique comparative advantage: any intelligent person can do many things, but what can you do better than almost anyone else? In winner-take-all markets, more common in this third industrial revolution, it's ideal to give yourself the best chance to be one of those winners, and consider it a bonus that those areas often overlap with one's personal passions (whatever you think of the 10,000-hour rule, most would agree it's easier to sustain through so many hours if one is more emotionally invested).

It's also best to accept that one will have to learn new skills many times in one lifetime. It used to be that once you finished college, the education phase of life was considered over. This is already obsolete for many. Even most programmers, supposedly among the workers most insulated from technological job obsolescence, have to learn new programming languages or technologies on the job every few years now.

In the future, education will generally be accepted to mean a lifelong process. Continuing education will be the default. An undergraduate degree will simply be a first milestone in signaling one's skills and sociability to potential employers. More time will have to be set aside to level up, and resources and services to support this lifelong education continue to proliferate. I view the need for lifelong learning as a positive. Who was it that said that you're young at heart to the degree that what you want to learn exceeds what you already know?

Seriously, who said that? I don't know. Add that to my list of things to learn.

Payments as social network

Generally speaking, Venmo peaks around 7 pm on the east coast of the US and stays fairly strong until about 3am, which is midnight on the west coast. The timing—and corresponding emoji—suggests the preponderance of Venmo transactions are people paying each other back for dinner and drinks.
 
Emoji use is markedly different at quieter times of the day. For example, the house emoji ranks highly from 7 am to 3pm EST. And the car and taxi emoji crack the top five between 5am and 8am EST.
 
...
 
Some of these data are skewed by messages containing a lot of emoji. The “pile of poop” emoji only holds the top spot in the midnight hour because a single payment used it 1,116 times.
 

On the usage patterns for the payment service Venmo, as interpreted by the emoji used to tag public transactions.
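
The analysis quoted above presumably bucketed public transaction notes by hour and ranked the emoji within each bucket. Here is a toy sketch of that tabulation in Python; the sample transactions and the crude emoji heuristic are my own stand-ins, not Venmo's actual data or API:

```python
# Hypothetical sketch of the hour-by-hour emoji ranking described above.
# The `transactions` list is made-up sample data, not Venmo's feed.
from collections import Counter
import unicodedata

def is_emoji(ch):
    # Crude heuristic: treat "Symbol, other" codepoints as emoji.
    return unicodedata.category(ch) == "So"

transactions = [
    {"hour": 19, "note": "🍕🍕🍺"},
    {"hour": 19, "note": "drinks 🍺"},
    {"hour": 8,  "note": "🚕 to work"},
]

by_hour = {}
for t in transactions:
    counts = by_hour.setdefault(t["hour"], Counter())
    counts.update(ch for ch in t["note"] if is_emoji(ch))

for hour, counts in sorted(by_hour.items()):
    print(hour, counts.most_common(3))
```

Note that counting raw occurrences is exactly how a single payment containing 1,116 poop emoji can hijack the midnight rankings; a more careful analysis might count each emoji once per transaction.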

I like to think I'm young at heart, but the fact that Venmo is also a social network makes me feel old. Payment transactions are one form of community interaction, sure, but looking through my Venmo feed feels like peering through a fog of data pollution. Perhaps the next generation of kids really does live with their default life privacy settings toggled to public.

Speaking of the pile of poop emoji, it seems only a matter of time until someone releases an app that allows you to broadcast when you are taking a poop. It should be a mobile app just called Poop. I leave it to the design geniuses at Apple to figure out what type of haptic feedback a poop notification should emit on the Apple Watch.

Skipping the web

Why India's Flipkart abandoned its mobile website:

Now, if you tack on a gigantic population with miserable internet connection speeds, the prospect of scaling up your website operations and back end to deal with not only the overload on it, but also the abysmal experience on the consumer end, whether it is mobile or desktop, is even more bleak. An app allows a user to stay logged in while updates and other information are efficiently and constantly downloaded, ready for consumption almost instantly. It is, in fact, perfect for low-bandwidth situations.
 

People often write of places like India or Africa bypassing landlines or PCs to skip ahead to technologies like wireless or smartphones, but I haven't heard of countries treating the web as one of those intermediate technologies to be hopped over.

Having spent lots of time working out of China, I see the sense in it. Internet connection speeds are really slow there, and loading web pages can be painful. When I worked out of Hulu's Beijing office, even with an upgraded pipe into the building, I found myself browsing the web a lot less, simply out of impatience.

For those of us who grew up in the U.S., the web was one of our first, and remains our longest-running, touchpoints to the internet. My first was newsgroups in college, and the web came along toward the end of my undergrad days. I can understand why so many in the U.S. are nostalgic about and defensive of the web as a medium. Seeing so much content and online interaction move behind the walls of social networks seems like an epic tragedy to many, and I empathize.

Many people in India, China, and other parts of the world, where bandwidth is low and slow, and where mobile phones are their one and only computer, have no room for such sentimentality. They may never have experienced the same heyday of the web, so they feel no analogous nostalgia for it as a medium. Path dependence matters here, as it does in lots of areas of tech, and one of the best ways to detect it is to widen your geographic scope of study outside the U.S. Asia is a wonderful comparison group, especially for me because I have so many friends and relatives there and because I still interact with them online at a decent frequency.

In the U.S., many tech companies were lauded as pioneers for going mobile first, while in Asia companies are already going mobile only. In some ways Asia feels like it lives in the past compared to the U.S., especially when one sees so many fast followers of successful U.S. technology companies, but in a surprisingly large number of ways, Asia lives in our near future.

Fiction's willful ignorance of tech

“And I just don’t know how to write a novel in which the characters can get in touch with all the other characters at any moment. I don’t know how to write a novel in the world of cellphones. I don’t know how to write a novel in the world of Google, in which all factual information is available to all characters. So I have to stand on my head to contrive a plot in which the characters lose their cellphone and are separated from technology.”
 

Ann Patchett on one of the chief problems of novel-writing today. The internet has disrupted lots of things, and one of them is dramatic information asymmetry.

Steve Himmer on one popular solution for fiction writers:

In literary fiction, the more popular solution seems to be relying on settings close to the present, but far enough back to avoid such inconvenience. Granted, the popularity of the 1970s, 1980s, and early-1990s as settings also owes plenty to generational shifts in literary production as people write about formative periods and the years they remember. But it also avoids any number of narrative problems and allows writers to go on telling stories in the way they are used to, rather than incorporating the present in ways that are difficult and disruptive. When I recently wondered on Twitter — one of those very disruptions — if we’ve reached the point of needing a term for this kind of setting, author Jared Yates Sexton suggested “the nostalgic present.” And while it’s easy enough to incorporate mention of that into this essay, where might a tweet fit into a novel? As dialogue, formatted like any other character’s utterance? Or embedded with timestamp and retweet count and all? What happens when our characters spend half their novel on Twitter, as so many of us spend our workdays? It’s a hard question, but not one that gets answered when writers aspire to be more like Andras Schiff than Lukas Kmit.
 

Himmer goes on to urge more writers to embrace the technology of today and incorporate it into their fiction, and I'm with him. It strikes me as lazy screenwriting when a script trots out one of those old-school answering machines just so the audience can hear the message being left; I haven't seen one of those in ages. If the message is meant to be missed, show us the caller as they leave it so we can hear it. If we're meant to watch the recipient react, give us the audio as they listen to the voicemail.

Younger fiction writers, in particular, have a great opportunity here. Embrace the massive role that all this technology plays in our lives and teach readers about how it is affecting us and how we might cope. Novels are supposed to, among other things, illuminate the human condition. No one using all this technology should feel any less human, but the absence of technology in art leaves an odd temporal void in which we can only travel to the past or the future or, as Sexton suggests in the passage above, the nostalgic present.