Evaluating mobile map designs

I saw a few links to this recent comparison by Justin O'Beirne of the designs of Apple Maps vs. Google Maps. In it was a link to previous comparisons he made about a year ago. If you're into maps and design, it's a fairly quick read with a lot of useful time series screenshots from both applications to serve as reference points for those who don't open both apps regularly.

However, the entire evaluation seems to come from a perspective at odds with how the apps are actually used. O'Beirne's focus is on evaluating these applications from a cartographic standpoint, almost as if they're successors to old wall-hanging maps or giant road atlases like the ones my dad used to plot out our family road trips when we weren't wealthy enough to fly around the U.S. 

The entire analysis is of how the maps look when the user hasn't entered any destination to navigate to (what I'll just refer to as the default map mode). Since most people use these apps as real-time navigation aids, especially while driving, the views O'Beirne dissects feel like edge cases (that's my hypothesis, of course; if someone out there has actual data on the percentage of time these apps are used for navigation versus not, I'd love to hear it, even if it's just directional to help frame the magnitude).

For example, much of O'Beirne's ink is spent on each application's road labels, often at really zoomed out levels of the map. I can't remember the last time I looked at any mobile mapping app at the eighth level of zoom. I've probably spent only a few minutes of my life, in total, across all of these apps at that level of the geographic hierarchy, and only to answer a trivia question or when visiting some region of the world on vacation.

What would be of greater utility to me, and what I've yet to find, is a design comparison of all the major mapping apps as navigation aids, a dissection of the UX in what I'll call their navigation modes. Such an analysis would be even more useful if it included Waze, which doesn't have the market share of Apple or Google Maps but which is popular among a certain set of drivers for its unique approach to evaluating traffic, among other things.

Such a comparison should analyze the visual comprehensibility of each app in navigation mode, which is very different from their default map views. How are roads depicted, what landmarks are shown, and how clear is the selected path when seen only in the occasional sidelong glance while driving, which is about as much visual engagement as a user can offer while operating a 3,500-pound vehicle? How does the app balance textual information with the visualization of the roads ahead, and what other POI's or real-world objects are shown? Waze, for example, shows me other Waze users in different forms depending on how many miles they've driven in the app and which visual avatars they've chosen.

Of course, the quality of the actual route would be paramount. It's difficult for a single driver to do A/B comparisons, but I still hope that someday someone will start running regular tests in which different cars, equipped with multiple phones, each logged into different apps, try to navigate to the same destination simultaneously. Over time, at some level of scale, such comparison data would be more instructive than the small sample size of the occasional self-reported anecdote.
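
To make that concrete, here's a minimal sketch of how such logged comparison data might be aggregated once fleet tests like these exist. Everything here is hypothetical: the field names, the trips, and the numbers are invented purely for illustration.

```python
from collections import defaultdict

def compare_apps(trips):
    """Rank navigation apps by mean door-to-door minutes.

    `trips` is a list of dicts like {"app": ..., "route": ..., "minutes": ...};
    these field names are invented for this sketch.
    """
    times = defaultdict(list)
    for trip in trips:
        times[trip["app"]].append(trip["minutes"])
    # Fastest mean travel time first.
    return sorted(
        ((app, sum(m) / len(m)) for app, m in times.items()),
        key=lambda pair: pair[1],
    )

# Four simultaneous runs of the same route, two per app (made-up numbers).
trips = [
    {"app": "Waze", "route": "Peninsula->SF", "minutes": 24},
    {"app": "Google Maps", "route": "Peninsula->SF", "minutes": 26},
    {"app": "Waze", "route": "Peninsula->SF", "minutes": 22},
    {"app": "Google Maps", "route": "Peninsula->SF", "minutes": 25},
]
print(compare_apps(trips))
```

With enough logged runs across routes and times of day, the same aggregation could be sliced per route or per traffic condition rather than reduced to a single overall mean.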

[In the future, when we have large fleets of self-driving cars, they may produce insights that only large sample sizes can validate, like UPS's "our drivers save time by never turning left." I'd love it if Google Maps, Apple Maps, or Waze published some of what they've learned about driving given their massive data sets, a la OKCupid, but most of what they've published publicly leans towards marketing drivel.]

Any analysis of navigation apps should also consider the voice prompts: how often does the map speak to you, how far in advance of the next turn are you notified, how clear are the instructions? What's the signal to noise? What are the default wording choices? Syntax? What voice options are offered? Both male and female voices? What accents?

Ultimately, what matters is getting to your destination in the safest, most efficient manner, but understanding how the applications' interfaces, underlying data, and algorithms influence those outcomes would be of value to the many people who now rely on these apps every single day to get from point A to B. I'm looking for a Wirecutter-like battle of the navigation apps; may the best system win.

    The other explicit choice O'Beirne makes is noted in a footnote:

    We’re only looking at the default maps. (No personalization.)

    It is, of course, difficult to evaluate personalization of a mapping app since you can generally only see how each map is personalized for yourself. However, much of the value of Google Maps lies in its personalization, or what I suspect is personalization. Given where we are in the evolution of many products and services, to analyze them in their non-personalized states is to disregard their chief modality.

    When I use Google Maps in Manhattan, for example, I notice that the only points of interest (POI's) the map shows me at various levels of zoom seem to be places I've searched for most frequently (this is in the logged-in state, which is how I always use the app). Given Google's reputation for being a world leader in crunching large data sets, it would be surprising if they weren't selecting POI labels, even for non-personalized versions of their maps, based on what people tend to search for most frequently.
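
If Google really is picking POI labels from aggregate search frequency, the core selection step might look something like this sketch. The POI names, counts, and per-zoom label budget are all invented for illustration; nothing here reflects Google's actual system.

```python
from collections import Counter

# Hypothetical aggregate search counts for a patch of Manhattan.
search_counts = Counter({
    "Empire State Building": 9400,
    "Katz's Delicatessen": 3100,
    "Corner bodega": 40,
    "Neighborhood dry cleaner": 15,
})

def labels_for_zoom(counts, zoom, budget_per_level=2):
    """Label the top-N most-searched POIs, where N grows as the
    user zooms in (an assumed, simplified budget rule)."""
    budget = zoom * budget_per_level
    return [name for name, _ in counts.most_common(budget)]

# Zoomed way out: only the most-searched places earn a label.
print(labels_for_zoom(search_counts, zoom=1))
```

A personalized variant would simply blend the global counts with the logged-in user's own search history before ranking, which would explain why my map seems to surface the places I look up most.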

    In the old days, if you were making a map to be hung on the wall, or for a paper map or road atlas, what you chose as POI's would be fixed until the next edition of that map. You'd probably choose what felt like the most significant POI's based on reputation, ones that likely wouldn't be gone before the next update. Eiffel Tower? Sure. Some local coffee shop? Might be a Starbucks in three months, best leave that label off.

    Now, maps can be updated dynamically. There will always be those who find any level of personalization creepy, and some of it is, but I also find the lack of personalization to be immensely frustrating in some services. That I search for reservations in SF on OpenTable and receive several hundred hits every time, sorted in who knows what order, instead of results that cluster my favorite or most frequently booked restaurants at the top, drives me batty.

    When driving, personalization is even more valuable because it's often inconvenient or impossible to type or interact with the device for safety reasons. It's a great time saver to have Waze guess where I'm headed automatically ("Are you driving to work?" it asks me every weekday morning), and someday I just want to be able to say "give me directions to my sister's" and have it know where I'm headed.

    My quick first-person assessment, despite the small sample size caveats noted earlier:

    • I know that Apple Maps, as the default on iOS, has the market share lead on iPhone by a healthy margin. Still, I'll never get past the time the app took me to a dead end while I was on the way to a wedding, and I've not used it since except to glance at the design. It may have the most visually pleasing navigation mode aesthetic, but I don't trust its directions at the tails. Some products are judged not on their mean outcome but on their handling of the tails. For me, navigation is one of those.
    • It's not clear that Apple Maps should have a data edge over Google Maps and Waze (Google bought Waze but has kept the app separate). Most drivers use it on the iPhone because it's the default, but Google got a head start in this space and also has a fleet of vehicles on the road taking Street View photos. Eventually, Google may augment that fleet with self-driving cars.
    • I trust Google Maps directions more than those of Apple Maps. However, I miss the usability of the first version of Google Maps, which came out on iOS way back with the first iPhone. I'd heard rumors Apple built that app for Google, but I'm not sure if that's true. The current flat design of Google Maps often strands me in a state in which I have no idea how to initiate navigation. I'd like to believe I'm a fairly sophisticated user and yet I sometimes sit there swiping and tapping in Google Maps like an idiot, trying to get it to start reading turn by turn directions. Drives me batty.

    I use Waze the most when driving in the Bay Area or wherever I trust that there are enough other drivers using Waze that it will offer the quickest route to my destination. That seems true in most major metropolitan areas. I can tell a lot of users in San Francisco use Waze because sometimes, when I have to drive home to the city from the Peninsula, I find myself in a line of cars exiting the highway and navigating through some random neighborhood side street, one that no one would visit unless guided by an algorithmic deity.

    I use Waze with my phone mounted in one of those clamps that holds the phone at eye level above my dashboard, because the default Tesla navigation, still based on Google Maps, is notoriously oblivious to traffic when selecting a route and estimating an arrival time. Since I use Waze more than any other navigation app, I have more specific critiques.

    • One reason I use Waze is that it seems the quickest to respond to temporary buildups of traffic. I suspect it's because the UI has a dedicated, always visible button for reporting such traffic. Since I'm almost always the driver, I have no idea how people are able to do such reporting, but either a lot of passengers are doing the work or lots of drivers are able to do so while their cars are stuck in gridlock. The other alternative, that drivers are filing such reports while their cars are in motion, is frightening.
    • I don't understand the other social networking aspects of Waze. They're an utter distraction. I'm not immune to the intrinsic rewards of gamification, but in the driving context, where I can't really do much more than glance at my phone, it's all just noise. I don't feel a connection to the other random Waze drivers I see from time to time in the app, all of whom are depicted as various pastel-hued cartoon sperm. In wider views of the map, all the various car avatars just add a lot of visual noise.
    • I wish I could turn off some of the extraneous voice alerts, like "Car stopped on the side of the road ahead." I'm almost always listening to a podcast in the background when driving, and the constant interruptions annoy me. There's nothing I can do about a car on the side of the road; I wish I could customize which alerts I have to hear.
    • The ads that drop down and cover almost half the screen are not just annoying but dangerous, as I have to glance over and then swipe them off the screen. That, in and of itself, is disqualifying. But beyond that, even while respecting the need for companies to make money, I can't imagine these ads generate a lot of revenue. I've never looked at one. If the ads are annoying, the occasional survey asking me which ads/brands I've seen on Waze is doubly so. With Google's deep pockets behind Waze, there must be a way to limit ads to those moments when they're safe or clearly requested, for example when a user is researching where to get gas or a bite to eat. When a driver has hands on the wheel and is guiding a giant mass of metal at high velocity, no cognitive resources should be diverted to recalling which brands appeared in the app.
    • Waze still doesn't understand how to penalize unprotected left turns, which are almost completely unusable in Los Angeles at any volume of traffic. At rush hour it's a fatal failure, like being ambushed by a video game foe that can kill you with one shot and no advance warning. As long as it remains unfixed, I use Google Maps when in LA. I can understand why knowledge sharing between the two companies may be limited by geographic separation despite their being part of the same umbrella company, but that the apps don't borrow more basic lessons from each other seems a shame.
    • I use Bluetooth to listen to podcasts on Overcast when driving, and since I downloaded iOS 11, that connection has been very flaky. Also, if I don't have a podcast on and Waze gives me a voice cue, the podcast starts playing. I've tried quitting Overcast, and the podcast still starts playing every time Waze speaks to me. Things had reached a good place in which Overcast would pause while Waze spoke so the two wouldn't overlap, but since iOS 11 even that works inconsistently. This is just one of the bugs that iOS 11 has unleashed upon my phone; I really regret upgrading.

    I miss first-gen Google Maps for iOS

    This is an oldie, but still relevant: an informative deep dive into the design choices of Google Maps and Apple Maps on iOS.

    I wish I had screens from the first version of Google Maps that shipped on the iPhone, a version that was rumored to have been built by Apple for Google. To me, that's still the most usable mapping app ever for iOS, and all subsequent versions, including both of the latest versions of Google Maps and Apple Maps, are more complex. The new maps may do more and offer more functionality, but if you just wanted to quickly get directions to a particular place, nothing beat the first-gen Google Maps for iOS.

    Part of this is the result of the new flat design aesthetic, which is sleek but often opaque. In many ways, touchscreen user interfaces seem to have approached a local maximum in which the only innovation is coming up with new icons that users must learn. At some point, we're just substituting new abstractions and not making significant leaps forward in usability. Most apps are better, on average, than the first generation of mobile apps, but the best-designed apps today don't feel much better than the best apps from the dawn of the iOS App Store.

    These days, the great leap forward in interface design feels like it's the complete removal of the abstraction of traditional software design. The interface that feels closest to achieving that in the near future is text, most often found in some sort of messaging interface. Following on its heels, with even greater potential as a democratic UI medium, is voice.

    Visualizing open-ended travel via Google Flights

    While I'm on this break from working, I've been planning some travel, and my new favorite site for open-ended travel exploration is Google Flights. Just enter your starting airport and a start date (and optionally an end date), leave the destination blank, and Google Flights will return a map of the world with the lowest one-way or round-trip ticket prices for any destination (I assume this is powered by data from their ITA acquisition).

    It looks like this (click it for a larger view).

    You can use a price filter slider and drag it down in price to reduce the number of destinations. Now they just need to add in hotel pricing for all-in open-ended travel budgeting.

    It's a luxury to be able to plan travel this way, but for those rare times when you can, this is a fun way to do it. I was using this just a few weeks ago and found a random discount fare to Taiwan, and now I'm onboard a flight there.

    Once we all are wearing virtual reality goggles we'll undoubtedly be able to spin a virtual globe with this visualization mapped on top of it. Though I suppose, at that point, perhaps we'll just travel places virtually, for much lower prices.

    Search zeitgeist, sex edition

    This piece analyzing data on how people use Google to search for topics related to sex is a fun tour of the submerged portion of the human psyche.

    Of particular interest to me was this:

    Women also show a great deal of insecurity about their behinds, although many women have recently flip-flopped on what it is they don’t like about them.

    In 2004, in some parts of the United States, the most common search regarding changing one’s butt was how to make it smaller. The desire to make one’s bottom bigger was overwhelmingly concentrated in areas with large black populations. Beginning in 2010, however, the desire for bigger butts grew in the rest of the United States. This interest has tripled in four years. In 2014, there were more searches asking how to make your butt bigger than smaller in every state. These days, for every five searches looking into breast implants in the United States, there is one looking into butt implants.

    Does women’s growing preference for a larger behind match men’s preferences? Interestingly, yes. “Big butt porn” searches, which also used to be concentrated in black communities, have recently shot up in popularity throughout the United States.

    In a recent post, I wondered what factors drive the frequent shifts in the ideal female body. Are women driving the change, or are they reacting to male preferences? And what part of the change has roots in culture rather than evolution?

    More and more, I suspect the cultural impact to be significant. If female bodies were purely evolutionary signals of physical fitness, the ideal wouldn't change so frequently from decade to decade.

    Moravec's Paradox and self-driving cars

    Moravec's Paradox:

    ...the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky, and others in the 1980s. As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."

    Linguist and cognitive scientist Steven Pinker considers this the most significant discovery uncovered by AI researchers. In his book The Language Instinct, he writes:

    The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.

    I thought of Moravec's Paradox when reading two recent articles about Google's self-driving cars, both by Lee Gomes.

    The first:

    Among other unsolved problems, Google has yet to drive in snow, and Urmson says safety concerns preclude testing during heavy rains. Nor has it tackled big, open parking lots or multilevel garages. The car’s video cameras detect the color of a traffic light; Urmson said his team is still working to prevent them from being blinded when the sun is directly behind a light. Despite progress handling road crews, “I could construct a construction zone that could befuddle the car,” Urmson says.

    Pedestrians are detected simply as moving, column-shaped blurs of pixels—meaning, Urmson agrees, that the car wouldn’t be able to spot a police officer at the side of the road frantically waving for traffic to stop.

    The car’s sensors can’t tell if a road obstacle is a rock or a crumpled piece of paper, so the car will try to drive around either. Urmson also says the car can’t detect potholes or spot an uncovered manhole if it isn’t coned off.

    More within on some of the engineering challenges still unsolved.

    In his second piece, at Slate, Gomes notes something about self-driving cars that people often misunderstand about how they work (emphasis mine):

    ...the Google car was able to do so much more than its predecessors in large part because the company had the resources to do something no other robotic car research project ever could: develop an ingenious but extremely expensive mapping system. These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway.

    That might not seem like such a tough job for the company that gave us Google Earth and Google Maps. But the maps necessary for the Google car are an order of magnitude more complicated. In fact, when I first wrote about the car for MIT Technology Review, Google admitted to me that the process it currently uses to make the maps is too inefficient to work in the country as a whole.

    To create them, a dedicated vehicle outfitted with a bank of sensors first makes repeated passes scanning the roadway to be mapped. The data is then downloaded, with every square foot of the landscape pored over by both humans and computers to make sure that all-important real-world objects have been captured. This complete map gets loaded into the car's memory before a journey, and because it knows from the map about the location of many stationary objects, its computer—essentially a generic PC running Ubuntu Linux—can devote more of its energies to tracking moving objects, like other cars.

    But the maps have problems, starting with the fact that the car can’t travel a single inch without one. Since maps are one of the engineering foundations of the Google car, before the company's vision for ubiquitous self-driving cars can be realized, all 4 million miles of U.S. public roads will need to be mapped, plus driveways, off-road trails, and everywhere else you'd ever want to take the car. So far, only a few thousand miles of road have gotten the treatment, most of them around the company's headquarters in Mountain View, California. The company frequently says that its car has driven more than 700,000 miles safely, but those are the same few thousand mapped miles, driven over and over again.

    The common conception of self-driving cars is that they drive somewhat like humans do. That is, they look around at the road and make decisions on what their camera eyes “see.” I didn't realize that the cars were depending so heavily on pre-loaded maps.

    I'm still excited about self-driving technology, but my expectations for the near to medium term have become much more modest. I once imagined I could just sit in the backseat of a car and have it drive me to any destination while I futzed around on my phone. It seems clear now that for the foreseeable future, someone always needs to be in the driver's seat, ready to take over at a moment's notice, and the millions of hours of additional leisure time that might be returned to society are not going to materialize this way. We so often throw around self-driving cars as if they're an inevitability, but it may be that the last few problems to solve in that space are the most difficult ones to surmount. Thus Moravec's Paradox: it's “difficult or impossible to give [computers] the skills of a one-year-old when it comes to perception and mobility.”

    What about an alternative approach, one that pairs humans with computers, a formula for many of the best solutions at this stage in history? What if the car could be driven by a remote driver, almost like a drone?

    We live in a country where the government assumes anyone past the age of 16 who passes a driver test can drive for life. Having nearly been run over by many an elderly driver in Palo Alto during my commute to work, I'm not sure that's such a sound assumption. Still, if you are sitting in a car and don't have to drive it yourself, and as long as you get where you need to go, whether a computer does the driving or a remote pilot handles the wheel, you still get that time back.

    Adventures in teaching self-driving cars

    For complicated moves like that, Thrun’s team often started with machine learning, then reinforced it with rule-based programming—a superego to control the id. They had the car teach itself to read street signs, for instance, but they underscored that knowledge with specific instructions: “stop” means stop. If the car still had trouble, they’d download the sensor data, replay it on the computer, and fine-tune the response. Other times, they’d run simulations based on accidents documented by the National Highway Traffic Safety Administration. A mattress falls from the back of a truck. Should the car swerve to avoid it or plow ahead? How much advance warning does it need? What if a cat runs into the road? A deer? A child? These were moral questions as well as mechanical ones, and engineers had never had to answer them before. The DARPA cars didn’t even bother to distinguish between road signs and pedestrians—or “organics,” as engineers sometimes call them. They still thought like machines.

    Four-way stops were a good example. Most drivers don’t just sit and wait their turn. They nose into the intersection, nudging ahead while the previous car is still passing through. The Google car didn’t do that. Being a law-abiding robot, it waited until the crossing was completely clear—and promptly lost its place in line. “The nudging is a kind of communication,” Thrun told me. “It tells people that it’s your turn. The same thing with lane changes: if you start to pull into a gap and the driver in that lane moves forward, he’s giving you a clear no. If he pulls back, it’s a yes. The car has to learn that language.”

    From Burkhard Bilger's New Yorker piece on Google's self-driving car. The engineering issues they've had to deal with are fascinating.
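
The "machine learning reinforced with rule-based programming" pattern Thrun describes, a superego to control the id, can be sketched in a few lines. This is a toy illustration, not Google's actual stack; the function and signal names are all invented.

```python
def choose_action(learned_action, detected_sign, sign_confidence):
    """A learned policy proposes an action; hard-coded rules can veto it."""
    # The rule-based superego: "stop" means stop, no matter what the
    # learned policy would prefer.
    if detected_sign == "stop":
        return "stop"
    # If perception is unsure what it saw, fall back to a cautious default.
    if sign_confidence < 0.5:
        return "slow"
    # Otherwise, defer to whatever the learned policy suggested.
    return learned_action

print(choose_action("proceed", detected_sign="stop", sign_confidence=0.99))
```

The point of the split is that the learned component can keep improving from replayed sensor data while the rules guarantee the handful of behaviors that must never be left to statistics.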

    As many have noted, legal or regulatory risk may be the largest obstacle to seeing self-driving cars on our roads in volume. To counter that, I hypothesize that all self-driving cars will ship with a black box, like airplanes, and that their cameras will record a continuous feed of video that keeps overwriting itself, perhaps a loop of the most recent 30 minutes of driving at all times, along with key sensor readings. That way, if someone sees the self-driving sensors on a car, they can't just back into the self-driving car or hurl themselves across a windshield just to get a big settlement from Google.
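
The overwrite loop I'm imagining is just a ring buffer: new footage pushes the oldest footage out, so storage stays fixed while the most recent window survives. A minimal sketch, assuming fixed-length ten-second clips (the segment length and class names are my own invention):

```python
from collections import deque

SEGMENT_SECONDS = 10   # assumed length of one recorded clip
LOOP_MINUTES = 30      # keep only the most recent half hour

class BlackBox:
    """A continuously overwriting recorder: appending a new segment
    silently evicts the oldest once the buffer is full."""

    def __init__(self):
        capacity = LOOP_MINUTES * 60 // SEGMENT_SECONDS  # 180 segments
        self.segments = deque(maxlen=capacity)

    def record(self, segment):
        self.segments.append(segment)

    def dump(self):
        """What an investigator would pull after an incident."""
        return list(self.segments)

box = BlackBox()
for i in range(200):   # simulate a bit more than 33 minutes of driving
    box.record(f"clip-{i}")
print(len(box.dump()), box.dump()[0])
```

A real recorder would hold encoded video plus the sensor readings, but the key property is the same: the device never fills up, and the last half hour is always recoverable.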

    In fact, as sensors and video recording devices come down in cost, it may become law that all cars come with such accessories, self-driving or not, making it much easier to determine fault in car accidents. The same cost/weight improvements in video tech may make it so Amazon drones are also equipped with a continuously recording video camera, the better for determining who may have brought it down with a rock to steal its payload.

    Perhaps Google will take the continuous video feeds as a crowd-sourced way to update its street maps. That leads, of course, to the obvious drawback to such a scenario: the privacy concerns over how Google would use the data and video from the cars. That's a cultural issue, however, and seems more tractable than the legal one.

    Google Autocomplete knows

    "Who knows what evil lurks in the hearts of men? The Shadow knows." 

    The modern equivalent, one might say, is Google Autocomplete. One ad campaign capitalizes on that by using Google Autocomplete suggestions to highlight gender inequality.


    More of the images can be seen here.

    I love Google Autocomplete; it is like some modern crowdsourced, mass-distributed free art installation. I have spent hours putting random queries in there just to plumb the collective mind of humanity.

    A surprising corporate giant

    What company, according to Fortune, is the eighth largest employer in the world, with over 549,000 employees globally?

    The answer shocked me: Volkswagen. That's just the tip of the iceberg in terms of fascinating tidbits from this mini profile.

    Efficiency experts will tell you that on an employee-per-vehicle basis, Volkswagen looks hopelessly inefficient. Financial analysts will tell you that the company woefully trails its competitors on a revenue-per-employee basis. But VW will tell you that it makes more money than any other automaker – by far.

    While VW's stated goal is to become the world's largest car company by 2018, it's already there if you measure it by revenue and profits. Its revenue of $200 billion is greater than that of every other OEM. Last year's operating profit of $14 billion is the kind of performance you expect from Big Oil companies, not automakers.

    How can this be possible? How can VW look so uncompetitive from a productivity standpoint, yet out-earn all of its competitors?

    Ah, that's the magic of VW's corporate structure. While business schools teach future MBAs that centralized operations can cut cost by eliminating overlapping work and duplication, VW maintains strongly decentralized operations with lots of overlap. While business schools preach the benefits of outsourcing to cut cost, VW is very vertically integrated.

    Anytime a car company buys a component from a supplier, that supplier has to charge a profit. If an automaker can make those components in-house, it gets to keep that profit. VW is building a lot of components in-house.

    If an automaker truly wants to dominate the market, it has to accept a certain amount of overlap and duplication. It just goes with the territory. To dominate you need multiple brands, and VW has more than anyone else, which admittedly overlap at the edges. But to VW they are more than just brands.

    All of VW's brands (VW, Audi, Seat, Skoda, Bentley, Lamborghini, Ducati, Porsche, Bugatti, MAN, Scania, and VW Commercial) are treated as stand-alone companies. They have their own boards of directors, their own profit & loss statements, and their own annual reports. They even have their own separate design, engineering and manufacturing facilities. Yes, they do share some platforms and powertrains and purchasing, but other than that they're on their own.

    Anyone who works in technology will hear an echo in much of this strategy. Volkswagen's model of running all its brands as independent companies is an example of how the biggest tech companies try to push decision-making to the edges, to the teams running a variety of product lines, as a way of trying to remain entrepreneurial, innovative, and nimble.

    The way Volkswagen has vertically integrated is reminiscent of the way Apple has, over time, taken over more and more of the computing value chain, down to opening its own retail stores. Given how Samsung is also vertically integrating and competing head-on with Apple in the mobile computing market, it would be surprising if Apple didn't stop sourcing chips from Samsung and take its business elsewhere, to a partner less vertically integrated, like Taiwan Semiconductor.

    Volkswagen, by dint of its vertical integration, can capture value wherever it occurs in the value chain, and as the source of value shifts, as it often does over the life cycle of technology products, Volkswagen retains its cut. On a related note, look at the last chart in this post at Asymco. Stunningly, Samsung makes more operating income from Android than Google does! In this mobile computing war, Samsung is making money off of both Google and Apple. After Apple, it's difficult to name another company that has profited more from the mobile computing revolution.

    Volkswagen is the answer to the subject of this post, but Samsung is nearly as shocking a dark horse of a corporate behemoth.