Your Users Are Irrational

Dave Feldman
11 min read · Jan 30, 2018

“Your app is broken: I’m not receiving messages!” said a worrying number of people about our app back in 2014. What was wrong? Nothing showed up in our analytics, nor were we able to replicate the problem ourselves.

Eventually we figured it out. Those users had disabled notifications. That’s right: They said they didn’t want notifications; then, when they didn’t get any notifications, they reported it as a bug. What were they thinking?

User errors are the bane of a developer’s existence. We’ve all experienced that strange blend of relief and rage when we realize a thorny issue is, in fact, user misunderstanding. It even has its own acronym, PEBKAC (“Problem Exists Between Keyboard and Chair”), now delightfully antiquated since there’s often neither a keyboard nor a chair involved.

It’s easy to blame the user, or to assume they’re clueless. But at some point we find ourselves making the same mistakes they do, and those excuses start to ring hollow. Users aren’t clueless. They’re irrational — and we can use that to build better products.

Until the mid-1900s, economics rested on a notion of humans as rational actors — individuals who consistently make logical decisions in their own best interests. This led to some interesting conclusions: that stock-market bubbles can’t exist, or that guests at a dinner party will stop eating the hors d’oeuvres so they have room for dinner.

In recent decades, some economists and psychologists have challenged this — led by pioneers like Daniel Kahneman, Amos Tversky, Dan Ariely, and Richard Thaler (winner of last year’s Nobel Prize in Economics). Their work is exciting not merely for identifying human irrationality, but for showing how predictable that irrationality is — and therefore how it can dramatically affect economic outcomes, even when considered over large groups of people.

In their 2008 book Nudge, Thaler and Cass Sunstein provide a framework for using this knowledge to improve public policy. They introduce the concept of choice architecture:

If you design the ballot voters use to choose candidates, you are a choice architect. If you are a doctor and must describe the alternative treatments available to a patient, you are a choice architect. If you design the form that new employees fill out to enroll in the company health care plan, you are a choice architect. If you are a parent, describing possible educational options to your son or daughter, you are a choice architect.

Thaler, Richard H.; Sunstein, Cass R. Nudge: Improving Decisions About Health, Wealth, and Happiness (p3).

Product designers are choice architects, too. Virtually every part of a product experience revolves around choices — from blatant ones like “Do you want to enable notifications?” to subtler ones like which font you use for your PowerPoint presentation.

Often, we leave choice up to the user. After all, if we’re not being evil, we don’t want to manipulate them. We strive for full disclosure or “informed choice.” We explain what we’re doing and ensure the user has control at all times — in part because, when we ask them, that’s what users want: the context and information needed to make their own, rational decisions.

I think this mirrors the situation in economics half a century ago. Much of the time we’re like classical economists, expecting rational actors when in fact we’re addressing humans.

For example: One foundational behavioral-science principle is the power of the default: the impact that a default option can have on a choice. Thaler and Sunstein describe a 2003 study of organ-donation preferences. All participants were asked if they wanted to be an organ donor, but each experimental group saw a different default: one got an opt-in (effectively, the “organ donor” box was unchecked), another an opt-out (the box was checked).

In a world of rational actors, the default would have no impact — especially in an explicit study (people are paying more attention) of something as personal as organ donation (people are really paying more attention). But, “The default mattered — a lot. When participants had to opt in to being an organ donor, only 42 percent did so. But when they had to opt out, 82 percent agreed to be donors.” (Thaler & Sunstein p180)

That’s a huge difference. And it holds outside the lab, too: “In Germany, which uses an opt-in system, only 12 percent of the citizens gave their consent, whereas in Austria, nearly everyone (99 percent) did.” (Thaler & Sunstein p180)

Many decisions are less explicit…but no less subject to cognitive bias. Suppose you’re a designer on Google Docs. You’d like to help your users create attractive, readable documents, even if they’re not designers. Today, when they insert a table, it looks like this:

Default appearance of a Google Docs table

As tables go, it’s not bad. But it could be better: the dark borders serve as chartjunk, creating visual noise that distracts from the data; and the header isn’t distinguished from the content. So, you decide to change the default to this:

Revised table style

Much better — well, I think so anyway. And the power of the default means that going forward, most GDocs tables will be styled that way.

But doesn’t this manipulate users into adopting your preferred aesthetic instead of their own? Might it be better to stick with today’s more generic default, inviting users to express themselves however they like? In a word, no:

The first misconception is that it is possible to avoid influencing people’s choices. In many situations, some organization or agent must make a choice that will affect the behavior of some other people. There is, in those situations, no way of avoiding nudging in some direction, and whether intended or not, these nudges will affect what people choose…people’s choices are pervasively influenced by the design elements selected by choice architects. (Thaler & Sunstein p10)

We can’t help manipulating users; that’s why the default is so powerful. So we’re going to manipulate most people into a table style. A few will change the default, but the majority won’t.

Can we eliminate the default and force a choice? In fact, the organ-donation study had another group do just that. They faced an explicit choice — like two radio buttons with neither selected. There, 79% of participants agreed to be donors.

We can do the same with our GDocs design:

Eliminating the default with an explicit choice

But we don’t really want that, do we? It adds friction to an otherwise one-step process. It interrupts the user’s train of thought with cognitive load: She’s thinking, “Give me some boxes into which to put this data,” and we’re asking about colors and fonts.

Remember, too, that we’re not denying choice to anyone. Our user can always format the table to her liking, just as participants in the organ-donation survey could change their selections. The question is not whether choice is available, but whether it’s required — and, if not, what the easy path is.

Still, we’ve seen that the default can be powerful. If we know we can use it to influence a choice, which default do we pick? The one that serves our business goals? The public good? Thaler and Sunstein propose the notion of libertarian paternalism to guide us:

The libertarian aspect of our strategies lies in the straightforward insistence that, in general, people should be free to do what they like — and to opt out of undesirable arrangements if they want to do so…

…The paternalistic aspect lies in the claim that it is legitimate for choice architects to try to influence people’s behavior in order to make their lives longer, healthier, and better. In other words, we argue for self-conscious efforts…to steer people’s choices in directions that will improve their lives…a policy is “paternalistic” if it tries to influence choices in a way that will make choosers better off, as judged by themselves. (Thaler & Sunstein p5)

In other words, we try to nudge people in the direction they’d want to be nudged, if they could examine the choice from a rational vantage point. And we give them the opportunity to override the nudge, should they choose to do so.

Libertarian paternalism won’t make our design choices for us, but it does help frame them. In the organ-donation case, it may be a tough call. There are strong opinions on both sides; do we therefore avoid a default? Do we assume those with strong opinions will make an explicit choice no matter what, and optimize for the greatest societal good with an opt-out?

There’s more to choice architecture than selecting a default — which brings us back to our messaging app. The problem wasn’t merely that people had disabled notifications; it was that (a) they’d made a choice that wasn’t what they actually wanted, and (b) confronted with that choice later on, they still defended it.

Standard iPhone permission dialog

Apple places tight constraints on how an app requests a permission. It can trigger a dialog like the one at left but has no control over the dialog’s appearance or content. And it can make the request once and only once.
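To make that constraint concrete, here’s a minimal sketch of the request using the UserNotifications framework (the standard API for this; the option set and logging are just illustrative):

```swift
import UserNotifications

// Trigger the system permission dialog. The OS owns the dialog's
// appearance and copy entirely, and it will only ever show it once.
UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .sound, .badge]) { granted, error in
    // All the app learns is the outcome. If the user declines,
    // there's no second prompt; they'd have to change it in Settings.
    print("Notifications granted: \(granted)")
}
```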

For rational users, Apple’s approach is near-perfect. It’s a one-time choice; the developer has the opportunity to offer it at a contextually relevant moment; and the lack of an explicit default protects users from making the wrong decision by accident.

But a rational user wouldn’t have the problem ours did: declining notifications for a messaging app, when it was clear they wanted to receive them. So what was going on?

In Thinking, Fast and Slow, Daniel Kahneman divides the human brain into two systems, helpfully labeled System 1 and System 2:

System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.

System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration…

Although System 2 believes itself to be where the action is, the automatic System 1 is the hero of the book. I describe System 1 as effortlessly originating impressions and feelings that are the main sources of the explicit beliefs and deliberate choices of System 2.

Kahneman, Daniel. Thinking, Fast and Slow (Kindle Locations 273–276). Farrar, Straus and Giroux.

System 1 is remarkable for its speed and apparent effortlessness, but there’s a cost. It processes, it pattern-matches, it associates — but it doesn’t really reason. And because it’s so much faster than System 2, even when we think we’re being rational, we may merely be rationalizing the conclusions of System 1.

David Eagleman provides a powerful example in his book Incognito:

[A patient] had suffered quite a bit of damage to her brain tissue…Dr. Karthik Sarma had noticed the night before that when he asked her to close her eyes, she would close only one and not the other. So he and I went to examine this more carefully. When I asked her to close her eyes, she said “Okay,” and closed one eye, as in a permanent wink. “Are your eyes closed?” I asked. “Yes,” she said. “Both eyes?” “Yes.” I held up three fingers. “How many fingers am I holding up, Mrs. G.?” “Three,” she said. “And your eyes are closed?” “Yes.” In a nonchallenging way I said, “Then how did you know how many fingers I was holding up?” An interesting silence followed.

Eagleman, David. Incognito: The Secret Lives of the Brain (p136). Knopf Doubleday Publishing Group.

Consider our notification problem in terms of these systems. A user downloads our messaging app. Her System 2 is focused on the task at hand: exploring this new product, perhaps using it to message a friend. So when the permission dialog appears, System 1 handles it. System 1 recognizes the dialog as a permission request — and associates it with the overall annoyance of constantly dealing with such interruptions. It sees the word “notifications” (but not “messages”), which recalls the constant deluge of intrusive notifications on her lock screen. It remembers various news reports on personal data and privacy violations. Faced with a web of negative impressions, it rejects the permission and moves on. System 2 may be vaguely aware of this, or may not even notice.

The problem is compounded by the fact that some users may not fully understand the difference between messages and notifications. But even those who are savvy can fall prey to the situation above, because System 1 is doing fast association rather than slow, linear consideration.

How might this perspective influence us as product designers? We’ve already learned to be deliberate about how we request permissions — for instance, asking for Photos access when the user tries to post a photo rather than at launch; or preceding the OS-level permission dialog with an app-specific one to provide context:

Setting context: Spark Mail and Facebook Messenger
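As a rough sketch of that pre-prompt pattern (my own illustration, not Spark’s or Messenger’s actual implementation), the app shows its own explainer first and only spends the one-shot OS request if the user opts in; the function name and copy here are hypothetical:

```swift
import UIKit
import UserNotifications

// Hypothetical helper: frame the choice in terms of messages, not
// "notifications", before triggering the one-shot OS dialog.
func askForMessageNotifications(from viewController: UIViewController) {
    let alert = UIAlertController(
        title: "Want to know when friends message you?",
        message: "We'll notify you about new messages, nothing else.",
        preferredStyle: .alert)
    alert.addAction(UIAlertAction(title: "Not now", style: .cancel))
    alert.addAction(UIAlertAction(title: "Notify me", style: .default) { _ in
        // Only now do we spend the single OS-level request.
        UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .sound, .badge]) { granted, _ in
            // Record the outcome so we can offer a path to Settings later.
        }
    })
    viewController.present(alert, animated: true)
}
```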

We can refine these designs by remembering that our audience is System 1. That suggests two approaches:

  1. Speak to System 1’s associative, pattern-matching nature. Facebook has done this with the word messages, whose associations will be more positive and more relevant than notifications. The please may help, too, as an emotional appeal. Spark goes further with the word smart and the image of what you’ll get if you say yes.
  2. Get System 1 to alert System 2. I suspect Spark does this better: the dialog is a bit non-standard, whereas Messenger’s dialog looks almost exactly like the OS permission dialog. But even Spark could probably take this further by presenting something truly unfamiliar. It’s an opportunity to break one’s design system deliberately.

Given how heavily developers are constrained here, and the backflips everyone does to address those constraints, I would love to see Apple revisit this from a behavioral-science perspective. (This is especially true for notifications: they don’t expose user data to developers, the permission is hard to evaluate in the abstract, and it would be easy to offer contextual controls once a notification arrives — a practice that also better aligns developer and user incentives.)

The power of the default and Systems 1 & 2 are the tip of the iceberg, but I hope it’s clear what an impact behavioral-science thinking — and the re-casting of users as quirky, human actors — can have on how we design products. It’s not merely that people are irrational; it’s that they’re predictably irrational. That predictability means we can anticipate how they’ll react to the choices we give them and the experiences we craft for them. By taking control of the inevitable nudges we’re giving people, and applying the principles of libertarian paternalism, we can design products that better meet user needs, feel more effortless, and are less prone to the dreaded “user error.”

One final note: The behavioral-science principles I cite here are well-understood, their application to product design less so. We have work to do in developing and testing this practice of “irrational design.” And my own understanding of behavioral science is still rudimentary; I’m both hopeful and fearful that real behavioral scientists will come along and rip this piece (instructively) to shreds.
