Are You Asking Customers the Wrong Questions? Understanding Customer vs. User Research

Dave Feldman
7 min read · Aug 8, 2023



Talk to your customers, constantly. I doubt that’s controversial: most of us have embraced the “lean” principle that gathering customer context is an ongoing responsibility of every product team. But the wrong conversation can yield the wrong conclusions. To ensure you’re having the right conversations with your customers, start by understanding the difference between customer and user research.

Customer vs. User Research

How we research — how we gather that all-important customer context — can dramatically affect our outcomes. And yet, while research is a complex discipline unto itself, many product teams are asked to do it without deep knowledge of that discipline. All too often, teams do little more than show a customer the roadmap or some mockups and say, “So, what do you think?”

Ideally, you have a dedicated researcher on staff to help you — but a lot of startups don’t. In that case, a few simple guidelines make a tremendous difference in the value, quality, and applicability of your research. A great place to start: understand the difference between customer research and user research.

That distinction can be subtle, because your customer and your user may be the same person: the difference is in the lens you apply, not in who they are. Here’s how I think about it:

  • A customer is somebody who buys or chooses your product. They make the choice to purchase or download it in the first place, or to renew or upgrade. So the customer lens is all about what people want, think they want or need, or will pay for. It addresses someone’s perceptions about themselves and their organization.
  • A user is somebody using your product — interacting with it day to day. They may also be a customer. The user lens focuses on what people need, will use, or currently do. It deals with actual behavior.

If you’re at a B2B SaaS company, this distinction is probably familiar: you have different personas at each customer company, some of whom are involved in the purchasing process (customers and maybe users) and some who aren’t (just users). If you’re direct-to-consumer, the distinction may be less familiar: that consumer is both your buyer/customer and your user, so it’s easy to conflate the two.

Mixed-Up Lenses

It’s easy to lose sight of the customer/user distinction as we do our discovery and research. For example: several years ago I worked with a company whose customers regularly asked for a particular feature — let’s call it Cowbells. Quarter after quarter, Cowbells were the number-one request that Sales and Account Management brought to the Product team. Product investigated via surveys and interviews, and repeatedly concluded Cowbells weren’t worth building: customers weren’t actually going to use them. They were definitely asking for them, but they didn’t really need them.


This yielded months of difficult, unproductive debate. Sales and Account Management were convinced they needed more Cowbells to close deals; Product was convinced they’d be building something nobody used.

The problem: everyone was right. Go-to-Market was looking through a customer lens. Customers weren’t saying, “We need Cowbells so that we can accomplish task X” (a user need). They were saying something else. Maybe their management was pressuring them for Cowbells based on something they’d read. Maybe they were future-proofing without a clearly defined goal. Maybe the competition had Cowbells and they wanted to check a box.

Those answers were out there. But because Sales and Account Management were hearing the request through the customer lens, that was the lens Product needed to research through. Instead, Product was researching through a user lens and coming up empty.

Impact on Methodology

So what does that mean, in concrete terms? How do we deliberately choose to do customer research vs. user research? Well, it’s about which methods we choose, and how we structure them. I’ll illustrate via a few common techniques.

Surveys & Interviews

These are workhorses for gathering qualitative context across both lenses, but the approach should differ:

  • Customer Research: Ask questions about how customers perceive their current tools and workflows. “What’s working for you? What’s not?” — “How much would you pay for X?” — “If you had to choose between X, Y, and Z, which would you choose?” — “What frustrates you about this?” — “What’s your impression of this landing page / mockup?”
  • User Research: Ask questions that reflect users’ day-to-day behavior, or probe their mental models: “How many hours a day do you spend doing X?” — “What’s the most rewarding / frustrating part of your day?” — “How would you describe this feature to a friend?” — “Where would you click if you wanted to do X?”

Painted-Door Tests

A painted-door test sets up a landing page and directs a targeted group of people to it, typically via ads. It’s a great way to test messaging, segmentation, and target personas. Painted-door tests are customer research because they get purely at what people want; they can’t dig into whether people will use it, understand it, or stick with it.
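
The output of a painted door is essentially a conversion rate per audience, so the practical question is whether differences between audiences are signal or noise. Here’s a minimal sketch of that check: plain Python, a standard two-proportion z-test, and made-up numbers rather than any real campaign.

```python
from statistics import NormalDist

def compare_conversion(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did audience A click the painted door
    at a genuinely different rate than audience B?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided
    return p_a, p_b, p_value

# Hypothetical: 1,000 visitors per ad audience, counting "Sign Up" clicks.
p_a, p_b, p = compare_conversion(conv_a=48, n_a=1000, conv_b=81, n_b=1000)
print(f"Audience A: {p_a:.1%}  Audience B: {p_b:.1%}  p = {p:.3f}")
```

Whatever the result, it still only tells you what people want; whether they’d actually use the thing is a user question.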

Email & Ad Campaigns

Campaigns are a superset of painted-door tests: targeted messaging leading to a destination. As such they’re primarily customer-research tools, but because the destination could be the actual product, with sufficient top-of-funnel volume they can be used to answer user questions, e.g., “Which user profiles are most likely to activate / retain?” (Or if you prefer, “What’s the ‘market’ in our product-market fit?”)
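
If the campaign destination is the live product, a simple join of ad segment to downstream behavior answers that question. Here’s a sketch in pandas, with hypothetical segment names and columns:

```python
import pandas as pd

# Hypothetical campaign export: one row per signup, with the ad segment
# that brought them in and flags derived from in-product behavior.
signups = pd.DataFrame({
    "segment":      ["founder", "founder", "marketer", "marketer", "engineer", "engineer"],
    "activated":    [True, False, True, True, False, True],
    "retained_30d": [True, False, False, True, False, False],
})

# Activation and 30-day retention per targeted profile --
# the "market" in product-market fit.
print(signups.groupby("segment")[["activated", "retained_30d"]].mean())
```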

Product Analytics

Tools like Firebase Analytics or Heap will tell you who’s doing what in your product, and are useful across both lenses:

  • Customer Research: Is the messaging on our website leading to conversions? Is that upsell we just put in the product delivering results? Do people who click our Schedule a Demo button follow through? What’s the right CTA to put above the fold?
  • User Research: Do users who open Feature X complete it and, if not, where do they drop off? Which users who sign up are still with us a month later (that is, how might we define activation)? How is that activation metric trending? Is there a meaningful difference in overall retention between users who’ve tried Feature X and those who haven’t? What features simply aren’t getting used?
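
To make the first of those user questions concrete, here’s a minimal funnel sketch. The event names are hypothetical, but most analytics tools can export raw events in roughly this shape:

```python
import pandas as pd

# Hypothetical event log: one row per event, exported from your analytics tool.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 4],
    "event":   ["open_x", "configure_x", "complete_x",
                "open_x", "configure_x",
                "open_x", "configure_x", "complete_x",
                "open_x"],
})

# Funnel: how many distinct users reach each step of Feature X?
funnel = ["open_x", "configure_x", "complete_x"]
reached = {step: events.loc[events["event"] == step, "user_id"].nunique()
           for step in funnel}
for step in funnel:
    print(f"{step:12s} {reached[step]} users ({reached[step] / reached['open_x']:.0%})")
```

The biggest drop between adjacent steps is a good place to aim your usability testing.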

Usability Tests

A usability test involves asking people to walk through a task — in the product itself, or via mockups or a prototype. It’s a staple of tactical user research: we’re trying to determine whether the thing we’ve built meets a user need and does so in a way they understand (matches their mental model).

These are regularly misused as customer research: we ask the subject what they thought of whatever they tested, whether they’d use it, whether they’d pay for it, whether they’d recommend it to a friend. These are fine questions for getting at a user’s thinking (“What led you to that answer?”) but risky when we treat them as customer research because:

  • Users are change-averse. This is the famous-but-apocryphal “faster horse” problem: the bolder the feature and the bigger the shift it demands in the user’s mental model, the more likely they are to say, “I wouldn’t use this.” Treating a usability test as customer research is a great way to kill your biggest bets.
  • The answers to customer-level questions will rest on a user’s preexisting mental model, combined with an artificial, abstract scenario. Their positivity will rest on their level of understanding, which again favors the most tactical changes.
  • The user’s evaluation will conflate their interest in some form of the product/feature (which is what you care about, customer-wise) with this particular form of it (which is what you care about, user-wise). It’s not, “Would you want feature X?” but rather, “Would you want this specific experience of feature X?” A slick prototype might give you a false positive; a couple of easily fixed usability issues might produce a false negative.

Customer Satisfaction (CSAT) Prompts

CSAT prompts are single-question surveys, typically delivered as popups or by email, designed to measure aggregate satisfaction and loyalty.

(The most popular methodology is Net Promoter Score, which treats a 6 out of 10 as negative and ignores an 8 entirely. I’ve been unable to find an evidence-based rationale for it, and have found plenty debunking it, but everybody uses it anyway. Nonetheless, I believe a well-constructed CSAT approach is possible.)
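
For reference, the conventional NPS arithmetic is simple: the percentage of promoters (scores of 9–10) minus the percentage of detractors (0–6), with passives (7–8) counted only in the denominator. A minimal sketch:

```python
def nps(scores):
    """Net Promoter Score on a 0-10 scale: % promoters (9-10)
    minus % detractors (0-6); passives (7-8) only dilute the total."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical responses: 2 promoters, 3 passives, 3 detractors.
print(nps([10, 9, 8, 8, 7, 6, 6, 3]))  # -12.5
```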

I’d argue CSAT is a key performance indicator (KPI) rather than either customer or user research. Combined with other data, though — segmentation, user behavior, engagement & retention, or just a simple follow-up question — CSAT can be a solid lead-in to both lenses.

Feedback

The feedback you get from users and customers — directly, or indirectly via go-to-market teams — is an incredibly valuable source of data, especially when you take the time to aggregate it.

In its raw form, all such feedback is customer research. It can directly influence how you market your product, and what you may need in order to close a sale.

But it can also be a powerful indicator for where you need user research. When someone takes the time to complain (or compliment), that’s a signal about something. What is that something? You’ll have to follow up with user research to find out.
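
That aggregation doesn’t require heavy tooling. Even a simple tally of hand-tagged themes, as in this sketch with hypothetical tags, will show you where the signal is loudest and where to point that user research:

```python
from collections import Counter

# Hypothetical: each feedback item gets tagged with a theme as it arrives,
# whether from support tickets, sales notes, or app-store reviews.
feedback = [
    {"source": "sales",   "theme": "cowbells"},
    {"source": "support", "theme": "slow-sync"},
    {"source": "sales",   "theme": "cowbells"},
    {"source": "reviews", "theme": "slow-sync"},
    {"source": "support", "theme": "cowbells"},
]

# Which themes keep coming up, and through which channels?
print(Counter(item["theme"] for item in feedback))
print(Counter((item["theme"], item["source"]) for item in feedback))
```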

That’s not a comprehensive list, but it covers some of the most common techniques. In walking through them, I hope I’ve demonstrated not only why the distinction between user and customer research is so important, but also that focusing on it can clarify and improve your methodology.

If you’ve read a few of my articles, you know I’m a fan of setting outcome-centric goals for just about everything. Research is no exception, but it can be challenging because it’s open-ended and exploratory by nature. Thinking about these two lenses allows us to establish focused goals without losing that open-endedness — to ensure the questions we’re asking, how we’re asking them, and the way we interpret the answers move our products forward.

Want help gathering better customer context? Looking to improve product quality or process overall? I’m available for consulting and fractional executive engagements, as well as full-time positions in Europe. Drop me a line.


Dave Feldman

Multidisciplinary product leader in Berlin. Founder of Miter, Emu. Alum of Heap, Google, Meta, Yahoo, Harvard. I bring great products & teams to life.