We’ve seen this before
A recent review of tracking data with one of my e-commerce clients involved a discussion around NPS, or the “Net Promoter Score”, a measure that has been widely adopted in survey and customer satisfaction research. In research and marketing circles, familiarity with NPS is high, as is the 0-10 (thus 11-point) scale from which it is derived. The scale defines three groups: Promoters, Passives, and Detractors. The NPS is the percentage of Promoters minus the percentage of Detractors. I won’t go into the mechanics of how the groups are defined because that, too, is widely known. The issue for my client was that their NPS was low compared with other e-commerce sites and businesses they considered competitors. The question was: Why?
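Before getting to the answer, a quick aside on the arithmetic. The sketch below is a hypothetical illustration of the standard calculation using the conventional cutoffs (Detractors 0-6, Passives 7-8, Promoters 9-10); the ratings in the example are made up.

```python
def net_promoter_score(responses):
    """Compute NPS from a list of 0-10 'likelihood to recommend' ratings."""
    n = len(responses)
    promoters = sum(1 for r in responses if r >= 9)   # 9 or 10
    detractors = sum(1 for r in responses if r <= 6)  # 0 through 6
    # Percent Promoters minus percent Detractors, on a -100 to +100 scale.
    return 100 * (promoters - detractors) / n

# Made-up batch of survey responses.
ratings = [10, 9, 8, 7, 6, 10, 3, 9, 5, 8]
print(net_promoter_score(ratings))  # 40% promoters - 30% detractors -> 10.0
```

Because the score is a difference of two percentages, quite different distributions of responses can produce the same NPS, which is worth keeping in mind when comparing against benchmarks.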
My client runs a dealer network, in which dealers sell their wares through the client’s e-commerce platform. You will recall that NPS is based on the concept of a recommendation to friends or colleagues. In a dealer network, however, dealers are effectively competing with one another, so a dealer who recommends my client’s products is actually helping a competitor, which is not exactly what the dealer had in mind! The same dynamic appears in a number of business-to-business or competitive marketplaces in which a recommendation is, in effect, self-defeating. This makes NPS totally inappropriate for businesses in which a “recommendation” is unnatural. I can recall personal situations when people have asked me for contractor recommendations; I’ll give them names, but I hope they won’t steal my network!
This seemingly small hiccup in the conceptual approach to NPS raised a larger issue. Many senior marketing executives believe that they need but one consumer measure (gathered in customer research) to determine customer health, and so overlook other measures of satisfaction or happiness that have also proven to be valuable indicators. Specifically, overall satisfaction, various measures of quality, perceived value, appropriateness for use, and so on all have their place and, collectively, provide a multi-dimensional view of customer health. There are also other self-reported behavioral measures (e.g., frequency of purchase, sizes, flavors) as well as attitudinal measures that can aid in interpreting the broader consumer mindset.
Never supported by data
In his wonderful book “How Brands Grow”, Byron Sharp thoroughly debunks the notion of NPS, as well as the “thought experiment” that assumed zero cost for customer retention. You may recall that NPS was presumed to measure the level of retention a company could expect to achieve. There was little empirical evidence to support the relationship between high NPS and high customer retention. Sharp goes on to demonstrate that (a) high levels of customer retention are impossible to achieve, even in businesses presumed to have high levels of loyalty, because of natural switching behavior, and (b) even high levels of customer retention cannot generate any significant positive impact on revenue, because the number of truly loyal customers is so small.
Like a bad penny
Yet NPS persists. Why? Initially, NPS adoption was driven by its Harvard Business Review imprimatur, but its persistence today is largely a matter of inertia: it has already become institutionalized at many companies. For those who do not have time to dig below the surface, NPS serves a purpose in that it is simple. And published benchmarks are available, which lends some comfort to marketing management; there is always professional pressure to rely on what is au courant, and the SVP of marketing will look snappy in the C-suite.
I suppose that in organizations with limited resources, which can afford to ask only one summary question, NPS will do. But an overall happiness, satisfaction, or liking measure is always superior. In my statistical work linking evaluative measures, such as intent to purchase, satisfaction, or willingness to recommend, to hard dependent measures of consumption (i.e., transactions or sales), I have consistently found higher correlations for satisfaction than for willingness to recommend (NPS). I suspect this is because “willingness to recommend” is one step further removed from actual product experience or satisfaction. For example, think of a deodorant or toothpaste. You may be very happy with your deodorant, in which case you will report high satisfaction, but will you go out of your way to recommend it to someone else? That doesn’t pass the smell test (sorry, too tempting a pun).
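For readers curious what that kind of linkage analysis looks like mechanically, here is a rough sketch. The column names and the randomly generated numbers are placeholders standing in for a real tracking study joined to transaction records, so only the mechanics are meaningful, not the output.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500

# Placeholder data: in practice these would be actual survey responses
# matched to sales or transaction records for the same customers.
df = pd.DataFrame({
    "satisfaction": rng.integers(1, 11, size=n),        # 1-10 overall satisfaction
    "recommend": rng.integers(0, 11, size=n),            # 0-10 likelihood to recommend
    "sales": rng.gamma(shape=2.0, scale=50.0, size=n),   # e.g., 12-month spend
})

# Correlate each evaluative measure with the hard consumption measure.
print(df.corr().loc[["satisfaction", "recommend"], "sales"])
```

In a real study you would use actual matched data and likely go beyond simple Pearson correlations, but the comparison logic is the same: link each evaluative measure to a hard measure of consumption and see which one tracks it more closely.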
If you are considering a customer satisfaction or tracking program and are exploring what measures to use, experiment a bit. Choose from other measures that are more directly linked to customer happiness and brand health, and build around them a normative database that best suits your business. I highly recommend it (oops)!