You’ve probably gotten this email: “We value your business. Please answer our one-question survey…”
The survey question is, “On a scale of 0 to 10, how likely are you to recommend this product to a friend?”
Take care how you answer. Someone’s job might be riding on whether you pick an 8 or a 9.
While doing research for a client this week, I learned a lot about this common survey question and how it is analyzed. I was amazed to discover that many businesses now build their strategy around the answers to that question.
There is a trademarked process for analysis of the answers. It’s called Net Promoter Score, NPS for short. Here’s how it works.
People who give a 9 or 10 to this question are called “promoters” because they are the ones who actively promote your company or product. Those who answer in the range of 0 to 6 are considered “detractors” because they are “unhappy customers trapped in a bad relationship,” says Bain & Company, one of the three holders of the NPS trademark.
Those customers who answer 7 or 8 are considered “passive,” meaning that they aren’t unhappy but they don’t care enough to promote your company and could be won over by a competitor, the NPS developers declare.
The NPS is derived by subtracting the percentage of detractors from the percentage of promoters. So it’s possible to have a score as low as -100, if all your customers were detractors, or as high as 100, if all your customers were promoters.
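That arithmetic is simple enough to sketch in a few lines of Python. (The function name and the sample scores here are mine, purely for illustration; they aren't part of any official NPS tooling.)

```python
def nps(scores):
    """Compute a Net Promoter Score from a list of 0-10 survey answers.

    Promoters score 9 or 10, detractors 0 through 6. Passives (7 or 8)
    count toward the total number of responses but cancel out of the
    numerator, which is exactly why a 7 or 8 "doesn't count."
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    # Percentage of promoters minus percentage of detractors.
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 3 passives, 3 detractors out of 10 responses:
# (40% - 30%) gives an NPS of 10.
print(nps([10, 9, 9, 10, 8, 7, 7, 6, 3, 5]))
```

All promoters yields 100, all detractors yields -100, and any mix of passives simply dilutes the score toward zero.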
Lots of companies have negative scores, and according to Bain the average firm scores just 5 to 10. In other words, promoters scarcely outnumber detractors.
Whole companies have emerged to offer tools and consulting on this one concept. Whole customer relations programs are built entirely on the answers to the question and its follow-up, “Why did you give that score?” If a company’s NPS misses the target, heads may roll.
See any problems here?
There’s a very basic one: NPS is derived from a survey. And if it’s an emailed survey, as it usually is, the response rate is typically 10 to 15%. That means it’s not a true random sample – customers who are especially happy or especially unhappy are more likely to respond, so the results suffer from self-selection bias. A survey that’s not based on a true random sample is inherently inaccurate.
That’s just one of the reasons that researchers hate NPS. They have also pointed out:
- The answers can vary depending on how soon after the interaction with the company the survey is taken. Our memories are short.
- Results vary from one industry to another. Notes one critic, “Even the creators of NPS acknowledge there’s little correlation to revenue growth in some industries.”
- The scale itself is culturally skewed, so NPS scores for national and global companies are even more problematic.
- If the company doesn’t carefully analyze the follow-up “Why?” question, or worse yet doesn’t ask it at all, the results of the NPS question are easily misinterpreted.
I make a point of responding to customer surveys, and I was pretty startled to think that giving a score of 7 or 8 makes me “passive.” For me, that range of scores means, “If someone asked for a recommendation in this industry or product line, or asked me about this company, I’d give it a positive referral.” Why doesn’t my score count for anything?
As another author pointed out, sometimes even a 9 or 10 doesn’t mean you actually love the company. It might mean, “This is the only real practical choice you have, even though it sucks,” as in the case of an airline that serves a particular city with lots of routes.
When you’re talking about companies that provide products and services for other businesses, the equation gets much more complex. There are often a number of players who make the decisions on these B2B purchases, and they all have different perspectives on the purchase. To get a good read on their satisfaction, you’d have to interview nearly everyone – and even then, the simplistic NPS question doesn’t cover the nuances of what goes into making the purchase.
All of these flaws in NPS haven’t stopped businesses across the country, large and small, from adopting NPS as the Holy Grail of customer satisfaction.
Within a short time of its introduction, analysts were pointing out its flaws, but nothing could stop the tide. NPS caught on, John H. Fleming, Gallup’s chief scientist for marketplace consulting, wrote in 2006:
… primarily because [NPS] takes something that’s been considered complex and makes it astonishingly simple.
Here’s the thing: customer satisfaction IS complex. No single score is ever going to give a business what it needs to know about that.
Or, as another writer noted, “So, sadly, hard work still has something to do with success.”
I get that it’s hard to be in business these days, given how fussy we consumers have all become. I get that it’s expensive to measure and analyze customer satisfaction accurately.
But if you’re going to succeed in business, that’s the price you have to pay.
Since consumer spending is roughly 70 percent of the U.S. Gross Domestic Product, we should all do our part and answer those surveys.
Today’s penny is a 2003, the year that NPS was introduced to the world in a Harvard Business Review article.