While there are wars fought over the three most common metrics, it turns out that the metric isn't what matters.
Many commentators have recently debated the relative merits of customer effort score (CES) vs. net promoter score (NPS). As a leader who remembers the controversy that surrounded NPS when it first came to dominance, I find the debate concerning. I still recall the effort people wasted trying to win the battle against NPS, pointing out its flaws and the lack of academic evidence for it, when in fact NPS was a gift horse we were busy looking in the mouth. I would caution anyone currently worrying about whether CES is the “best metric” to remember the lessons that should have been learned from “the NPS wars.”
For those not so close to the topic of customer experience metrics: although there are many different metrics that could be used to measure the experience your customers receive, three dominate the industry. They are customer satisfaction (CSat), NPS and now CES. These measure slightly different things, but all report on ratings given by customers to a single question. Satisfaction captures emotional feeling about an interaction with the organization (usually on a five-point scale). NPS captures an attitude following that interaction, i.e. likelihood to recommend, on a 0-10 scale. The percentage of detractors (those giving a 0-6 score) is subtracted from the percentage of promoters (those giving 9-10 ratings) to give a net score. CES returns to attitude about the interaction, but rather than asking about satisfaction it seeks to capture how much effort the customer had to put in to achieve what she wanted or needed (again on a five-point scale).
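For readers who want the NPS arithmetic made concrete, the promoter/detractor calculation described above can be sketched as a short function (the example ratings are illustrative, not real survey data):

```python
def nps(scores):
    """Compute Net Promoter Score from 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) count toward the
    total but neither add nor subtract. Result ranges from -100 to +100.
    """
    if not scores:
        raise ValueError("need at least one rating")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Illustrative sample: 5 promoters, 3 passives, 2 detractors out of 10
ratings = [10, 9, 9, 10, 9, 8, 7, 8, 5, 3]
print(nps(ratings))  # 50% promoters minus 20% detractors -> 30.0
```

Note how the passives disappear from the net score, one of the simplifications that critics of NPS often point to.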
The reality, from my experience (excuse the pun), is that none of these metrics is perfect. Each carries dangers of misrepresentation or oversimplification. I agree with Professor Moira Clark of the Henley Centre for Customer Management. When we discussed this, we agreed that ideally all three would be captured by an organization, because satisfaction, likelihood to recommend and effort required are different lenses through which to study what you are getting right or wrong for your customers.
That utopia may not be possible for all organizations, depending on volume of transactions and capability to randomly vary which metrics are captured and the order of asking. But my main learning from a couple of years of “the NPS wars” is that the metric is not the most important thing here. As the old saying goes, “It’s what you do with it that counts.”
After NPS won the war and began to be a required balanced scorecard metric for most CEOs, I learned that this was not a defeat but rather that gift horse. Because NPS had succeeded in capturing the imagination of CEOs, there was funding available to capture learning from this metric more robustly than was previously done for CSat.
So, over a year or so, I came to really value the NPS program we implemented. This was mainly because of its granularity (by product and touchpoint) and the “driver questions” that we captured immediately afterward. Together, these provided a richer understanding of what was good or bad in the interaction, enabled prompt response to individual customers and targeted action to implement systemic improvements.
Now we appear to be at a similar point with CES, and I want to caution against being drawn into another metric war. There are certainly things that can be improved about the way the proposed CES question is framed (I have found it more useful to reword it and capture “how easy was it to…” or “how much effort did you need to put into…”). However, as I hope we all learned with NPS, I would encourage organizations to focus on how they implement any CES program (or enhance their existing NPS program) to maximize learning and the ability to take action. That is where the real value lies.
Another tip: Using learning from your existing research, including qualitative work, can help frame additional questions to capture immediately after the CES question. You can then use analytics to identify correlations. Having such robust, regular quantitative data capture is much more valuable than being “right” about your lead metric.