Why Net Promoter Score May Not Align With Business Results

I just received a great question: “Why do companies have very healthy growth although their NPS is low and, vice versa, why can growth be decreasing although the NPS is very high?” I get asked versions of this question all the time, so I decided to capture my typical answers in this blog post (check out our Net Promoter Score (NPS) Resource Page).

My take: We’ve found a high correlation between NPS and customer loyalty across a large number of industries. But that does not mean that NPS will provide a clear understanding of a company’s business results. There are many reasons why a company’s business might perform differently than its NPS might suggest. Here are some of the common reasons that I’ve seen:

  • NPS is not the ultimate question. In many situations, the proportions of promoters and detractors are roughly correlated with customer loyalty and business success, but that’s not always the case. For example, NPS may not be at all indicative of business success if customers are trapped because of high switching costs, limited competition, the company’s monopolistic power, unique product or service offerings, etc.
  • Relative NPS trumps absolute NPS. In general, health plans have low NPS, yet many of them do well financially. Customers may not be likely to recommend their health plan, but if they don’t believe that there are any better options, then it will not affect their loyalty.
  • B2B roles are under-appreciated. There are different dynamics in B2B situations. If we ask treasury assistants in large companies to provide an NPS for commercial banks, we might believe that it should represent the health of a bank’s business. But what happens if CFOs, who control the banking decisions, give banks a completely different NPS?
  • Non-customers are often overlooked. A retailer may have a high NPS, but still lose share if its products and services start appealing to a narrower audience. This type of situation is often missed, because companies tend to get considerably more feedback from existing customers than from prospective non-customers.
  • Segmentation can alter the analysis. When an organization looks at its overall NPS, it might miss important trends in different customer groups. What happens if NPS is falling for high-value customers and rising for low-value customers? The overall NPS could stay the same or even improve while the company’s results decline (see the sketch after this list).
  • Survey design affects results. Many companies have a mismatch between the way they deploy NPS surveys and the insights they attempt to glean from the data. Companies ask the NPS questions at different times and frequencies, which can affect the overall results. If we ask NPS after a customer service event, then the results will likely be different than if we ask it periodically to a random sampling of customers.
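To make the segmentation point concrete, here is a small sketch (in Python, with invented numbers purely for illustration) of how a blended NPS can hold steady while the score among high-value customers deteriorates:

```python
# Hypothetical illustration: overall NPS stays flat while high-value NPS declines.
# Each segment maps to (NPS, share of respondents, share of revenue) -- invented numbers.
last_year = {"high-value": (40, 0.5, 0.8), "low-value": (10, 0.5, 0.2)}
this_year = {"high-value": (20, 0.5, 0.8), "low-value": (30, 0.5, 0.2)}

def blended_nps(segments, weight_index):
    """Weight each segment's NPS by respondent share (index 1) or revenue share (index 2)."""
    return sum(seg[0] * seg[weight_index] for seg in segments.values())

for label, data in (("Last year", last_year), ("This year", this_year)):
    print(f"{label}: overall NPS = {blended_nps(data, 1):.0f}, "
          f"revenue-weighted NPS = {blended_nps(data, 2):.0f}")
# Overall NPS is 25 in both years, but the revenue-weighted view falls from 34 to 22.
```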

The bottom line: NPS can be an effective metric in many situations, but only if used correctly.

Report: Tech Vendor NPS Benchmark, 2013

We just published a Temkin Group report, Tech Vendor NPS Benchmark, 2013. The research examines Net Promoter Scores and their link to loyalty for 54 tech vendors based on feedback from IT decision makers. We also compared results to the NPS data we published last year. Here’s the executive summary:

We surveyed IT decision makers from more than 800 large North American firms to understand how they view their tech vendors. One of the questions we asked provides Net Promoter Scores® (NPS®) for 54 of those companies. VMware and SAP analytics earned the highest NPS while CSC IT services and Infosys IT services earned the lowest. The overall industry average NPS dropped nine points from last year. Our analysis also examined the link between NPS and loyalty, finding that compared with detractors, promoters are more than six times as likely to forgive a tech vendor if it delivers a bad experience, almost six times as likely to try a new offering from the vendor, and more than three times as likely to purchase more from it this year. When examining the loyalty levels for each vendor, we found that Oracle consulting and VMware clients have the strongest purchase intentions, SAP analytics and Sybase have earned the most forgiveness, and VMware and SAP analytics have the most innovation equity.

Download report for $495 (includes Excel spreadsheet with data)

Here are some of the findings from the research:

  • With an NPS of 47, VMware came out on top, followed closely by SAP analytics with 45. At the other end of the spectrum, four tech vendors have negative NPS: CSC IT services, Infosys IT services, Alcatel-Lucent, and Deloitte consulting.
  • The average NPS in the tech industry went from 33.6 in 2012 to 24.7 in 2013. The percentage of promoters dropped seven points.
  • We found that, compared with detractors, promoters are more than six times as likely to forgive a tech vendor if it delivers a bad experience, almost six times as likely to try a new offering from the company, and more than three times as likely to purchase more from it in 2013.
  • Forgiveness and willingness to try increase steadily starting at a score of 3, while increased purchasing begins steady growth at 5.
  • Promoters most frequently wanted lower prices and better support, while passives and detractors were looking for better support.
  • Oracle outsourcing has the strongest purchase intentions while Trend Micro has the weakest.
  • SAP analytics and Sybase have earned the most forgiveness while Trend Micro has earned the least.
  • VMware has the most innovation equity while Accenture consulting and Intuit have the least.

Download report for $495 (includes Excel spreadsheet with data)

The bottom line: When it comes to NPS, large tech vendors are heading in the wrong direction.

Note: See our 2012 NPS ratings for tech vendors and the post 9 Recommendations For Net Promoter Score along with all of my other posts about NPS.

P.S. Net Promoter Score, Net Promoter, and NPS are registered trademarks of Bain & Company, Satmetrix Systems, and Fred Reichheld.

Report: The Economics of Net Promoter

We just published a Temkin Group report, The Economics of Net Promoter, which examines the link between NPS and loyalty across 19 industries. Here’s the executive summary:

Net Promoter Score (NPS) is a popular metric, but how does it relate to loyalty? We analyzed responses from thousands of consumers and examined the connection between NPS and three areas of loyalty: likelihood to repurchase, likelihood to forgive, and the actual number of times they recommend a company. Compared to detractors, promoters are almost six times as likely to forgive, are more than five times as likely to repurchase, and are more than twice as likely as detractors to actually recommend a company. Examining the data, we also found that consumers who gave a score between 0 and 4 have particularly low levels of loyalty. The analysis examines 19 industries: airlines, appliance makers, auto dealers, banks, car rental agencies, computer makers, credit card issuers, fast food chains, grocery chains, health plans, hotel chains, insurance carriers, Internet service providers, investment firms, parcel delivery services, retailers, software firms, TV service providers, and wireless carriers. Promoters who are likely to repurchase range from 87% for grocery chains to 73% for TV service providers, those who are likely to forgive range from 72% for rental car agencies to 59% for TV service providers, and those who actually recommended a company range from 80% for retailers to 47% for parcel delivery services.

Download report for $295 (includes Excel dataset)

Here’s the first figure from the report, which has a total of 43 figures, including specific graphics for each of the 19 industries in the study.

[Figure: NPSeconomics]

Here’s an excerpt from the first section that examines the data across all industries:

To understand how NPS relates to customer loyalty, we examined NPS scores for companies across 19 industries based on feedback from 10,000 U.S. consumers. The analysis covers more than 95,000 pieces of feedback from consumers about those companies. Examining three areas of loyalty across industries, looking at promoters versus detractors, we found that:

  • Promoters are almost six times as likely to forgive. We asked consumers about their likelihood to forgive a company if it delivered a bad experience and found that 64% of promoters are likely to forgive compared with 11% of detractors.
  • Promoters are more than five times as likely to repurchase. We asked consumers about their likelihood to make additional purchases from a company and found that 81% of promoters are likely to repurchase compared with 16% of detractors.
  • Promoters are more than twice as likely as detractors to actually recommend. In a separate study of 5,000 U.S. consumers, we asked consumers how many times they actually recommended each company. It turns out that 64% of promoters have recommended the company compared with 24% of detractors.

We also examined the level of loyalty across each response on the NPS scale between 0 and 10. This analysis shows that:

  • Super detractors are much less loyal. Forgiveness and repurchase loyalty stay at a consistent low level between 0 and 4 on the scale. Actual recommendations begin to increase after 5.
  • Midpoint attracts low recommenders. When we examine the actual quantity of recommendations across the NPS scale, it turns out that there’s a significant drop in recommendations at the midpoint of the scale, when 5 is selected.
  • Text anchors attract responses. We analyzed the volume of responses across the 11-point scale. Consumers appear to select the three responses with text anchors at a disproportionately high rate: “0,” “5,” and “10.”

Download report for $295 (includes Excel dataset). The Excel file provides all of the data from the 43 figures.

Note: See our report, Net Promoter Score Benchmark Study, 2012, and the post 9 Recommendations For Net Promoter Score along with all of my other resources for NPS programs.

The bottom line: Promoters are more loyal than detractors.

P.S. Net Promoter Score, Net Promoter, and NPS are registered trademarks of Bain & Company, Satmetrix Systems, and Fred Reichheld.

Net Promoter Scores Vary By Region

We recently published a benchmark of Net Promoter Scores of 180 companies across 19 industries. Someone asked me if the scores varied across different parts of the U.S. To be honest, I had never thought about that question and had certainly never researched it.

As you may be able to tell, I have a hard time leaving a question unanswered. So I examined the data by region for all 19 industries. As you can see in the chart below:

  • NPS is highest in the South for 16 out of 19 industries.
  • NPS is lowest in the West for 13 out of 19 industries.
  • There’s a double-digit NPS gap in nine industries.
  • The largest NPS gaps are as follows:
    • Major appliances (21-point gap between South and West)
    • Grocery chains (18-point gap between Midwest/South and West)
    • Hotel chains (17-point gap between South and Northeast)

The bottom line: Want to improve NPS? Survey more from the South and less from the West ;-)

Net Promoter Labels Obscure Actual Recommendation Patterns

We recently published a benchmark of Net Promoter Scores of 180 companies across 19 industries. Within that research, we showed that promoters are more likely than detractors to repurchase. In a previous blog post, we examined how promoters and detractors actually recommend companies.

In this post, we go a step further and look at how consumers actually recommend based on the specific response to the NPS question. As you can see in the graphic below:

  • Zero means no. If someone picks the lowest score on this scale, then they rarely recommend a firm.
  • One to five is a neutral zone. Consumers that choose the next five higher responses have about the same frequency of recommending, between 18% and 29%.
  • Everything counts from six on. Thirty-two percent of consumers who selected six on the scale actually recommended those companies; the level of actual recommendations ramps up from there for each higher score on the scale.
  • NPS labels hide some insight. The NPS process labels people who select “0” to “6” as detractors, “7” or “8” as passives, and “9” or “10” as promoters. These labels may not accurately describe recommendation patterns. For instance, a detractor who selects “0” is quite different than a detractor who selects “6.”
  • Five may be a negative collector. It appears that consumers may be selecting “5” (the midpoint of the scale) when they are relatively upset. It could be that the selection of a “5” out of “10” is considered a failing score for many people (just as a 50 out of 100 points on a test would be seen as failing). This phenomenon could explain the drop-off in recommendations at that level. We’ll continue to study this issue since it might require companies to rethink how they examine their survey results.

The bottom line: Not all promoters and detractors are alike.

Most Promoters Promote And So Do Some Detractors

We recently published a benchmark of Net Promoter Scores of 180 companies across 19 industries. Within that research, we showed that promoters are more likely than detractors to repurchase. In a separate analysis, we examined the actual number of times that people recommended companies and compared that to the NPS rating they gave those companies. As you can see in the graphic below:

  • Promoters are more likely to recommend. Promoters are much more likely than detractors to recommend a company across all industries.
  • Not all promoters recommend. The number of promoters that recommended a company ranges from 47% (parcel delivery services) to 80% (retailers).
  • Some detractors recommend. The number of detractors that recommended a company ranges from 13% for banks to 39% for retailers.

What doesn’t show up in this graphic is the intensity of recommendations. Our detailed analysis examined the frequency of recommendations across all of these industries. Promoters are an average of 3.4x more likely than detractors to recommend companies to three or more friends (ranging from 2.1x more likely for major appliances to 5.7x more likely for banks).

The bottom line: NPS is a good, but not perfect, indicator of actual recommendations.

Report: Net Promoter Score Benchmark Study, 2012

We just published a Temkin Group report, Net Promoter Score Benchmark Study, 2012. It provides NPS data on 175 U.S. companies across 19 industries. Here’s the executive summary:

USAA took the top two spots for its banking and insurance businesses while HSBC came in at the bottom for banking and credit cards. Our analysis of differences across consumer demographic segments showed that NPS tends to go up with age, doesn’t vary much by income levels, and is often highest with Asians. We also asked consumers what would make them more likely to recommend the companies and found that promoters are more likely to select lower prices and detractors are more likely to select better customer service. While there is some debate about the efficacy of NPS, our analysis shows that promoters are much more likely than detractors to purchase more in the future across all industries. To help you implement a successful NPS program, we’ve included eight tips such as don’t believe in an “ultimate question” and use control charts, not pinpointed goals. The industries included in this report are airlines, auto dealers, banks, computer makers, credit card issuers, fast food chains, grocery chains, health plans, hotel chains, insurance carriers, Internet service providers, investment firms, major appliance makers, parcel delivery services, rental car agencies, retailers, software firms, TV service providers, and wireless carriers.

Download report for $295
(includes the data)

The report contains the following components:

  • NPS for 175 companies across 19 industries
  • NPS differences based on age, income, and ethnicity of consumers
  • Improvement areas selected by promoters and detractors by industry
  • Connection between NPS and future purchases by industry
  • Eight tips for implementing a successful NPS program

Download report for $295
(includes the data)

The bottom line: Companies need to give customers a reason to recommend them.

Obama and Romney Promoters By Income and Employment

In my previous two posts, I examined the Net Promoter Scores (NPS) for President Obama and Mitt Romney and the issues that their Promoters care about.

In this post, I examine the percentage of U.S. consumers who are Promoters of each candidate (likely to recommend the candidate to their friends or relatives), based on their annual income levels and their current employment status. As you can see in the infographic below:

  • Obama has the largest advantage with consumers making less than $25,000 per year and the smallest lead with consumers making between $75,000 and $100,000 per year
  • Romney’s support increases with income level
  • Both of the candidates have their strongest support from high-income consumers
  • Obama has the largest advantage with students and the smallest lead with unemployed consumers

The bottom line: Obama’s strongest base is low-income consumers and students.

Issues That Separate Obama and Romney Promoters

In my previous post, I examined the Net Promoter Scores (NPS) for President Obama and Mitt Romney. The research, which is based on a survey of 5,000 U.S. consumers in August, showed that Obama scored higher than Romney. Both candidates, however, have very low NPS (-57% for Romney and -33% for Obama).

In this post, I’m examining the issues that U.S. citizens care about, honing in on the differences between Obama and Romney promoters (consumers that are likely to recommend the candidate to their friends or relatives). We asked consumers about 11 different issues. As you can see in the infographic below:

  • The most important issue for both Obama and Romney promoters is improving the U.S. economy and the bottom issue is the candidates’ religious views.
  • Romney promoters view eight of the 11 issues as being more important than do Obama promoters; the only exceptions are the candidates’ positions on healthcare, gay marriage, and abortion rights.
  • The largest gaps where Romney promoters are more likely than Obama promoters to rate an issue as important are the candidates’ position on U.S. relations with Israel (+26), their position on international terrorism (+16), and their religious views (+9).
  • More consumers prefer Obama’s position across all of the issues, which is not surprising considering that Obama has a larger number of promoters.
  • Consumers show the largest preference for Obama’s vision for the future of the U.S. and his position on healthcare (42%).
  • Consumers show the largest preference for Romney for his plans to improve the U.S. economy (33%) and his position on healthcare (32%).
  • U.S. consumers have the least preference when it comes to the candidates’ religious views.
  • Obama promoters show more preference for Obama’s views than Romney promoters do for his views on 10 of the 11 issues; the only exception is their position on U.S. relations with Israel.
  • Obama promoters show the largest preference gap when it comes to the candidates’ positions on abortion rights (+12) and gay rights (+10).

The bottom line: Consumers really care about the economy and a vision for the future

What Drives Net Promoter Scores (NPS) in IT?

A previous post examined Net Promoter Scores (NPS) for tech vendors and the relationship between NPS and market share based on feedback from IT decision makers within large firms. Since I’ve had questions about that post, I decided to examine a common question: What’s driving those NPS scores? It turns out that the answer (no surprise) is customer experience.

We examined a number of metrics and their relationship with NPS in two areas:

  • Correlation (R). This looks at how connected one metric is to another, ranging from -1.0 to 1.0. A correlation above 0.5 is strongly positive and above 0.7 is very strongly positive.
  • Slope. This looks at the change in NPS associated with a one-point change in the metric. A higher slope means that a change in the metric produces a larger change in NPS. (A small computational sketch of both measures follows this list.)
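As a rough sketch of how these two measures can be computed (in Python, with hypothetical vendor scores and a standard least-squares fit, not Temkin Group’s exact method):

```python
import numpy as np

# Hypothetical paired scores for a handful of vendors (not the actual study data).
experience_rating = np.array([55, 60, 63, 68, 72, 75])  # e.g., a 0-100 experience rating
nps_scores = np.array([10, 18, 22, 30, 35, 41])

r = np.corrcoef(experience_rating, nps_scores)[0, 1]             # correlation, -1.0 to 1.0
slope, intercept = np.polyfit(experience_rating, nps_scores, 1)  # NPS change per one-point change in the rating

print(f"R = {r:.2f}, slope = {slope:.2f}")
```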

Our first analysis examined NPS scores versus the Temkin Experience Ratings for Tech Vendors. It turns out that there was a very strong correlation (R = 0.77) and a slope of 1.13.

We then examined the correlation and slope between NPS and components of the Temkin Experience Ratings as well as with product and relationship satisfaction scores.

Here are some observations from the analysis:

  • Customer experience is critical. The Temkin Experience Ratings have the highest impact on NPS, with the highest overall correlation and slope.
  • You have to be easy to do business with. The highest individual correlation (.75) and slope (1.11) are with the accessible element of the Temkin Experience Ratings, which looks at how easy the company is to work with.
  • Relationship trumps product. It turns out that the correlations are about the same for relationship satisfaction and product satisfaction, but the slope is much higher for relationship satisfaction.
  • Cost of ownership stands out. When it comes to the slopes, cost of ownership (.99) stands out amongst the satisfaction items. Support of account team (.86) is also relatively high.

The bottom line: To improve NPS, improve customer experience.

You can purchase this data for $295. The Excel spreadsheet contains NPS, Temkin Experience Ratings, relationship satisfaction, and product satisfaction data for 60 tech vendors in the analysis as well as for 28 others with sample sizes of less than 60 respondents.

Net Promoter Score and Market Share For 60 Tech Vendors

Temkin Group recently surveyed 800 IT professionals from large companies and asked them a series of questions about tech vendors. This research has fueled some of our previous posts: Temkin Experience Ratings for Tech Vendors, How IT Professionals Share Feedback About Vendors, and Tech Vendors: Benchmarking Product and Relationship Satisfaction of IT Clients.

We also asked the IT professionals to rate each tech vendor on the Net Promoter Score (NPS) scale.* NPS is based on one question: How likely are you to recommend the tech vendor to a friend or colleague? IT professionals choose an answer on a scale from 0 (not at all likely) to 10 (extremely likely). Responses are put into one of three categories:

  • Promoters (score 9 or 10)
  • Passives (score 7 or 8)
  • Detractors (score 0 to 6)

NPS is calculated as the percentage of promoters minus the percentage of detractors. (If you’re interested in best practices for using NPS, read my post 9 Recommendations for NPS which is also part of our VoC resource page).
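As a minimal illustration of that arithmetic (in Python, with made-up ratings rather than our survey data):

```python
# Minimal NPS calculation sketch (hypothetical responses, not survey data).
def nps(scores):
    """Return the Net Promoter Score for a list of 0-10 ratings."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Example: ten hypothetical IT decision makers rating one vendor.
ratings = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
print(nps(ratings))  # 4 promoters, 3 detractors -> NPS of 10.0
```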

Here is the NPS for 60 tech vendors, ranging from Intel, Microsoft and Cisco in the 50s down to Compuware, Unisys, Cognizant, and Capgemini below 10.

We also asked the IT professionals how much their company was planning to spend in 2012 compared with 2011 and mapped this data with NPS. It turns out that we found four bands of performance in this market based on NPS scores:

  • More than 40: These companies have much higher purchase momentum and are poised to grab a lot of market share
  • Between 28 and 40: These companies have above average purchase momentum and are poised to gain market share
  • Between 23 and 28: These companies have below average purchase momentum and are poised to lose market share
  • Less than 23: These companies have much lower purchase momentum and are poised to give up a lot of market share

You can purchase the data in an Excel spreadsheet for $195. The file includes details on the 60 tech vendors shown in this blog post as well as 28 other tech vendors with sample sizes too small to be included in our published research. The data includes sample sizes for the companies, the percentages of promoters and detractors, the NPS, and the percentages of companies with increasing and decreasing spending plans.

*Note: Net Promoter, NPS, and Net Promoter Score are trademarks of Satmetrix Systems, Bain & Company, and Fred Reichheld

CX Insights From Marriott And JetBlue

I recently spoke at the Customer Experience Strategies Summit in Toronto. While I was there, I was able to catch a few of the other speakers. I really enjoyed hearing from the Marriott and JetBlue speakers. Both companies did very well in the 2011 Temkin Experience Ratings; Marriott was the top rated hotel chain and JetBlue was the second rated airline (behind Southwest). Here are some of the interesting details that they shared:

Scott Allison, VP of Canadian Operations, Marriott:

  • Allison shared this great comment: “culture trumps brand.” Marriott links its strategy as a company with its strategy as an employer. Bill Marriott still visits a lot of hotels, and when he does, he first goes to employee areas like employee entrances and break rooms. Employees are also trained on what makes their brand special.
  • Hotel general managers need to hit targets for both customer experience and employee satisfaction to get their bonuses.
  • Something goes wrong, even if it’s a small thing, during about one-quarter of stays, so hotels need to be good at recovery. That’s why the Ritz-Carlton empowers associates to spend up to $1,000 per day per guest to improve someone’s stay. The staff reviews guest situations at the beginning of every day; this meeting is called “Stand Up” at Marriott and “Line Up” at the Ritz.

Vicky Stennes, VP of Inflight Experience, JetBlue:

  • Net Promoter Score (NPS) is one of the key measures that the company uses. It also uses J.D. Power, which breaks measurements into “people-related” and “non-people-related” categories.
  • A couple of years ago, the company noticed a slip in its “people-related” scores so it started a program called “Culture is Service” (CIS). [Note: CIS was discussed in the CEO's letter to shareholders in JetBlue's 2010 Annual Report]. As part of CIS, more than 1,000 “crew members” (across the organization) went through training focused on three areas: Inform: Educate everyone on JetBlue’s current state of service, the measurements that it tracks, and share insights on how crew member behaviors affect customer experience; Engage: Elicit an open dialogue with real-time cross-functional problem solving; and Inspire: Give them a sense of the concept of unexpected moments and recognize the great work of crew members over the company’s first 10 years of operations.
  • JetBlue sees the CIS program as a success because employee NPS scores improved among training attendees.
  • Moving ahead, they are looking to add a few things to the CIS training: cross-functional design sessions and education on linking NPS to specific behaviors and to revenue.
  • Stennes shared data that showed correlation between pilot in-flight communications and NPS. They use this data to show pilots that the way they communicate with passengers plays an important role in passenger loyalty.
  • The company also tracks a “Net Helpfulness Score” along with NPS for each flight. It will start using these scores to evaluate crews across their different flights.
  • Stennes also shared some great data: Every 5 promoters lead to 2 new customers, and every 16 detractors lead to the loss of 1 customer. A promoter is worth an extra $33 to JetBlue ($27 from referrals and $6 from loyalty), while a detractor is worth $104 less than average. A one-point change in JetBlue’s NPS is worth $5 to $8 million.

The bottom line: Great brands spend a lot of time focusing on their people

CX Mistake #7: Obsessing About Detractors

In this series of posts, we examine some of the top mistakes companies make in their customer experience management efforts. This post examines mistake #7: Obsessing about detractors. Customer experience programs often spend most of their time fixing problems so customers don’t dislike them, but they don’t spend enough time figuring out how to make customers love them.

It’s always important to create operating processes that deliver consistently good experiences. But consistency is a minimum requirement for strong brands. To make a deep connection with customers, experiences need to reinforce other key attributes of a brand. In a recent Temkin Group study, we found that only 14% of companies target campaigns at their brand promoters. Customer experience efforts aren’t purposely ignoring advocates, but the environment in which they operate pushes them in that direction. Here are some of the contributing factors:

  • Customer feedback overemphasizes problems. Customers are most articulate about their dislikes. In recent Temkin Group research, we found that 34% of US consumers give feedback to a company after a very bad experience, but only 21% do the same after a very good experience. So normal customer listening mechanisms push companies to focus on problems.
  • Understanding dissatisfaction does not help you understand loyalty. It might seem reasonable that focusing on dissatisfaction would help you learn about loyalty. But it turns out that the attributes that make people unhappy are often not the same things that make them very happy. If the brakes in my car don’t work, then I’m very unhappy with the car. If my brakes work, then I don’t think about them. So companies often lack insight into what causes customers to become advocates.
  • Executives overreact to problems. When executives hear about a single customer issue, they often push hard on the organization to fix the problem. When they get feedback from a happy customer, they just say “great job” but don’t push their organization to make any changes. Over time, this creates a lot more energy towards fixing problems than towards creating customer advocates.
  • …and they don’t like talking about emotions. Customer experience is a combination of what consumers do, think, and feel. But executives are often more comfortable focusing on the most tangible items, what customers do and think. Given this bias, corporate plans inadvertently focus on creating satisfied customers, not engaged brand advocates.

Here are some tips for avoiding this mistake:

  • Create a stream of activity around advocacy building. You shouldn’t stop finding and fixing problems, but you need to make sure that you are also identifying and implementing things that create brand advocates. So establish a separate track of activity around “raving fans” so that it gets unique attention. 
  • Translate the brand into desired attitudes. Customer experience management efforts should create customer attitudes and behaviors that support business objectives. So make sure you explicitly describe the desired attitudes of customers that will reinforce your brand and use that information when you design and examine experiences.
  • Map your customer’s journey. One of the most effective tools for understanding how customers feel about your company is a customer journey map. If you don’t understand how customers view their interactions with you, then you won’t be able to turn them into advocates.
  • Use alternative research. Traditional market research approaches of surveys and focus groups can uncover what people like and dislike, but they may not uncover what people really desire. Why? Because customers often can’t articulate what they really desire. That’s why you should incorporate qualitative research techniques like contextual inquiry, shadowing, and journaling.
  • Infuse emotion in the design. Since experiences are made up of functional, accessible, and emotional attributes, it’s critical that customer experience designs incorporate all three attributes. Make sure you put desired feelings into the design requirements.
  • Don’t track average or net scores. While coming up with a single metric may be interesting, it blurs the distinction between really happy and really unhappy customers. Make sure you have a measurement and goal around really happy customers. If you’re using Net Promoter Score, for instance, start tracking promoters and detractors separately (see the illustration after this list).
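To illustrate why the “net” can hide important differences, here is a tiny sketch with invented numbers (not Temkin Group data): two companies post the same NPS with very different mixes of delighted and unhappy customers.

```python
# Hypothetical example: the same NPS can hide very different customer mixes.
companies = {
    "Company A": {"promoters": 0.40, "passives": 0.50, "detractors": 0.10},
    "Company B": {"promoters": 0.30, "passives": 0.70, "detractors": 0.00},
}

for name, mix in companies.items():
    nps = 100 * (mix["promoters"] - mix["detractors"])
    print(f"{name}: NPS = {nps:.0f}, promoters = {mix['promoters']:.0%}, "
          f"detractors = {mix['detractors']:.0%}")
# Both score an NPS of 30, yet Company A has far more delighted -- and far more
# unhappy -- customers than Company B, which calls for very different actions.
```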

The bottom line: Design for love, not just for eliminating hate

CX Mistake #9: Falling In Love With A Metric

In this series of posts, we examine some of the top mistakes companies make in their customer experience management efforts. This post examines mistake #9: Falling In Love With A Metric. Companies often get enamored with a metric like Net Promoter Score (NPS) and lose sight of what’s really important, making improvements.

Customer experience efforts absolutely need metrics and measurements. While there’s value in collecting that data to measure or track customer experience, the true power comes when it provides insight into where and how to make improvements. But some organizations over-emphasize the metric. Companies going down this path can run into problems when they:

  • Rush into compensation. Tying business results to metrics can be a good thing. But tying too much compensation too early to any metric can also cause a lot of problems. If people have a significant part of their pay tied to a metric they don’t understand or don’t know how to affect, then they will either ignore it, get bitter about it, or find ways to “game” the system. Customer experience doesn’t improve if salespeople are calling customers and begging them to give higher ratings on a survey.
  • Can’t answer “why.” Reporting on a metric can highlight strengths and weaknesses of a company’s overall customer experience. So it’s understandable that some executive teams push for widespread use of those metrics without caring about the overall set of information collected from customers. But if the company does not understand “why” customers are either happy or unhappy, then they can’t systematically improve customer experience and positively affect the metric.
  • Overuse a metric. Understanding if a customer is happy overall with an organization is quite different than understanding if her needs were met during a specific service call. But some companies blindly use the same metrics for each of those areas. A metric like NPS, for example, may be appropriate for examining relationship strength, but it’s not necessarily good for evaluating interactions.
  • Forget their uniqueness. Every business has a unique set of strengths, weaknesses, goals and ambitions. But when it comes to customer experience metrics, companies often want to use the same measures as everyone else. While this may enable benchmarking comparison to other firms, it does not necessarily measure how the company is progressing towards its unique goals.

Here are some tips for avoiding this mistake:

  • Treat relationships and interactions differently. The questions you ask a customer about how they view your company can (and often should) be quite different than those that you ask about an interaction. Think about different questions and methods for five different types of insights: Relationship tracking, interaction monitoring, continuous listening, project infusion, and periodic immersion.
  • Deploy shadow metrics before making large incentive changes. To help leaders in your company understand the impact of customer experience incentives, put in place the metrics you are thinking about for a couple of periods before actually making them “live.” That way people can see how the metrics will affect them before they actually do.
  • Establish performance bands, not absolute targets. Customer feedback metrics can be a bit jittery. Sometimes it can be very hard to explain small movements since there’s always some variance due to sampling limitations. Rather than establishing “a number” as the goal, set targets for high and low scores. Success comes from consistently exceeding the low band. (A simple band-setting sketch follows this list.)
  • Measure relevant attitudes and behaviors. Businesses aren’t in the business of getting random people to recommend them. They hope to get that type of loyalty from successfully executing their mission. Develop measurements that test the attitudes and behaviors of target customer segments, making sure they line up with your specific business and brand strategy.
  • Build a robust voice of the customer (VoC) program. Creating isolated metrics will not drive change in an organization, especially when people don’t understand what drives the metric. Companies need to develop a voice of the customer (VoC) program that continuously shares actionable customer insights across the organization.
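On the performance-band recommendation above, here is one simple way to derive high and low bands from historical scores rather than a single target (a sketch in the spirit of a control chart; the readings and limits are invented, and your own method may differ):

```python
import statistics

# Hypothetical monthly NPS readings for one relationship survey.
history = [31, 28, 33, 30, 27, 32, 29, 31]

center = statistics.mean(history)
spread = statistics.stdev(history)
low_band, high_band = center - 2 * spread, center + 2 * spread  # control-chart-style limits

print(f"Performance band: {low_band:.1f} to {high_band:.1f} (centered on {center:.1f})")
# Success means consistently staying above the low band, not hitting one exact number.
```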

The bottom line: Use metrics to improve the experience, not just measure it

9 Recommendations For Net Promoter Score (NPS)

This week is the Net Promoter Conference in London. Since these events often spur a ton of questions about Net Promoter Score (NPS), I put together one of my periodic posts about NPS. If you’re not familiar with NPS, it’s based on asking customers a question like this:

How likely are you to recommend <COMPANY> to a friend or colleague?

Respondents are categorized as “Promoters,” “Detractors,” or “Passives” based on their answers. The Net Promoter Score (NPS) is calculated by subtracting the percentage of Detractors from the percentage of Promoters (Passives are ignored).

My take: Let me start looking at NPS with some data points from the report, The State Of Customer Experience Management, 2011:

  • 48% of large companies (more than $500M in revenues) are using NPS
  • 67% of those using NPS report positive results (15% say it’s too early to tell)
  • 84% of large firms with voice of the customer programs (including those that use NPS) report success from those efforts

NPS can be a valuable metric, but only when incorporated within a strong voice of the customer (VoC) program. Here are a handful of overall recommendations about NPS:

  1. Stop dreaming about an “ultimate question.” Having worked with dozens of organizations on their NPS efforts, I can tell you that the NPS question is not nirvana. Even the most successful users of NPS ask customers a series of questions and get feedback through a portfolio of mechanisms.
  2. Look for magic in the “why.” To some degree, it’s useless to know if someone is likely or unlikely to recommend you if you don’t also understand why they feel that way. So you need to make sure customer feedback helps you understand why customers feel the way that they do. Which leads to my next recommendation…
  3. Focus on improvements, not questions. Feedback is cheap, but customer-insightful actions are precious. The goal for any feedback mechanism (like NPS) is to drive improvements in your business. Successful NPS programs have strong closed-loop VoC programs that go from detection of customer perceptions to deployment of improvements (see my post about the 6 Ds of a voice of the customer program).
  4. Don’t lose sight of segments. An overall NPS score across your customers may be a good metric for aligning focus across the company, but it’s not very diagnostic. A good VoC program needs to track this type of data across key customer segments and understand which interactions (“moments of truth”) are driving those scores.
  5. Understand the elements of experience. When it comes to making improvements, you need to understand the three core elements of any experience: Functional, Accessible, and Emotional. A good program needs to provide insights into how customers perceive each of these elements.
  6. De-emphasize the “N” in NPS. NPS improves by eliminating Detractors or by increasing Promoters, but those changes can also offset each other. So the “netting” of the scores removes important clarity. Companies need to look at the rise and fall of Promoters and Detractors independently, since the changes needed to affect these areas are often quite different.
  7. Tap into the power of the language. There’s a lot of data to suggest that other measures such as the ACSI’s satisfaction index are as good as NPS (many people argue that it’s better, but I don’t want to enter that debate). What sets NPS apart is the wonderfully clear language around “Promoters” and “Detractors.” Make sure that the education across the company focuses heavily on those terms.
  8. Build a strong VoC program, with or without NPS. The overall program is more important than the choice of a metric like NPS. So make sure you focus on building a strong VoC program whether or not you use NPS (check out our VoC resource page).
  9. Remember, this is a long-term journey. Companies can make short-term improvements with superficial changes, but long-term success requires institutional capabilities. Start by understanding the 6 laws of customer experience and create a roadmap for building four customer experience core competencies: Purposeful Leadership, Compelling Brand Values, Employee Engagement, and Customer Connectedness.

The bottom line: Successful NPS implementations require strong VoC programs
