My Latest 9 Recommendations For NPS

We study hundreds of companies that use Net Promoter® Score (NPS®) and work with dozens of others that are at different stages of NPS deployment. We also publish one of the most comprehensive annual NPS benchmark studies. This gives us a unique view into how organizations use this popular customer experience metric.

Every year or so, after being asked a series of similar questions about NPS, I write a blog post with a collection of my thoughts. Before I get to my current thinking (which has remained relatively consistent over the years), here are some previous posts you might be interested in reading:

As you probably know, there are people who love NPS and people who hate it. I am neither. I’ve seen NPS used as an effective metric to drive change, and I’ve seen it used in ways that harm organizations. I could say the same about almost every other metric I’ve seen in use.

Having established all of that, here are nine of my current recommendations:

  1. The choice of metric is not as important as people think. We rarely see a company succeed or fail based on the specific metric that it chooses. That doesn’t mean that you can choose a ridiculous metric, but most reasonable metrics provide the same potential for success (and failure). In general, NPS is a reasonable metric to choose, as our data shows that it often correlates to customer loyalty. As organizations mature, we try to get them to use metrics that are more closely aligned to their brand promises.
  2. Driving improvements is what’s critical. Instead of obsessing about the specific metric being used, companies need to obsess about the system they put in place to make changes based on what they learn from using the metric. Successful NPS programs systematically take action on insights they uncover. If the program is working well, then the company isn’t debating scores. Instead, they’re continuously making changes to create more promoters and eliminate detractors.
  3. Promoters & detractors need individual attention. The most important thing you can do with NPS is to understand what is driving it. It turns out that the things that create promoters are not just the flip side of the issues that create detractors. So you need to separately identify changes that create promoters and changes that decrease detractors. All too often, companies focus just on detractors. This helps to fix problems, but it does not identify opportunities to propel your organization. By focusing on what creates promoters, you get the opportunity to engage the organization in uplifting discussions—instead of just beating the drum about what’s broken.
  4. Sampling patterns really, really matter. The approach for sampling often has a very significant impact on NPS results (and results from other metrics as well). If you have multiple segments of customers and they each have a different NPS profile (as many do), then your overall NPS can change wildly based on the mix of those customers that are included in the NPS calculation. In B2B, this may come from combining results from enterprise accounts with smaller clients, or mixing responses from executive decision makers and end users of your products. In B2C, the variance may come from mixing data between new customers and repeat customers.
  5. NPS is for relationships, not transactions. Asking people if they would recommend a company isn’t a good question to use after an interaction. If a customer is a detractor on an NPS survey deployed right after a call into the contact center, for instance, then it doesn’t necessarily mean that there was a problem with that interaction. The contact center might have done a great job on the call, but the customer may still dislike something else about the company. If the contact center interaction had been problematic, then the customer’s NPS score might be temporarily lowered and not reflective of the customer’s longer-term view of the company.
  6. NPS is for teams, not individuals. Since NPS asks about the likelihood to recommend a company, it actually reflects the actions of more than one person. So if you want to look for someone to hold responsible for NPS results, think about making it a shared metric across a large group, not an individual KPI. Many companies that fall in love with NPS start applying it to every part of their business, trying to give everyone their own NPS. While it’s worthwhile to look for improvements across the business based on NPS, you run into problems when you try to create too many levels of NPS.
  7. Compensation can be a real problem. When an organization shares a common metric (like NPS) and its people collectively have some compensation tied to it, then it can help align everyone’s focus on customer experience. But if the compensation gets too significant, then people start focusing too much on the number—questioning its validity and strong-arming customers—instead of looking for ways to make improvements. Remember, the majority of your discussions should be about making improvements, not data.
  8. Target ranges make more sense than single numbers. NPS is an inherently jittery metric; there’s only a porous line keeping passives from becoming promoters or detractors. And the situation is magnified by the sampling issues described above. That’s why we see many customer insights groups wasting a lot of time running around trying to explain small movements in their companies’ NPS as executives overreact to minor fluctuations. Instead of setting NPS goals as a specific number, consider defining a range (similar to a process control chart). As a start, think about adopting a 3- to 5-point range. That way you only react to results outside of the range or to multiple periods of increases or declines.
  9. There are four loops to close. When people talk about closed loop and NPS, they often mean contacting customers after they answer the NPS question. But that immediate response is just one of what we call the four customer insight-driven action loops: immediate response, corrective action, continuous improvement, and strategic change. Any NPS program should put in place processes to close all four loops.
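To make the arithmetic behind recommendations 4 and 8 concrete, here is a minimal sketch (in Python, with hypothetical segment numbers) of the standard NPS calculation and of how the sampling mix alone can swing a blended score by double digits:

```python
# Minimal sketch of the standard NPS calculation (0-10 scale) and of
# segment-mix effects. All customer numbers below are hypothetical.

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Two segments with different NPS profiles (e.g., enterprise vs. SMB)
enterprise = [9] * 60 + [8] * 25 + [5] * 15   # NPS = 60 - 15 = 45
smb        = [9] * 30 + [8] * 30 + [5] * 40   # NPS = 30 - 40 = -10

# Same company, two different sampling mixes
mix_even = enterprise + smb                   # segments weighted 1:1
mix_tilt = enterprise * 3 + smb               # enterprise oversampled 3:1

print(nps(mix_even))   # 17.5
print(nps(mix_tilt))   # 31.25
```

The company’s underlying performance is identical in both mixes; only the sampling weights changed. That kind of swing is why a 3- to 5-point target range is more defensible than a single-number goal.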

The bottom line: NPS success comes from the process, not the metric.

Net Promoter Score, Net Promoter, and NPS are registered trademarks of Bain & Company, Satmetrix Systems, and Fred Reichheld.

Use Customer Insights To Close Four Loops

Companies that have voice of the customer (VoC) programs (including NPS) often put in place a closed-loop process. Those efforts often focus on closing a single loop: immediately following up with a customer after he or she responds to a survey. But this represents only one of four loops that companies need to close.

In the report, Make Your VoC Action-Oriented, we introduced the concept of four closed loops.

Here’s an example of the four loops for a restaurant chain:

  • Immediate Response. Reach out to a restaurant customer who responded on a survey that the bathroom was dirty and help take care of her ongoing concerns.
  • Corrective Action. Get the manager or employee to clean the bathroom in that restaurant.
  • Continuous Improvement. Create a new process for restaurants to check and clean bathrooms on a regular basis.
  • Strategic Change. As part of new restaurant formats, design bathrooms so that they don’t require as much time from employees to keep them clean.

The bottom line: Make sure to build out four closed loops.

Sydney Signage Considers Customer Journeys

I’m having a great time in Australia, enjoying the country while (hopefully) sharing some strong CX skills and knowledge during events in Sydney and Melbourne. One of the things that I noticed in Sydney were the road signs. Here’s an example.


The signage answers the basic question (it’s Yurong Street), and it also answers other questions for people who may want to go to Woolloomooloo, the airport, or Bondi.

This is a great example of Customer Journey Thinking. Whoever put together these signs was thinking about what travelers are trying to accomplish, not just focusing on their immediate interaction of getting the name of a street.

Thanks to all of the wonderful people (and creatures) that we’ve had the opportunity to meet (and to introduce to the red iGuy from our logo)!

Report: Five C’s of Mobile VoC Disruption

We just published a Temkin Group report, Five C’s of Mobile VoC Disruption: Best Practices for Embracing the Power of Mobile in Your Voice of the Customer Program.

As mobile continues to grow in importance, companies will need to renovate their voice of the customer (VoC) programs. Why? Because mobile is more than just another communications channel – it is transforming the way that companies and customers interact. To help companies modernize their VoC programs to account for this increase in mobile usage, we’ve identified the key areas in which mobile is different from other channels, what we call the “Five C’s of Mobile VoC Disruption”: Condensed, Comprehensive, Current, Conversational, and Contextual. These disruptive characteristics will force companies to redefine how they capture, share, and act on customer insights. We’ve identified more than 20 best practices that span all areas of a VoC program, including soliciting in-the-moment feedback for key interactions and accelerating the sharing of useful insights. In order to use mobile successfully, companies need to evolve through three stages of change: 1) Mobile-Enabled, 2) Mobile-Adjusted, and 3) Mobile-First.

Download report for $195

Here’s an overview of the Five C’s:

[Figure: The Five C’s of Mobile VoC]


Report: Emotion-Infused Experience Design

We just published a Temkin Group report, Emotion-Infused Experience Design.

Emotions play an essential role in how people make decisions. Consequently, how a customer feels about their experience with a company has the most significant impact on their loyalty to that company. And yet despite their importance, both customers and companies agree that organizations do a poor job of engaging customers’ emotions. To help companies create a stronger emotional connection with customers, we’ve developed an approach called Emotion-Infused Experience Design (EIxD). To master EIxD, organizations must continuously focus on three questions: “Who exactly are these people (who happen to be our customers)?” “What is our organizational personality?” and “How do we want customers to feel?” This report offers both advice and examples about how to apply these three questions across four facets that affect emotion: senses, feelings, social, and values. And to help infuse these practices across the organization, we have also identified some strategies for how to turn employees into agents of EIxD.

Download report for $195

Our research shows that emotion is often a missing link in customer experience. While emotions may seem ephemeral and subjective, we developed a concrete methodology you can use to design for emotion. We call this methodology “Emotion-Infused Experience Design” (EIxD), and we define it as:

An approach for deliberately creating interactions that evoke specific customer emotions.

To master EIxD, you must ask (and answer) three questions throughout the entire design process:

  1. Who exactly are these people (who happen to be our customers)? You cannot design emotionally engaging experiences without a solid grasp on who your target customers are—what they want, what they need, what makes them tick.
  2. What is our organizational personality? Research shows that people relate to companies as if they are fellow human beings rather than inanimate corporate entities.
  3. How do we want our customers to feel? People are inherently emotional beings, and every interaction they have with you will make them feel a certain way—whether you intend it to or not.

To address the three questions of EIxD, this report shows how to design around four elements of emotion: senses, feelings, social, and values. Here are two of the 26 figures in the report:

[Figures: Two Parts of Emotion; Coke and Starbucks Emotions]


Three Ideas to Re-Humanize Patient Experience

I was recently interviewed for an article that discusses a post where Fox News journalist John Stossel describes his experience as a lung cancer patient at the New York-Presbyterian Hospital.

First of all, I hope that Stossel’s treatment is successful. And although I don’t fully agree with his analysis of the industry, I do agree with his observation “…I have to say, the hospital’s customer service stinks.” Yes, there is a problem with patient experience.

I’m reminded of this picture from a post that I wrote in 2009, which comes from Cleveland Clinic’s 2008 Annual Report.

[Figure: Cleveland Clinic 2008 Annual Report]

With all of the focus on costs and liabilities, the medical system has forgotten about the soul of the patient. It’s become dehumanized.

The wellbeing of a patient often takes a back seat to rigid processes and procedures, and there’s little understanding of how to help patients make increasingly important financial/medical trade-offs. It’s not that doctors, nurses, and hospital staff don’t care. It’s just that the entire system has conspired to de-emphasize humanity.

This problem is not unique to healthcare. In research that we did in 2013, we found that only 30% of employees have what Aristotle called “practical wisdom,” the combination of moral will and moral skill. This is the capability that Barry Schwartz explains is critical for infusing humanity within organizations.

While there are many structural issues in U.S. healthcare (which I won’t go into here), there are still many things that can be done to re-humanize the patient experience. Here are some ideas:

  • Apply better experience design. Health care leaders should learn and apply the principles of People-Centric Experience Design: align with purpose, guide with empathy, and design for memories.
  • Develop a value mindset. As patients take on more of the direct financial burden for healthcare, doctors must do more than recommend treatments and procedures. They must help patients understand the value of those activities, so that they can make smart financial/medical trade-offs.
  • Build decision-support technology. Patients should be able to understand the efficacy and full costs of the treatments and procedures that they are being asked to “purchase.” Health plans need to take the lead in providing tools for making this information transparent, and empowering patients to make better decisions.

The bottom line: It’s time to re-humanize healthcare.

 

Epidemic of Emotionless Experience Design

As I’ve discussed many times on this blog, customers experience interactions across three dimensions: Success, Effort, and Emotion. So how effective are companies at proactively designing for those elements? Not very.

In our latest CX management study, we surveyed 252 companies with at least $100 million in annual revenues and asked them about their experience design effectiveness. As you can see in the graphic below:

  • Only about one in 10 companies is very good at proactively designing for any aspect of customer experience.
  • More companies are good at designing for success (completion of interactions) than for effort or emotion, but fewer than half of companies consider themselves good in this area.
  • Emotion is the weakest link, as only about one-third of companies think they are good at proactive emotional design.

[Figure: Experience Design Effectiveness]

If companies don’t improve their experience design skills, then their customer experience will never be better than inconsistent. And the biggest problem is emotion, which happens to drive the most loyalty.

If you want to fix this problem, we’ve got some help. Keep an eye on this blog for a new Temkin Group report on emotional experience design, which we’ll be publishing in a couple of weeks.

The bottom line: Join the Intensify Emotion movement!

Our CX Data Doesn’t Match Industry Benchmarks, Now What?

I am often asked some version of this question:

We just saw the <Temkin Experience Ratings, Temkin Group’s NPS benchmark, Forrester’s CXi, J.D. Power, the ACSI> and it is completely different from what our internal data is telling us. How should we reconcile the two data points?

Given that I created several of those industry measurements, it’s fair for people to ask me that question. Here’s my answer…

Different Measurement Systems Deliver Different Results

There is no perfect or “ultimate” customer measurement system, since we can never know what every customer is thinking at every moment in time. So all measurement systems are, by definition, somewhat flawed. This is an important point, because we need to let go of the desire to identify which one has the “right” information.

Every customer measurement system can differ along a number of dimensions. In particular, these are often key differences between your internal system and industry benchmark studies:

  1. Who’s being surveyed? Temkin Experience Ratings, for instance, asks questions to randomly selected consumers who have interacted with companies. Internal measurements are often less random, since they may neglect people who haven’t provided contact information or people who are no longer customers.
  2. When are they being surveyed? Temkin Experience Ratings, for instance, asks questions during January. Internal measurements may ask questions throughout the year, after specific interactions, or during specific periods of time.
  3. What’s being asked? Temkin Experience Ratings, for instance, asks three questions (covering Success, Effort, and Emotion) on a seven-point scale. Internal measurements can be almost anything, including Net Promoter Score, which is standardized on an 11-point scale, though we’ve seen companies use 7- and 10-point scales as well.
  4. How is the metric calculated? Temkin Experience Ratings, for instance, is an average of net scores for Success, Effort, and Emotion, which are calculated by taking the percentage of 6s and 7s, and then subtracting the percentage of 1s, 2s, and 3s. Internal measurements may be average scores, they may be segmented by different customer groups, they may be top box or top 2 box, or just about anything.
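As a concrete illustration of the calculation described in item 4, here is a small Python sketch (the ratings below are invented for illustration) of how a Temkin Experience Ratings-style score is built from the three net scores:

```python
# Hypothetical sketch of the net-score arithmetic from item 4.
# Responses are on a 7-point scale; the ratings below are invented.

def net_score(ratings):
    """% of 6s and 7s minus % of 1s, 2s, and 3s."""
    top = sum(1 for r in ratings if r >= 6)
    bottom = sum(1 for r in ratings if r <= 3)
    return 100.0 * (top - bottom) / len(ratings)

success = [7, 6, 6, 5, 4, 3, 6, 7, 5, 2]   # net score: 50 - 20 = 30
effort  = [6, 6, 5, 5, 4, 4, 3, 7, 5, 5]   # net score: 30 - 10 = 20
emotion = [7, 5, 5, 4, 4, 3, 3, 6, 5, 2]   # net score: 20 - 30 = -10

# The overall rating averages the three net scores
rating = (net_score(success) + net_score(effort) + net_score(emotion)) / 3
print(round(rating, 1))   # 13.3
```

An internal metric computed on the very same responses—say, a top-box average—would produce a different number, which is one more reason internal and external measurements rarely match.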

Given that internal measurement systems typically differ from industry measurements across those four items, it shouldn’t be a surprise that they often deliver different results.

My Take: Rely on Your Internal Data

Instead of trying to find which one of the metrics is correct, I recommend that you:

  • Understand the difference between the internal and external measurement systems (starting with the four items above).
  • Learn what you can from each of them. Maybe the Temkin Experience Ratings shows that you are lower with the general population, but your data shows that you are really doing well with high value customers.
  • Improve your internal measurement system. Most companies we’ve seen have significant opportunities for improving their internal customer measurement systems. Make sure the focus is on building a system that drives improvement, not one that just keeps score.
  • Whenever you can, rely on your internal data. Why? Because you can do more segmentation of the results, track changes to specific customers over time, and go deeper into questions about what’s driving the data. These are all things you may need to drive improvements.

The bottom line: Focus on improving, not on reconciling metrics

Analytics Obstacle to Avoid: Forgetting to Be Relevant

Every day, analysts find a myriad of insights that could provide significant value for their organizations. Unfortunately, many (very possibly most) of them are ignored. What’s getting in the way?

In a recent webinar for Clarabridge, I discussed five customer analytics mistakes to avoid. One of the mistakes is “Forgetting to be relevant.” Rather than trying to replicate my entertaining banter, I put together this figure showing an example of the obstacle… and the opportunity to overcome it.

The key lesson is described in the graphic:

Analytical findings must be translated into meaningful terms for the people who need to take action on the insights.

And remember…

  • Analytics are meaningless unless they lead to action.
  • You need to translate insights into a language that stakeholders understand.
  • People want to know what’s in it for them.

The bottom line: You may need to focus less on the analytics, and more on the business.

CX Metrics: Immature, But Improving (Infographic)

Here are some of the insights from the report, State of CX Metrics, 2015.

[Infographic: The State of CX Metrics]

You can download (and print) this infographic in different forms:

Report: What Happens After a Good or Bad Experience, 2016

We just published a Temkin Group report, What Happens After a Good or Bad Experience, 2016. This is our annual analysis of which companies deliver the most and least bad experiences, how consumers respond after those experiences (in terms of sharing those experiences and changing their purchase behaviors), and the effect of service recovery (see last year’s report).

Here’s the executive summary:

We asked 10,000 U.S. consumers about their recent interactions with 315 companies across 20 industries, and compared results with similar studies over the previous five years. More than 20% of the customers of Internet service providers and TV service providers reported a bad experience, considerably above the rates for any other industry. AirTran Airways, Time Warner Cable (TV service and Internet service), Comcast (TV service), and HSBC delivered bad experiences to at least one-quarter of their customers. At the same time, less than 3% of Michael’s, Advance Auto Parts, Whole Foods, Publix, Subway, Vanguard, Trader Joe’s, and GameStop customers reported having bad experiences. We examined the combination of the volume of bad experiences and the resulting revenue impact and created a Revenues at Risk Index for all 20 industries. At the top of the list, TV service providers and rental car agencies stand to lose at least 6.5% of their revenue from bad experiences. Conversely, less than 2% of the revenues for retailers and supermarket chains are at risk. The companies that recovered very poorly after a bad experience lost sales from 63% of their customers, more than 2.5 times as many as companies that recovered very well. Companies that do a very good job at recovering after a bad experience have more customers who increase spending than those who decrease spending. After a very bad or very good experience, consumers are more likely to give feedback directly to the company than they are to post on Facebook, Twitter, or third-party rating sites. Regardless of the channel, consumers are more likely to discuss a very bad experience than a very good one. While the way that consumers give feedback has not changed much since last year, the volume of Twitter usage grew for both positive and negative experiences. Piggly Wiggly, US Cellular, Fifth Third, The Hartford, TriCare, and PSE&G face the potential for the most negatively biased feedback from customers.

Download report for $195

Here are excerpted versions of 4 (out of 15) graphics in the report:

Applauding Mobile eGift Card Receiving Experiences

In a recent report, we evaluated mobile eGift card buying experiences using Temkin Group’s SLICE-B experience review methodology. As part of the process, we also received a number of eGift cards. So we took a look at the experience through the eyes of the eGift card recipients. Rather than do an entire experience review, we decided to just give kudos for some of the better practices that we found:

  • Petco includes the sender’s email address with a helpful tip about saying thank you, making it convenient for the recipient to thank the sender.
  • Amazon clearly defines the next steps in the process, telling the receiver how to redeem her Amazon.com gift card in their original email, easing any potential anxiety about how to continue with the process.
  • Macy’s anticipates the receiver’s needs by including a section on featured help topics specific to the receiver, such as: “Can I use my Gift Card at Macy’s stores and online?”, “Can I reload my Macy’s gift card?”, and “Macy’s store locations and hours.”
  • Jo-Ann demonstrates consistency across the experience through the inclusion of its brand colors and logo on every email, reassuring the recipient that the company is fully connected to the brand.
  • Michael’s appeals to the excitement the recipient feels when she discovers she has a new gift card. The email features phrases such as “special delivery” and “congratulations” at the top of the note, eliciting an immediate, positive emotional response.


Data Snapshot: Media Use Benchmark, 2016

We just published a Temkin Group data snapshot, Media Use Benchmark, 2016. This is our annual analysis of how much time consumers spend using different media channels (see last year’s data snapshot).

Here’s the data snapshot description:

In January 2016, we surveyed 10,000 U.S. consumers about their media usage patterns and compared the results to similar data we collected in January 2015, January 2014, January 2013, and January 2012. Our analysis examines the amount of time consumers spend every day watching television, browsing the Internet (for both work and leisure), reading books (both print and electronic), reading newspapers (both print and electronic), listening to the radio, reading a print magazine, and using a mobile phone. This data snapshot breaks down the results by income level, education level, and, most expansively, by age.

Download report for $195

Here’s a portion of the first figure from the data snapshot that contains 12 data-rich charts. As you can see, over the past five years:

  • Time spent with mobile web/apps has increased the most, followed by using the Internet at work and at home.
  • Time spent with TV, radio, books, and newspapers has declined.

[Figure: Changes in media use, 2012–2016]


Report: 2016 Temkin Experience Ratings

We published the 2016 Temkin Experience Ratings, the most comprehensive benchmark of customer experience. In the sixth year of the Ratings, we analyze feedback from 10,000 U.S. consumers to rate 294 organizations across 20 industries. Here’s the executive summary:

2016 marks the sixth straight year that we’ve published the Temkin Experience Ratings, a cross-industry, open-standard benchmark of customer experience. This year, Publix and H-E-B earned the top two spots, and supermarket chains overall took six of the top 11 spots. At the other end of the spectrum, Fujitsu received the lowest score of any company, closely followed by Health Net. Five other health plans joined them in the bottom 11. To generate these ratings, we asked 10,000 U.S. consumers to rate their recent interactions with 294 companies across 20 industries and then evaluated their experiences across three dimensions: success, effort, and emotion. Publix and H-E-B earned the highest ratings for success, while Publix, O’Reilly Auto Parts, True Value, and Save-A-Lot earned the highest for effort, and Publix, Chick-fil-A, and Residence Inn earned the highest for emotion. And when we looked at who had the best and the worst ratings for each industry, we found that USAA actually earned the highest ratings in two industries, while Comcast received the lowest ratings in two industries. Amazon.com, USAA, Holiday Inn Express, and Residence Inn outperformed their industry averages by the most points, while Fujitsu, Motel 6, and HSBC fell behind by the most points. Although all industries declined between 2015 and 2016, rental car agencies and health plans experienced the most dramatic drops. Meanwhile, Coventry Health Care, Con Edison of New York, and True Value improved the most over the last year, and Volkswagen dealers, Fairfield Inn, and Fujitsu dropped the most. To improve customer experience, companies need to master four competencies: Purposeful Leadership, Compelling Brand Values, Employee Engagement, and Customer Connectedness.

Download report for free. You can also download the dataset in Excel for $395.

See our FAQs about the Temkin Experience Ratings.

Also, see individual snapshots of all 20 industries.

The Temkin Experience Ratings are based on evaluating three elements of experience:

  1. Success: How well do experiences meet customers’ needs?
  2. Effort: How easy is it for customers to do what they want to do?
  3. Emotion: How do customers feel about the experiences?

Here are the top and bottom companies in the ratings:

[Figure: Top and bottom companies in the 2016 Temkin Experience Ratings]

***See how your company can reference these results or
display a badge for top 10% and industry leaders***

Read more of this post

The Emotional Decline From New Purchase To Customer Service

How do consumers feel about their purchases and subsequent customer service interactions? To find the answer, we asked 10,000 U.S. consumers about those experiences across 11 different industries. We used their responses to calculate the Temkin Emotion Ratings. As you can see below:

  • Across all industries, purchasing provides a more positive emotional response than customer service. The gap in Temkin Emotion Ratings ranges from 11 points (health plans) to 49 points (TV/Internet service).
  • New car purchases earn the highest Temkin Emotion Ratings.
  • Customer service interactions with TV/Internet service providers earn (by far) the worst emotion ratings (6%). The next worst emotional experience–health plan customer service (18%)–is three times better than the TV/Internet service providers.
  • Purchasing a new health plan provides the lowest emotional rating of any purchase, but it also has the smallest gap when compared to the emotional ratings for health plan customer service.

[Figure: Temkin Emotion Ratings for purchasing and customer service]

The bottom line: Customer service is an emotional trough.
