Nuggets of CX Insight from Ian Golding

Yesterday I had the chance to have a quick chat with my friend and world-renowned CX specialist, Ian Golding, who took a few minutes from his busy agenda to give the community (yet again) some nuggets of CX insight.

I had two questions for Ian:

  1. How can we get our organisations to treat CX as a business discipline?
  2. What is the most important skill of a successful CX professional?

Ian Golding is a member of the CXPA, and one of the few recognised training providers of the CCXP – Certified Customer Experience Professional – certification.

He is also the founder and CEO of CXC – Customer Experience Consultancy, and the author of one of the best CX books on the market – Customer What? – which I definitely recommend.

ROI of CX: How can NPS affect revenue?

Not long ago I shared a blog post entitled Calculate ROI of CX: a simple example where I used Customer Satisfaction (CSAT) as a Customer Experience metric and customers’ Average Spend as a business metric. Recently I was asked about the impact of another popular CX metric, Net Promoter Score (NPS), on the bottom line.

Most of you know that NPS measures the customer’s loyalty to the brand. It measures the customer’s “long-term” happiness, and tries to predict what customers will do in the future. And you also know that NPS is calculated by subtracting the % of Detractors from the % of Promoters.

Measuring NPS, comparing your NPS to the competition’s, and bragging about a high NPS might be fun. But in the end, it could be useless if you cannot show your senior leadership or C-suite how it impacts the company’s bottom line.

Truth is loyalty means much more. Sure, you want customers to buy your product. But more than that, you want them to buy into your company – your values, your mission, and your care for each and every client – and when they do that, you will see a reflection in your NPS, and you will be well on your way to increased revenue and sustainable growth.

So, how do you prove to your board that having a higher NPS impacts revenue and growth positively? You can start by stating that higher NPS scores usually result in 4 very tangible things:

  1. Higher Retention Rates
  2. Increased up-sell and cross-sell
  3. Lower cost to serve
  4. Lower marketing costs (due to word-of-mouth)

But let’s get to the fun part: calculating the impact of NPS on the company’s revenue, so you have some data and facts to back up your pitch. For this example I created a scenario of a company with 1 million customers, and used average spend as a business metric.

Let’s say that Promoters represent 54% of customers and spend $500 per year, Detractors represent 14% of customers and spend $100 per year, and the remaining 32% of Passives spend $200 per year… NPS would be 40 and the revenue $348m

Now let’s say we were to convert 10,000 Detractors into Promoters… NPS would be 42 and the revenue $352m

Now let’s convert another 10,000 Detractors into Promoters… NPS would be 44 and the revenue $356m.

The correlation between NPS and revenue is obvious. In this scenario, converting 5,000 Detractors into Promoters (roughly 3.6% of them) would move the NPS needle by 1 point, which would in turn increase the revenue by $2m in a year.
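A minimal Python sketch reproduces these numbers. Note that the Passive segment’s spend of $200 per year is my assumption – it is what makes the totals above add up:

```python
def nps_and_revenue(customers, promoter_pct, detractor_pct,
                    promoter_spend, detractor_spend, passive_spend):
    """Return (NPS, annual revenue) for a simple three-segment model."""
    passive_pct = 1.0 - promoter_pct - detractor_pct
    nps = (promoter_pct - detractor_pct) * 100
    revenue = customers * (promoter_pct * promoter_spend
                           + detractor_pct * detractor_spend
                           + passive_pct * passive_spend)
    return nps, revenue

# Baseline: 54% Promoters at $500/yr, 14% Detractors at $100/yr,
# 32% Passives at an assumed $200/yr -> NPS 40, revenue $348m
print(nps_and_revenue(1_000_000, 0.54, 0.14, 500, 100, 200))

# Convert 10,000 Detractors (1% of customers) into Promoters
# -> NPS 42, revenue $352m
print(nps_and_revenue(1_000_000, 0.55, 0.13, 500, 100, 200))
```

Each further 10,000 conversions adds 2 NPS points and $4m, which is where the rule of thumb of roughly $2m per NPS point in this scenario comes from.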

Note: An interesting study from Satmetrix shows that, among the various CX metrics, NPS has the highest correlation to profit and growth. You can also see from the chart below that CSAT seems to have the lowest correlation.

Alt-Tab or ⌘-Tab is not an integration

It was almost 10 years ago that I visited a Tesco Customer Engagement Centre in Dundee (Scotland) and another one in Cardiff (Wales). Tesco is the 3rd-largest retailer in the world. The company turns over more than 60 billion GBP, employs 450,000 people, and (at the time) had almost 20 million Tesco Club Card customers.

Of those employees, 2,000 were customer service representatives (aka “agents”) working in those two locations and from home (c. 300). They were receiving tens of thousands of contacts every day, via phone, email, chat, social media, etc. And despite their high Average Handling Time (AHT) they had a low Customer Satisfaction (CSAT).

In other words, they were neither being efficient, nor resolving customers’ issues.

I had been in several contact centres before, but this was the first time I realised how Herculean the task performed by customer service reps was. Each one of them had a phone, a headset, a keyboard, a monitor. And on that monitor I counted circa 15 different applications open. One of them was a web browser with several tabs open.

Among the applications were: CRM (home-grown), Commerce (from Oracle), Fraud & Finance (undisclosed), ERP (various home-grown and off-the-shelf), Chat (from Bold), Telephony (from Cisco), CTI (for telephony), Workforce Management (from Verint), Email (MS Outlook), Collaboration (MS Lync), Knowledgebase (various wikis and MS SharePoint pages), Scanning (MS Document Imaging), and more apps like a Notepad (and all of them also had a physical notepad and pen by their keyboard).

And in the web browser, the various tabs were open on the various Tesco websites (for clothing, wine, groceries, mobile, etc.), Tesco internal portals, Google, and at least 9 different tabs for delivery company pages like Yodel, Hermes, Mojo, DPD, Ceva, Metapack, Middlewich, Click-Spares, FIRA.

I sat down with a few of their agents, watching them deal with customer contacts. I could not believe the amount of effort they had to put in just to reply to a question that had a straightforward answer. And the unbelievable pain they had to go through when the enquiry was not simple to resolve.

And I noticed on their keyboards that the “Alt” and “Tab” labels had worn off those keys – such was the amount of time they spent flicking through screens. It was actually difficult for me, at the start, to keep up. My eyes were aching – and I was only watching, not even trying to read a thing.

The truth is, almost 10 years later, many companies still work like this. And research has shown that 20% of customer service agents’ time is spent searching for data in the various siloed systems (be it customer, transactional, or operational data, as well as knowledge to resolve queries).

Companies need to have different systems to store and process different types of data. And companies need to have different applications to manage and analyse that data. Actually, the bigger the company, the likelier it is to need a complex tech stack and architecture.

However, what companies don’t need is to ask their front-line employees to go through hell, logging into and using all those systems, whilst on the phone with a frustrated, hopeless, or angry customer. Agents need to focus on empathising with the customer and on resolving the problem.

Customer service teams need ONE simple and easy-to-use application / user interface that provides:

  1. unified conversation-focused workspace
  2. channel-agnostic workflow
  3. quick and easy channel-switch
  4. contextual knowledge at the fingertips
  5. interface to surface data from back-end systems

Off-the-shelf software applications already offer most (if not all) of the above. The challenge doesn’t lie with technology. On the contrary, technology is available to resolve that challenge and support the needs of companies, employees and customers.

What companies need to do is stop thinking that Alt-Tab or ⌘-Tab is an integration and invest in providing their employees the one tool that will allow them to become more efficient and effective, ultimately delivering a better customer service and experience.

6 ways to better setup IVR, the phone villain

IVR (Interactive Voice Response) is probably the acronym that annoys us, customers, the most. I have had countless experiences where it frustrated me to the point that I stopped doing business with companies.

Recently, one of my parcels was delayed. I had been promised a 3-day delivery and 8 days had passed. I called the company from which I bought the product, and they told me to contact the courier.

This in itself is a very good example of bad customer experience – as they should call the courier they work with (not me!) and get me an update – but that is not the point of this post.

It was a nightmare to find a phone number. It took me several minutes digging into the courier’s website. Another example of bad customer experience – but, again, that is not the point of this post.

It was a Spanish courier. I was greeted by a Spanish-speaking IVR (luckily I’m Portuguese, so I was able to understand) that asked 3 questions (department, reason, parcel number) before asking for a post-code.

I entered my post-code. The IVR said “Sorry, this is not a valid 5-digit Spanish post-code” and hung up. I was shocked and furious. Three things were wrong with this IVR setup:

  1. The company delivers abroad (in Europe) but only accepts Spanish post-codes
  2. The call is dropped without giving another chance to get the post-code right
  3. There is no option to skip the post-code and talk to a live agent

Having been around for a while and knowing how these systems work, I called again, went through the first 3 questions, and when I got to the post-code question I entered “12345” (yes, I know… smart ass).

Any guesses?… Of course, it accepted it and put me through to the live agent, who quickly clarified why the parcel was delayed.

It doesn’t need to be like this. IVR doesn’t need to be the villain of the phone channel. IVR is a great technology that can enable great customer experiences, while making companies much more efficient.

IVR allows customers to interact via keypad or speech recognition. It can provide quick (pre-recorded) answers to our questions, avoiding wait. It can also put us through to the right agent / department, avoiding hand-offs.

From my point of view, the problem with most IVR we experience is twofold:

  1. IVR menus are often designed with a focus on internal processes and workflows. That makes things easier for the agent who picks up the phone, at the expense of customer effort – which is a massive driver of dissatisfaction and disloyalty.
  2. IVR menus are often poorly configured with an over-engineered setup. They use complex features that try to cover every possible scenario, but make it very painful and frustrating for the customer to navigate, oftentimes becoming a labyrinth without an exit.

So, in order to use this technology right my advice is:

  1. Design IVR trees with a customer-centric approach, and use layman’s terms and language;
  2. Keep IVR menus as short and simple as possible, with a maximum of 5 options (ideally 3), even if that requires agents asking additional questions – believe me, when it comes to the phone channel, that is much better than leaving customers lost in an IVR maze.
  3. Do not squeeze into the IVR menus irrelevant information or marketing messages (special offers, campaigns, etc.) – customers called you because they need help, not for you to try and shove another product down their throats.
  4. Monitor the IVR carefully and frequently, to check how customers are navigating through it, and if they are landing where they are actually going to be helped most. Check IVR route versus reason for calling.
  5. Allow customers to either correct their answer (if it is deemed invalid) or to go back and amend the previous answer.
  6. Always provide a shortcut to a live agent, even if that means compromising routing or personalisation. Customers who select this option will know they cannot expect agents to guess who they are or what they’re calling about.
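To make points 5 and 6 concrete, here is an illustrative sketch of an input-collection loop – hypothetical code, not any vendor’s IVR API – that allows retries and always offers an escape hatch to a live agent instead of hanging up:

```python
AGENT_SHORTCUT = "0"  # hypothetical key for "talk to a live agent"

def collect_input(prompt, is_valid, read_digits, max_attempts=3):
    """Ask a question, allow the caller to retry on invalid input,
    and always offer a route to a live agent instead of hanging up."""
    for _ in range(max_attempts):
        answer = read_digits(f"{prompt} (press {AGENT_SHORTCUT} for an agent)")
        if answer == AGENT_SHORTCUT:
            return ("agent", None)   # shortcut to a live agent (point 6)
        if is_valid(answer):
            return ("ok", answer)
        prompt = "Sorry, that did not look right. Please try again"
    # Out of attempts: route to an agent anyway, never drop the call (point 5)
    return ("agent", None)

# Example: accept any 4-8 digit post-code rather than only Spanish ones.
status, code = collect_input(
    "Please enter your post-code",
    is_valid=lambda s: s.isdigit() and 4 <= len(s) <= 8,
    read_digits=lambda prompt: "12345",  # stand-in for real telephony input
)
```

The key design choice: an invalid answer re-prompts rather than hanging up, and exhausting all attempts degrades to a human, not to a dropped call.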

The “I just do what I’m told” experience

Empathy. Accountability. Ownership. Three things that are absolutely crucial for the delivery of great customer experiences. More than that, they should be core to every relationship and everything we do in life. Still, most so-called “Customer Service” people (from senior leadership to front-line agents) get it wrong.

I bought a flat, in Portugal, and had to contact the utilities company, EDP – Electricity of Portugal, to change the account details (from the previous owner to my name).

The agent I spoke with was really nice and attentive. He told me I had to provide them with document A, and followed up the call with an email explaining what that document was and where I should send it.

A few days later I got a call from EDP. An agent told me someone needed to come by to do a (paid) service, due to the lack of documentation. I explained that I had sent it a couple days earlier, via email following instructions provided by her colleague. She wasn’t aware “I’m in a different department and just do what the system tells me to do”.

The day after next, I got another call from EDP. An agent told me someone needed to come by to do a (paid) service, due to the lack of documentation. I explained the same thing. He told me that actually I needed to send document B as well. I wasn’t aware, and asked why I hadn’t been informed earlier. “I cannot take responsibility for my colleagues’ actions. I’m just doing what I have to do”.

The agent also told me that until the process was complete, the system would “flag” every other day and someone would call me, regardless of the case being in progress. “Ok”, I said, “that doesn’t make much sense, and it will only force me to repeat myself to every agent who calls”. I guess that didn’t bother them much.

A couple of days later, after I had sent document B, I got another call from EDP. An agent told me someone needed to come by to do a (paid) service, due to the lack of documentation. I explained the same thing. She told me that actually I needed to send document C as well. I wasn’t aware and said it would be appreciated if they could ask for all documentation at once. “I’m not responsible for what others told you. We are in different teams. I’ve got to do what I’ve got to do”.

I was out, and asked her if she could send me an email explaining in layman’s terms what document C was – as her language was too technical on the phone. The agent replied that she could not send me an email. I asked why. “Not my department”, she said.

I asked her if she could please send an internal note asking for the relevant department to send me an email. “I can give you their phone number, and you can call to ask them to send you an email”. I was flabbergasted. Asked her if she thought that made sense. “It’s not my responsibility, I just do what I’m told”, she said.

At this point I started telling the agent that, from a customer experience point of view, this wasn’t good, and that… she interrupted me “I’m sorry sir, that is not relevant. Consider yourself warned, on this call, that we called you to ask for document C. Is there anything else I can do for you today?”

“That’s all”, I said, “I don’t think you can help me with anything else. At least not today. Maybe one day”.

Guest Post, by Stephanie Thum, CCXP

Back Up Your CX Leadership Strategies with Data from These Five Academic Studies

“Expert” opinions from blogs, books, and podcasts are great. But there is just something about being able to build on what you believe to be true by leaning on data from published academic research. In the customer experience profession, we now have the benefit of hundreds of published academic studies that apply to our world. I recently began integrating more scholarly readings on customer experience into my work. Here are my top five favorites from recent months. Side benefit: they are all free to access! See if you think they might be worth putting on your reading list.

  1. A Phenomenological Study of Customer Disvalue

If good experiences equal customer value, then how do we start to understand negative customer experiences and the reverse phenomenon—customer disvalue? Disvalue is a separate, deeper phenomenon than customer dissatisfaction. Disvalue is about the lasting impressions customers have of doing business with a company that just really lets them down. This study describes disvalue phenomena and hints at how customers might deal with the situation, including protests, revenge, and telling others about their experience. (Free.)

Source: The Iranian Journal of Management Studies, Volume 13, No. 3, Summer 2020, pages 367-390.

  2. Narcissism, interactivity, community, and online revenge behavior: The moderating role of social presence among Jordanian consumers

This study is a good companion read to the study on customer disvalue mentioned above. Researchers found that customers’ personal levels of narcissism and their social media participation and presence increased their intentions and desire to enact revenge on a brand after a bad experience. The key takeaway: customers can get pretty cranky no matter how hard you try. Companies need to be aware of and prepared for the pitfalls of perceived poor customer experiences. (Free.)

Source: Computers in Human Behavior, Volume 104, March 2020.

  3. Virtual team leader communication: Employee perception and organizational reality

In this study, virtual teams that thought their leaders were excellent communicators believed their teams were performing well. But when researchers compared the teams’ subjective perceptions of their performance to objective data on an organizational balanced scorecard, virtual teams were not performing as well as they thought. The takeaway: leaders need to use great tools and speak with clarity regarding performance to avoid being misinterpreted. (Free.)

Source: International Journal of Business Communication, Volume 57, No. 4, October 2020, 452-473.

  4. Design for Service Inclusion: Creating Inclusive Service Systems by 2050

Despite recent efforts by the business world to be inclusive of customers with disabilities, exclusion and discrimination are still problems. Service exclusion exists in cultures where employees treat vulnerable customers in discriminatory ways, instead of with empathy or proactive service. This article explains the ins and outs of service exclusion. It calls attention to barriers to change, like the reality that the cost of lawsuits is still oftentimes less than the cost of change. It offers success strategies for improving experiences for all customers. (Free.)

Source: Journal of Service Management, Volume 29, No. 5, July 2018, pages 834-858

  5. Do you respond sincerely? How sellers’ responses to online reviews affect customer relationship and repurchase intention

Customers get annoyed when you respond to their online reviews with shameless self-promotion and a plug for your next sale. To them, it comes across as indulging your own self-interest, rather than accepting their feedback. This study found that when that happens, relationship quality and customer repurchase intentions decrease. Read this to understand how you should respond so that customers feel valued and ready to buy again. (Free.)

Source: Frontiers in Business Research in China, Volume 14, No. 1, December 2020, 367-390.

Hundreds of academic articles that apply to our work in customer experience leadership have been published in just the past 15 years. There is room for more, so keep watching for the best, most applicable studies that carry with them the rigours of peer-reviewed, scientific evaluation. Google Scholar is a great starting resource!

Feel free to comment with some of the most helpful academic resources you have written, contributed to, or found helpful.

Stephanie Thum is the Founding Principal at Practical CX, LLC, having served as one of the first agency-level heads of customer experience in the U.S. federal government. During that time Stephanie advised President Obama’s interagency task force on customer experience. She also served as Chief Advisor for Federal CX at Qualtrics, and CX Influencer for SAP.

Stephanie is also a founding member of the CXPA, where she helped build the Certified Customer Experience Professional (CCXP) certification process and the global customer experience professional community.

Follow Stephanie on LinkedIn, Instagram, Twitter.

4 principles for modern VoC collection

In this week’s The Modern Customer Podcast host Blake Morgan had an interesting chat with Tom Hale, the president of SurveyMonkey. One of the topics they discussed was around creating the perfect survey experience, and what metrics to use.

This is something I believe many companies struggle with, and CX professionals try to get right, amidst the various interests and requests of the different stakeholders and forces within their organisations.

From my point of view, when it comes to setting up a modern framework and system to collect voice-of-customer, there are 4 simple principles to follow:

  1. Customer-centric design – gather feedback when it most matters to the customer. Tom’s example in the podcast is a good one. He received a treadmill, and while struggling to assemble it he was already being peppered with surveys to gather feedback on the delivery – when the company should have been focusing on ensuring he was alright setting things up and making it work.
  2. X-data effortless collection – gather feedback in a way that makes it easier for the customer. Survey questions need to be simple to read, easy to understand, bring back the experience in question, and have answers that are easily associated with the customers’ judgement. Stephanie Thum‘s example in this tweet illustrates it well – the question was: “If needed, would you use this service in the future?“, response was “Very Satisfied” to “Very Dissatisfied“. Makes no sense, right?
  3. Embed in the experience – gather feedback where it’s more convenient to the customer. Wherever possible, but only if it doesn’t disrupt things (!), ask for feedback in or during the experience. Rather than diverting customers somewhere else straight after the experience, which can seem a hassle, or sending them an email / message a few hours or days later – by then, they may have already lost the excitement or memory of what happened.
  4. Focus on actionable insight – gather feedback that induces change and drives improvement. It’s important to collect a global indicator of the outcome of the experience, and whether it was effective (e.g. CSAT question). And it’s also very important to ask for the detail, and understand the “why” (e.g. open text question). But these are the customer’s perceptions, which you cannot change. Hence, it’s even more important to ensure you understand what impacted the customer’s perception: the things or areas for which you can identify owners within the company, and push for change (e.g. drivers question).

There are many other steps to follow, but I wanted to KISS you (keep it short and simple 😊). If you work on these 4 principles, you are setting yourself up with a good foundation for collecting Voice-of-Customer data in good quantity and quality.

Guest post, by Ben Motteram

Useful CX Metrics You May Not Be Using

There are at least two very good reasons to measure the experience you’re providing customers.

The first is best summarised by the father of management thinking, Peter Drucker, when he said “if you can’t measure it, you can’t manage it”. As CX Managers we need to understand the experience we are currently providing customers in order to transform it.

The second has to do with your CX strategy. Any strategy worth its salt will comprise three components:

  1. An understanding of where you are today,
  2. The desired future state, and
  3. A plan for how you’re going to get there. 

And metrics are crucial to building out your understanding of your current position.

If you Google “CX Metrics”, once you get through all the ads for feedback vendors, you’re going to quickly find that most people like to talk about Net Promoter Score (NPS), Customer Effort Score (CES) and Customer Satisfaction (CSAT). All three have their pros and cons, but when used as part of a system they’re all good.

But there are enough posts about them on the internet (hell, I’m guilty of even publishing one or two), so I’m going to look at a few metrics today that you’re probably not using.

Now, the metrics you use to measure your CX are going to differ depending on the type of business you are. An e-business that operates 100% online will need different metrics from a physical store, which will need different metrics from a national cable company with a call centre and technicians visiting customers in their homes daily.

So let’s look at each of those examples in turn:

E-business

In this scenario, customers order a product online for it to be delivered, so there are two aspects of the experience we can measure: the online experience and the delivery experience. Metrics I’d be looking at for each include:

The online experience

  • Webpage uptime – What was the percentage of time that customers could not access our website? When did those periods occur?
  • Bounce rate – What was the percentage of visitors who came to our site and then left rather than continuing to view other pages on the site?
  • Abandoned carts – What percentage of customers began shopping and then stopped mid-purchase? What page were they on when they stopped? If customers had logged in prior to abandoning their cart, what was the general demographic profile of customers who abandoned their carts?
  • Page load time – How long did each page take to load, leaving our customers waiting?
  • How many times was the FAQ guide accessed? This indicates customers weren’t able/didn’t know how to do something.
  • How many times was an online agent requested mid-purchase? Again, this indicates customers weren’t able/didn’t know how to do something.
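As a rough illustration (with hypothetical field names), metrics like bounce rate and cart abandonment can be computed from a simple session log:

```python
sessions = [  # hypothetical session log for an online store
    {"id": 1, "pages_viewed": 1, "cart_started": False, "purchased": False},
    {"id": 2, "pages_viewed": 5, "cart_started": True,  "purchased": True},
    {"id": 3, "pages_viewed": 3, "cart_started": True,  "purchased": False},
    {"id": 4, "pages_viewed": 1, "cart_started": False, "purchased": False},
]

# Bounce rate: visitors who left after viewing a single page
bounce_rate = sum(s["pages_viewed"] == 1 for s in sessions) / len(sessions)

# Cart abandonment: carts that were started but never purchased
carts = [s for s in sessions if s["cart_started"]]
abandonment_rate = sum(not s["purchased"] for s in carts) / len(carts)

print(f"Bounce rate: {bounce_rate:.0%}, cart abandonment: {abandonment_rate:.0%}")
```

In practice these events would come from your analytics platform rather than a hand-built list, but the ratios are computed the same way.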

The delivery experience

  • What was the average time between a customer ordering the product and receiving it for both metropolitan and regional areas?
  • What percentage of deliveries were made after our commitment date?
  • What percentage of returns on the first day were because of damage (which we will assume was caused by delivery)?
  • What percentage of customers were notified that the product was being delivered on the day?
  • How did customers rate the delivery person? How many complaints were received about delivery people?

Physical Store

Owners of a physical store are going to need a completely different set of metrics to measure their CX. I’ll break these down between the store itself and the service provided by employees within the store:

The store

  • The shopping experience begins before the customer enters your store. If they drove, how easy or hard was it for them to find a park?
  • Did we have the product(s) the customer was looking for?
  • How easy/hard were those products to find in our store?
  • How did customers rate the general appearance of the store for cleanliness/tidiness?

The customer service

  • How did customers rate the employee(s) they interacted with whilst on site for appearance, service, courtesy, knowledge, communication and professionalism?

National Cable Company

In this scenario, I’m using a cable company because I’ve worked at a few of them and know them well, but it could be any company with a contact centre that sends technicians to customers’ homes or businesses. The two aspects of the experience I’ll focus on here are the contact centre and the technician visit.

The contact centre

Every good contact centre will already be measuring things like FCR, AHT, ASA, and QA (boy there’s a lot of acronyms in the contact centre world!) so let’s look at some other metrics:

  • Average After Call Work Time – A subset of AHT, this is the average amount of time it takes to wrap up a case after the customer has disconnected.
  • Unplanned agent leave days – people not turning up to work when you’d planned for them to be there affects CX.
  • Agent Turnover Rate – What percentage of agents leave each year? Not only does this increase hiring and re-training costs, but less experienced agents can’t provide the same level of service to customers that an experienced, knowledgeable agent can.
  • Escalation Rate – What percentage of cases were escalated, both from self-service to a live agent and between different tiers of agents and managers? More agent-to-manager or cross-tier escalations may indicate expertise or confidence issues with the service agents, particularly among those with the highest rates. A high escalation rate from self-service to live agents could mean current self-service options are not effectively answering customer questions.
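As a sketch (with hypothetical case records), both escalation rates can be derived from the channel path each case travelled:

```python
cases = [  # hypothetical case records: the channel path each case travelled
    {"id": "C1", "path": ["self-service"]},
    {"id": "C2", "path": ["self-service", "agent"]},
    {"id": "C3", "path": ["agent"]},
    {"id": "C4", "path": ["agent", "manager"]},
]

# Self-service escalation rate: cases that started in self-service
# but ended up needing a live agent
started_self_service = [c for c in cases if c["path"][0] == "self-service"]
self_service_rate = (sum("agent" in c["path"] for c in started_self_service)
                     / len(started_self_service))

# Agent-to-manager escalation rate: agent-handled cases escalated further
agent_cases = [c for c in cases if "agent" in c["path"]]
agent_rate = sum("manager" in c["path"] for c in agent_cases) / len(agent_cases)
```

Slicing the same calculation per agent or per tier is what surfaces the outliers mentioned above.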

The technician visit

  • Right First Time – Did the technician do the job they were originally sent to do or was rework involved?
  • Call On Approach – Did the technician call the customer before arriving to let them know they were coming to ensure the customer would be there?
  • Appointment Window Met – Did the technician arrive within the specified appointment window?
  • How did customers rate the technician they interacted with whilst they were on site for appearance, service, courtesy, knowledge, communication and professionalism?

In all the cases mentioned above, I’d also be measuring complaints and the time it took to resolve them. Complaints are a key indicator that your customer experience has broken down, and Time To Resolve tells you how long it took to fix. As a CX Manager, your goal should, of course, be to get the first metric down to 0 and the second as low as possible.

So there you have it, some of the more uncommon metrics that can be used to measure customer experience. If you’re using any others I’d love to hear about them. Please add them in the comments section of this post.

“Measurement is the first step that leads to control and eventually to improvement. If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.” – Dr. H. James Harrington.

[Image courtesy of Patricia Serna on Unsplash]

Ben Motteram is a customer experience consultant with over 20 years’ experience in customer acquisition and retention. Through his company, CXpert, he helps companies become more human in the way they interact with customers and employees to increase loyalty, engagement, and ultimately profits. An avid golfer living in Melbourne, follow Ben on Twitter for insights on CX, customer service and employee engagement, or connect with him on LinkedIn.