How to accommodate for cultural differences when measuring CX? | XM Community

I think it is fair to say the XM Community appreciates and values the myriad of cultures around the globe 🌍 A quick look at the top 1000 members reflects this diversity, with folks from Germany, Vietnam, the USA, India, Colombia, Belgium, Singapore and Denmark, to name just a few!

Three examples that illustrate how culture shapes the customer experience:

  1. NPS does not fare well when the same questions are used in different countries. A company with a +50 NPS in India may have far less loyal customers than a company with a -20 NPS in Japan.
  2. Or take how colours are perceived differently. In the Netherlands, orange is the national colour, associated with the Dutch Royal family, while Egypt, on the other hand, associates orange with mourning.
  3. Home Depot's venture into China is a costly example of not taking culture into account: it had to close all seven of its outlets when the clash between the Chinese ‘do it for me’ culture and the US ‘do it yourself’ culture became apparent.

Considering these entrenched differences, is it actually possible to identify and survey a common CX denominator across these different cultures?

That's a great question—thank you for asking!

My answer to your question would be:

  1. Don't aim for a common CX denominator, since it's not the same people selling to or servicing the same clients; instead, focus on a common delta for the metric.
  2. Reduce the specificity of the common metric, for example by going from a 7-point to a 2-point scale. This will help alleviate some of the cultural/individual differences. The drawback is that it is less diagnostic.
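The second point above can be sketched in a few lines. This is only an illustration, not part of any Qualtrics API: the function name and the top-2-box cut-off are assumptions, and in practice you would pick the cut-off to suit your own programme.

```python
# Hypothetical sketch: collapsing a 7-point rating to a 2-point one.
# The top-2-box cut-off (6 and above counts as favourable) is an assumption.

def collapse_to_two_point(score: int, favourable_from: int = 6) -> int:
    """Map a 1-7 rating to 1 (favourable) or 0 (unfavourable)."""
    if not 1 <= score <= 7:
        raise ValueError("score must be between 1 and 7")
    return 1 if score >= favourable_from else 0

# Illustrative sample of 7-point responses
ratings = [7, 5, 6, 2, 4]
collapsed = [collapse_to_two_point(r) for r in ratings]
print(collapsed)  # [1, 0, 1, 0, 0]
```

Collapsing this way throws away nuance (the drawback mentioned above), but the resulting favourable/unfavourable split is easier to compare across cultures that use rating scales differently.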

An example of change driven by choosing the appropriate sample: in one of my consultancy projects, the firm was involved in several mergers and acquisitions. We implemented a program where, before taking over day-to-day operations or informing customers about any changes, we would take baseline measurements of NPS, CSAT, and a few other internal metrics. We then measured again about a month after the announcement, allowing us to assess the impact of the brand change. Subsequent measurements were taken at regular intervals over the following months, and after about 18 months, those customers were integrated into the regular pool. We fed these results back to the Customer Success team so they could compare current practices with those in place before the acquisition.
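The baseline-then-remeasure approach can be sketched as follows. This is a minimal illustration, not the actual program described above: the sample responses and variable names are invented, and only the standard NPS formula (percentage of promoters, 9-10, minus percentage of detractors, 0-6) is taken as given.

```python
# Hypothetical sketch of tracking the NPS *delta* rather than the absolute score.
# Responses are on the standard 0-10 NPS scale; the sample data is illustrative.

def nps(scores: list[int]) -> float:
    """Standard NPS: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

baseline = [9, 10, 8, 6, 9, 7, 10, 5]       # before the acquisition announcement
one_month_later = [8, 9, 7, 6, 8, 6, 9, 4]  # ~1 month after the announcement

delta = nps(one_month_later) - nps(baseline)
print(f"baseline={nps(baseline):.1f}, follow-up={nps(one_month_later):.1f}, delta={delta:.1f}")
```

The delta (here negative) is what gets fed back to the team: comparing the same customer pool against its own baseline sidesteps the cross-cultural problem of comparing absolute scores.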

An example of change influenced by time. I worked on a project with a tax software firm that had poor NPS scores from April to June. However, the same customers gave much higher scores in October and November. This highlighted that while the experience of interacting with the software to file tax returns wasn’t ideal, the day-to-day reminders and other integrations were working well. It was a clear signal to focus more on improving the UI/UX of the product.

In summary, whether it’s changes in culture, departments, or even the time of year, the true value of a metric lies not just in its absolute value, but in analyzing how it changes.


Thanks for thinking along ​@ahmedA 👏

One could even argue that we do not need to identify a metric that is devoid of cultural, religious, political, etc. influences in order to come up with a fundamental human score.

In the end, the most important thing is that we ask customers about their experience, try to understand it in the context of the service or product sold, and come up with improvements that will benefit the buyer.

This will not only iteratively improve our own ways of working and the quality of the product or service, but, crucially, improve the buyer's perception and trust, leading them to buy again (or buy more) and ultimately spread the word as advocates.


This is such a great question, and ​@SteveBelgraver and ​@ahmedA I think you're on to something. On the survey design side, it makes sense to defer to open-ended questions -- how did you like or dislike it, what would you change, etc. -- and ask people to describe the experience in their own words. (That said, I might be showing some cultural bias there as well.) On the analytics side is where you can categorize responses by geography, language, or other attributes. How does that sound?

