That's a great question—thank you for asking!
My answer to your question would be:
- Don't aim for a common CX denominator, since it's not the same people selling/servicing the same clients; instead, focus on a common delta for the metric.
- Reduce the specificity of the common metric, for example by going from a 7-point to a 2-point scale. This helps alleviate some of the cultural/individual differences; the drawback is that it is less diagnostic.
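As a rough sketch of both ideas together (all data, thresholds, and names here are hypothetical, not from any real project): collapse the 7-point scale to a 2-point one, then compare each group's change against its own baseline rather than comparing absolute scores across groups.

```python
# Sketch: compare deltas against each group's own baseline rather than
# absolute scores, after collapsing a 7-point scale to a 2-point one.
# The top-2-box cutoff (>= 6) and all sample scores are hypothetical.

def collapse_7_to_2(score: int) -> int:
    """Map a 7-point rating to a 2-point one (top-2-box = positive)."""
    return 1 if score >= 6 else 0

def delta_vs_baseline(baseline: list[int], current: list[int]) -> float:
    """Change in the share of positive (top-2-box) responses."""
    def positive_share(scores: list[int]) -> float:
        collapsed = [collapse_7_to_2(s) for s in scores]
        return sum(collapsed) / len(collapsed)
    return positive_share(current) - positive_share(baseline)

# Two regions with different response cultures: their absolute levels
# differ, but each is measured only against its own earlier baseline.
region_a = delta_vs_baseline(baseline=[7, 6, 6, 5], current=[7, 7, 6, 6])
region_b = delta_vs_baseline(baseline=[5, 4, 6, 3], current=[6, 6, 6, 4])
```

The point of the collapse is that a generous culture's 7 and a reserved culture's 6 both land in the same "positive" bucket, so the delta reflects movement, not response style.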
An example of change measured by choosing the appropriate sample: In one of my consultancy projects, the firm was involved in several mergers and acquisitions. We implemented a program where, before taking over day-to-day operations or informing customers about any changes, we would take baseline measurements of NPS, CSAT, and a few other internal metrics. We then measured again about a month after the announcement, allowing us to assess the impact of the brand change. Subsequent measurements were taken at regular intervals over the following months, and after about 18 months, those customers were integrated into the regular pool. We fed these results back to the Customer Success team so they could compare current practices with those in place before the acquisition.
An example of change influenced by time: I worked on a project with a tax software firm that had poor NPS scores from April to June. However, the same customers gave much higher scores in October and November. This highlighted that while the experience of interacting with the software to file tax returns wasn't ideal, the day-to-day reminders and other integrations were working well. It was a clear signal to focus more on improving the UI/UX of the product.
In summary, whether it’s changes in culture, departments, or even the time of year, the true value of a metric lies not just in its absolute value, but in analyzing how it changes.
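That kind of time effect can be surfaced by averaging the same metric per period instead of overall. A minimal sketch, with entirely made-up survey records (month, 0-10 score):

```python
from collections import defaultdict

# Sketch: average the same metric by time period to expose seasonal
# effects. All records below are hypothetical illustration data.
responses = [
    ("2023-04", 4), ("2023-05", 5), ("2023-06", 4),
    ("2023-10", 9), ("2023-11", 8),
]

by_month: dict[str, list[int]] = defaultdict(list)
for month, score in responses:
    by_month[month].append(score)

averages = {month: sum(s) / len(s) for month, s in by_month.items()}
# A tax-season dip (April-June vs. October-November) shows up as a gap
# between per-period averages that the overall mean would hide.
```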
Thanks for thinking along, @ahmedA. One could even argue that we don't need to identify a metric that is devoid of cultural, religious, political, etc. influences to come up with a fundamental human score.
In the end, the most important thing is that we ask buyers about their experience, try to understand it in the context of the service/product sold, and come up with improvements that will benefit them.
This will not only iteratively improve our own ways of working and the quality of the product or service, but, crucially, improve the buyer's perception and trust, leading them to buy again or buy more, and ultimately to spread the word as advocates.
This is such a great question, and @SteveBelgraver and @ahmedA, I think you're on to something. On the survey design side, it makes sense to lean on open-ended questions -- how did you like or dislike it, what would you change, etc. -- and ask people to describe the experience in their own words. (That said, I might be showing some cultural bias there as well.) On the analytics side, you can then categorize responses by geography, language, or other attributes. How does that sound?