Follow up question: How are we thinking about using Gen AI to overcome known survey bias / plugging sample gaps?
Curious if experts like @AdamK12, @bstrahin, @JessicaGregory_CHNw, @ArunDubey or @Deepak have thoughts to share on this poll or follow up question? 🧑💻
There is a lot of known survey bias, and I would love to see AI being used to overcome this. One question that I have is how we can train AI to look for and recognize bias, which is an inherently human condition. That being said, addressing those survey gaps is going to become increasingly important over the years!
https://www.forbes.com/sites/glenngow/2022/07/17/how-to-use-ai-to-eliminate-bias/?sh=4199523e1f1f
To eliminate bias, you must first make sure that the data you're using to train the algorithm is itself free of bias or that the algorithm can recognize bias in that data and bring the bias to a human's attention (source: SAP).
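As a purely illustrative sketch of what “bring the bias to a human’s attention” could look like in practice - the column names, reference shares, and threshold below are all assumptions on my part, not anything from the article or from Qualtrics:

```python
# Compare the demographic mix of a response/training dataset against a reference
# population and flag large skews for a human to review (illustrative only).
from collections import Counter

def flag_representation_gaps(records, field, reference_shares, threshold=0.05):
    """Return groups whose share of `records` differs from the reference
    population share by more than `threshold` (e.g. 5 percentage points)."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    flags = []
    for group, ref_share in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if abs(observed - ref_share) > threshold:
            flags.append((group, round(observed, 3), ref_share))
    return flags  # hand these to a human reviewer rather than auto-correcting

# e.g. respondents skew heavily toward one age band compared with census shares
responses = ([{"age_band": "18-34"}] * 70
             + [{"age_band": "35-54"}] * 20
             + [{"age_band": "55+"}] * 10)
census = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
print(flag_representation_gaps(responses, "age_band", census))
# -> [('18-34', 0.7, 0.3), ('35-54', 0.2, 0.35), ('55+', 0.1, 0.35)]
```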
I don’t personally have any examples of this from my work, but I am quite keen to see how Qualtrics uses AI in the future around this topic.
I personally think avoiding bias can be achieved by making sure your distribution channels are themselves not unintentionally inviting bias. This may be what is being hinted at in the original question. But making sure that you are segmenting your audience personas properly and leveraging channels to reach those segments is key.
This is something AI might be able to suggest or prompt survey designers on currently! Maybe there is a product idea in there for Qualtrics (building these AI prompts into the product). Just my two cents!
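To illustrate the kind of prompt I’m imagining (entirely hypothetical - the segment names, channels, and rule are made up, and nothing like this exists in the product as far as I know):

```python
# Before launch, compare target audience segments against the channels the
# designer has mapped, and nudge them about segments with thin or no coverage.
def distribution_prompts(target_segments, channel_plan):
    """channel_plan maps each segment to the channels it will be reached through."""
    prompts = []
    for segment in target_segments:
        channels = channel_plan.get(segment, [])
        if not channels:
            prompts.append(f"'{segment}' has no distribution channel - this segment will be missing entirely.")
        elif len(channels) == 1:
            prompts.append(f"'{segment}' is reached only via {channels[0]} - consider a second channel to reduce channel bias.")
    return prompts

plan = {"digital-first customers": ["in_app", "email"], "branch-only customers": ["email"]}
segments = ["digital-first customers", "branch-only customers", "phone-only customers"]
for p in distribution_prompts(segments, plan):
    print(p)
```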
Using a broad range of survey distribution methods is the best way to go. I don’t think there is any way to totally remove survey bias, but making sure you’re funneling your surveys through a wide variety of channels to reach the most diverse audience possible is best.
Working in financial services, we are seeing a narrowing of our survey distribution channels rather than an increase, given both organisational choices and respondents’ own preferences and concerns for security.
For reducing bias, though, our focus is very much on our survey triggers and ensuring we capture feedback at different stages of journeys, i.e. not only capturing successful journeys. This often includes looking at our internal data points and considering balancing our survey data with other passive listening (non-survey measures).
I also find organisational culture can drive bias across any survey program, and so fostering a culture that is curious and open to both good and bad results is key to reducing bias through surveying.
Less around distribution: I believe AI detection of issues in the quality of feedback received, and of where feedback/results may not be representative against a given range of parameters, would be highly useful.
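For illustration, the sort of checks I have in mind might look something like this (the field names and thresholds are purely hypothetical, not our actual setup or an existing Qualtrics capability):

```python
# Flag common signs of low-quality survey responses for human review:
# straight-lining, speeding, and empty open-ended answers.
def quality_flags(response, min_seconds=30, min_verbatim_chars=5):
    flags = []
    ratings = response.get("ratings", [])
    if len(ratings) >= 5 and len(set(ratings)) == 1:
        flags.append("straight_lining")   # identical answer to every scale question
    if response.get("duration_seconds", 0) < min_seconds:
        flags.append("speeder")           # completed implausibly fast
    verbatim = (response.get("open_text") or "").strip()
    if len(verbatim) < min_verbatim_chars:
        flags.append("thin_verbatim")     # little or no open-ended feedback
    return flags

sample = {"ratings": [5, 5, 5, 5, 5, 5], "duration_seconds": 18, "open_text": "ok"}
print(quality_flags(sample))  # ['straight_lining', 'speeder', 'thin_verbatim']
```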
@ScottG thank you so much for the detailed reply! I personally would not have thought that narrowing distribution channels would help in this regard, but I fully see your logic and reasoning!
Regarding AI detection, what would be something that should exist in product? E.g., inside the Qualtrics survey platform? Or are you currently using another tool to accomplish this?
Thanks @Michael_Cooksey - we don’t have a tool that is able to proactively review and monitor the quality of our feedback, which is something it would be great to see Qualtrics introduce with its investment in AI capabilities.
At present this review and governance is manual, outside of the platform; the benefit of having it integrated would be keeping it consistent and less subject to the judgement of any one individual. As tools such as Qualtrics get easier to use and decentralise insight generation, I believe there is a need for stronger programmatic guidance/governance to help maintain the quality of insight. Just as Qualtrics helps assess the quality of a questionnaire, it could also help assess the quality of the output.
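To illustrate the sort of programmatic gate I mean (completely hypothetical thresholds and field names, not anything we run today):

```python
# Simple governance checks a platform could apply before results are shared widely.
def governance_checks(report, min_base=100, min_response_rate=0.10):
    issues = []
    if report["completes"] < min_base:
        issues.append(f"base size {report['completes']} is below the minimum of {min_base}")
    if report["completes"] / report["invites"] < min_response_rate:
        issues.append("response rate is below threshold - non-response bias risk")
    return issues

report = {"completes": 84, "invites": 1200}
for issue in governance_checks(report):
    print("Review before publishing:", issue)
```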
Thanks for the feedback! Much appreciated. I will make sure it’s passed along internally.
I’m just seeing this now and it’s a fascinating topic, especially with hindsight over the past year. I think there are a number of factors that drive skepticism of survey data, including:
- Overuse of surveys to distill insight that could be gathered through digital analytics, observation, or other less intrusive means
- Lack of public understanding of statistical methods and practices, including mainstream media’s inability to correctly assess statistical information and poll/survey results
- Proliferation of surveys that may or may not have statistical rigor
It’s no doubt important for CX practitioners, as stewards of surveys, to use all the tools available to improve distribution and sampling so that results are as credible as possible. However, given that our audiences, in many cases, don’t have much of a background in statistics, we need to explain our results in terms that they understand. For example:
- Shy away from using terms like “statistically significant” or “standard error.” Instead, explain terms clearly. Instead of “statistically significant,” say “these are real differences, and not just based on the sample that we drew or talked to.” Instead of “standard error,” say “the actual results could vary as much as three percent greater or less than what these data say.” (There’s a quick worked example of that margin-of-error framing after this list.)
- Don’t overstate results. If there was a statistically significant difference of 1 or 2 percent, is that one or two percent a “big” difference or not?
- Relate results to the lived experience or the professional experience of your audience. Be concrete, not abstract.
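To make the margin-of-error framing concrete, here’s a quick sketch using the standard approximation for a sample proportion - the numbers are made up for illustration, not from any real survey:

```python
# Approximate 95% margin of error for a sample proportion.
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion p observed across n responses."""
    return z * math.sqrt(p * (1 - p) / n)

# e.g. 62% favourable from 1,000 responses -> roughly +/- 3 percentage points
print(round(margin_of_error(0.62, 1000) * 100, 1))  # ~3.0
```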
I’d love to pick up the conversation again and see where it goes!
Some great thoughts added to the convo @AdamK12 - and I agree with them all!
Whilst I have a bit of a stats background (not as much as some of my peers), I’ve always seen it as a bit of a double-edged sword. People need to be able to trust the data and understand it, but having also worked for companies where it’s sometimes encouraged to keep splicing the data to tell pre-determined stories, I think there is always a need for some healthy scepticism too (not just from us, but from our stakeholders too).
All of us should be kept honest, and we should welcome healthy curiosity to keep exploring the data. On our side, we capture any challenges or questions from our senior stakeholders in a hypothesis tree and do further analysis to prove or disprove them - it’s gone a long way to building trust and credibility. It also shows that our analysis is never complete - we can always keep looking and find new lenses to apply.
I love the practical advice on the language to use; I think those are top tips and really easy to put into practice too (I’m going to steal the standard error reframe).