Hello Qualtrics Community! Thank you in advance for contributing your suggestions.
I'm looking for best practices around, and ideas for, questions to ask when creating a survey for our product support team (which others may call technical support or customer support). This survey would be distributed to a client after someone from the customer/product support team closes out a case with them.
In particular I'm looking to ask a question that would be roughly phrased:
* Was your issue resolved to your satisfaction? (Y/N)
It would be helpful to know best practice around whether this question is usually phrased as Y/N vs. open-ended. I'm also unsure about asking whether the issue was resolved to their satisfaction, as opposed to just "Was your issue resolved?" or "Was your issue solved completely?". Lastly, placement of this question is another concern: is it best to place it first in the survey, or between the already-existing CES question and the CSAT question?
This may be an unusual ask based on the questions I've seen in the community, but any suggestions would be greatly appreciated!
Ask it as a single-select question, like below:
Was your issue resolved to your satisfaction? Yes / No.
If they say No, ask an open-ended follow-up such as "What can we do to improve your satisfaction?"
Lastly, you can ask "Was your issue solved completely?" with options Yes / It is still in process / No.
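In a survey tool this branching is set up with display/skip logic rather than code, but here is a rough Python sketch of the flow described above, just to make the logic explicit (question wording and field names are illustrative, not from any particular platform):

```python
# Minimal sketch of the branch logic: open-ended probe only for "No",
# then a completion-status question for everyone. Illustrative only.

def build_follow_up_questions(resolved_to_satisfaction: str) -> list[str]:
    """Return the follow-up questions to show, given the Yes/No answer."""
    questions = []
    if resolved_to_satisfaction == "No":
        # Only dissatisfied respondents see the open-ended probe.
        questions.append("What can we do to improve your satisfaction?")
    # Everyone gets the completion-status question at the end.
    questions.append(
        "Was your issue solved completely? (Yes / It is still in process / No)"
    )
    return questions

if __name__ == "__main__":
    print(build_follow_up_questions("No"))
    print(build_follow_up_questions("Yes"))
```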
Consider your industry: who are your peer companies or competitors? Try to find relevant benchmarks from them first. That will guide how you want to structure your questions.
If you are working off an entirely different scale than them, you might be able to see improvement over time, but you won't know "is this good compared to my competitors? Where do I stand in the world of X Product?".
I also would remove "satisfaction". I think satisfaction and inquiry resolution should be measured separately. When you add too many measures to one question you might start to get confused: "well, are they not _happy_, or did we not _solve the question_?". The two will correlate, but keeping them separate helps.
I would also agree with @bansalpeeyush29: most I've seen do the Y/N. But I think it's always smart to have a follow-up for those who say "no". I have also implemented a system whereby those customers can request follow-up from a manager. We have the survey response automatically sent in an email trigger to the manager if they say "yes" to the manager follow-up. We've been able to save a few transactions that way, and we get more information from our customers on how to improve!
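(In Qualtrics this kind of closed-loop alert is configured as an email task/trigger on the survey rather than written by hand; purely to illustrate the logic, here is a minimal Python sketch that assumes each response arrives as a dict with hypothetical field names like case_id, manager_follow_up, and comments.)

```python
import smtplib
from email.message import EmailMessage

def notify_manager(response: dict, manager_email: str, smtp_host: str = "localhost") -> None:
    """Email the manager when the respondent asked for a follow-up.

    The response dict shape and field names are hypothetical; in practice they
    would come from your survey platform's export or webhook.
    """
    if response.get("manager_follow_up") != "Yes":
        return  # no follow-up requested, nothing to do

    msg = EmailMessage()
    msg["Subject"] = f"Customer requested follow-up on case {response.get('case_id', 'unknown')}"
    msg["From"] = "support-surveys@example.com"  # placeholder address
    msg["To"] = manager_email
    msg.set_content(
        "A customer asked for a manager follow-up.\n\n"
        f"Case: {response.get('case_id')}\n"
        f"Issue resolved to satisfaction: {response.get('resolved')}\n"
        f"Comments: {response.get('comments', '(none)')}\n"
    )

    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```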
EDIT: I think your question about placement is a really good one, too, and I think it 100% depends on what your KPIs are. Is this the key measure you want to know? Then it can go first. Is satisfaction the KPI? Put that first. Everything should build around the one thing you _really_ want to know.
I like the answers above. I have another idea, which is based on a fruitful discussion I recently had with a PhD in artificial intelligence. The idea is to ask an open-ended question, e.g. "What do you think about *the last case*?" (anything which describes *the last case* might increase the respondent's willingness to participate). The plain question doesn't lead the respondent in either direction, satisfied or not. Thus, you will receive the "real" reasons/drivers for satisfaction and dissatisfaction, especially when using artificial intelligence to analyse your answers.
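As one small illustration of that last point, the sketch below scores exported open-ended comments with NLTK's VADER sentiment analyzer; it's only a stand-in for whatever text-analytics tool you actually use (Text iQ, a topic model, an LLM, etc.), and the sample comments are made up.

```python
# Minimal sketch: automatically score open-ended answers exported from the survey.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

comments = [
    "The agent fixed my login problem in five minutes, great experience.",
    "Still waiting on a callback, the issue is not actually resolved.",
]

sia = SentimentIntensityAnalyzer()
for text in comments:
    score = sia.polarity_scores(text)["compound"]  # -1 (negative) .. +1 (positive)
    label = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
    print(f"{label:8} {score:+.2f}  {text}")
```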
If you are looking for answers which are spontaneous, intuitive, or unbiased, the question should be at the beginning of the questionnaire. Otherwise, you might get redundant answers which just reflect the CSAT part.
Good luck!
On that note, @Pat, I think it's a great idea for survey creators to give the respondent as much data about the transaction as possible: "On THIS DATE you were helped via PHONE/EMAIL/CHAT by AGENT. Did we resolve your inquiry?" The more we can do to jog the customer's memory, the better our data is!
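In Qualtrics that transaction context is usually loaded as embedded data on the contact list and piped into the question text; as a rough, tool-agnostic sketch, assuming hypothetical CRM records like the ones below:

```python
from datetime import date

# Hypothetical case records as they might come out of a CRM export.
cases = [
    {"email": "jane@example.com", "date": date(2024, 3, 4), "channel": "PHONE", "agent": "Alex"},
    {"email": "sam@example.com", "date": date(2024, 3, 5), "channel": "CHAT", "agent": "Priya"},
]

TEMPLATE = (
    "On {date:%B %d} you were helped via {channel} by {agent}. "
    "Did we resolve your inquiry?"
)

for case in cases:
    # In a real survey tool these fields would be piped into the question text
    # (e.g. embedded data in Qualtrics) rather than baked into the invitation.
    print(case["email"], "->", TEMPLATE.format(**case))
```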