So I am the Qualtrics survey administrator for my company, and lately I have seen some instances where I create a survey from scratch, get it all built out, get the logic flows in place where we need them, test it out on my end... and it all works as I built it. And they take a while to build, as some of our surveys have upwards of 200 questions because the survey is built to capture a verbatim response to virtually every response type there is to a question, creating hundreds of logic flow pathways. But I get it working as designed when I test...
But when others go to test it... on some questions they DON'T get the desired responses or settings. Some questions that are marked "required" don't force a reply, some questions marked "allow multiple" only take a single answer, and many more instances.
This is giving my managers and higher-ups the impression that I'm not testing things on my end, when in fact I am spending hours or days testing! Has anyone else experienced this happening to them?
Never faced such an error. But you can generate test responses to check whether all mandatory questions were answered.
Is that a setting or something within Qualtrics that does that automatically? Or is it just that you have to test it and then go see some sort of report?
Hi,
You can generate test responses from the Tools menu in the survey builder.
Then you can download the data from the Data & Analysis tab and confirm whether all the validations were applied.
Hope this resolves your query!
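If you'd rather not eyeball the download, here's a rough sketch in Python that scans an exported CSV for blank answers in columns you expect to be required. The file name and the REQUIRED_COLUMNS entries are placeholders; swap in the actual column headers from your export.

```python
import csv

# Placeholders: replace with your survey's actual
# required-question columns from the exported CSV.
REQUIRED_COLUMNS = ["Q1", "Q2", "Q5"]

def find_missing_required(csv_path: str) -> list[tuple[int, str]]:
    """Return (row number, column) pairs where a required answer is blank."""
    problems = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        # Qualtrics CSV exports typically carry two extra metadata rows
        # (question text and ImportId) after the header; skip them.
        rows = list(reader)[2:]
        for i, row in enumerate(rows, start=1):
            for col in REQUIRED_COLUMNS:
                if not (row.get(col) or "").strip():
                    problems.append((i, col))
    return problems

if __name__ == "__main__":
    for row_num, col in find_missing_required("test_responses.csv"):
        print(f"Response {row_num}: required question {col} is blank")
```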
Have you published the survey before they test? It’s possible in preview mode to select the option to test the last published version only. So you might have made changes, but if they’ve selected to test the published version, then they might not be seeing the latest version (if you haven’t published it).
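If you want to double-check which version is actually published, the Qualtrics v3 API can list a survey's versions. A minimal sketch, assuming the survey-definitions versions endpoint is available on your license; the datacenter, token, and survey ID are placeholders, and since field names in the response may vary, it just prints the raw entries for inspection:

```python
import requests

# Placeholders: use your own datacenter, API token, and survey ID.
DATACENTER = "yul1"
API_TOKEN = "your-api-token"
SURVEY_ID = "SV_xxxxxxxxxxxxxxx"

# List the survey's saved/published versions so you can see
# whether your latest edits were actually published.
url = f"https://{DATACENTER}.qualtrics.com/API/v3/survey-definitions/{SURVEY_ID}/versions"
resp = requests.get(url, headers={"X-API-TOKEN": API_TOKEN})
resp.raise_for_status()

for version in resp.json()["result"]["elements"]:
    print(version)
```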
As I am testing, I publish the corrections as I find them so that I don't forget. I also don't just test in preview mode; I send myself the actual survey link to make sure it works as intended, and there still seem to be some things that don't save correctly or work as selected. One of my managers literally asked me "what is it you do during the day" because she found so many errors after I had saved the necessary "Required response" settings, among other issues.
The only other thing I can think of is that your internet connection is poor and you're losing the connection after making changes, so they're not saved - but I think you usually get a warning about leaving the page with unsaved changes. I can't think what else it could be.
Perhaps, for your own sanity, a couple of things you could do:
- when publishing, annotate what the version is and what has been changed.
- download a Word version of the survey so you can compare it to previous versions (or diff exported survey files; see the sketch below).
Good luck!
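On the comparison idea: a Qualtrics survey also exports as a QSF file, which is just JSON, so you can diff two exports to see whether a setting like Force Response really changed between edits. A rough sketch, assuming the usual QSF layout where question elements are tagged "SQ" and carry Validation settings in their Payload; the file names are placeholders:

```python
import json

def required_map(qsf_path: str) -> dict[str, str]:
    """Map question ID -> ForceResponse setting from a QSF export.

    Assumes the standard QSF layout: question elements have
    Element == "SQ" and a Payload with Validation settings.
    """
    with open(qsf_path, encoding="utf-8") as f:
        qsf = json.load(f)
    result = {}
    for element in qsf.get("SurveyElements", []):
        if element.get("Element") != "SQ":
            continue
        payload = element.get("Payload", {})
        settings = payload.get("Validation", {}).get("Settings", {})
        result[payload.get("QuestionID", "?")] = settings.get("ForceResponse", "OFF")
    return result

# Compare two exports, e.g. before and after a round of edits.
before = required_map("survey_v1.qsf")
after = required_map("survey_v2.qsf")
for qid in sorted(set(before) | set(after)):
    if before.get(qid) != after.get(qid):
        print(f"{qid}: ForceResponse {before.get(qid)} -> {after.get(qid)}")
```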
One of my managers suggested that I actually record myself going through and testing it using Zoom to capture everything, because right now it is a "he said/she said" scenario. It doesn't look like I tested it at all, but there isn't any way to verify whether I did or didn't test it.
She also asked what the point of running test responses on the required questions in Qualtrics would be, because if something is marked as required, the survey won't go forward unless it is answered anyway. And if something isn't marked as required, the test won't pick up on it anyway. lol
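To be fair, your manager has a point: generated test responses can't flag a question that was never marked required in the first place. What can catch that is auditing the survey definition itself against the list of questions you intended to be required. A rough sketch on an exported QSF file, again assuming the standard layout where "SQ" elements carry Validation settings in their Payload; INTENDED_REQUIRED and the file name are placeholders:

```python
import json

# Placeholder: the question IDs you intended to be required.
INTENDED_REQUIRED = {"QID1", "QID2", "QID5"}

with open("survey_export.qsf", encoding="utf-8") as f:  # placeholder file name
    qsf = json.load(f)

# Collect the questions that are actually marked Force Response.
actually_required = set()
for element in qsf.get("SurveyElements", []):
    if element.get("Element") != "SQ":
        continue
    payload = element.get("Payload", {})
    settings = payload.get("Validation", {}).get("Settings", {})
    if settings.get("ForceResponse") == "ON":
        actually_required.add(payload.get("QuestionID"))

for qid in sorted(INTENDED_REQUIRED - actually_required):
    print(f"{qid} was supposed to be required but isn't marked that way")
```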
One other thing to be mindful of is to check whether the testers are actually resuming older survey sessions rather than starting new ones. How are you sharing the link with them? Is it an Anonymous link with the "Allow respondents to finish later" setting enabled? If so, and if they are accessing the link from the same device/browser they used to test previously, they will be resuming a survey session that reflects an earlier survey version. This can be prevented by unselecting "Allow respondents to finish later" in the survey options and deleting any in-progress responses. From the support page:
“Qtip: If you’re testing your survey with the anonymous link and you don’t see your edits after publishing your survey, chances are the old version of your survey is cached on your browser. Try clearing your browser’s cache or opening the survey in a new browser.”
If that doesn’t work, I think you might want to have Qualtrics Support take a look at the survey.