My survey is being collected through one-on-one interviews, with an interviewer guiding participants through the questions. At the very beginning I have six simple questions to test the participant's capacity to provide consent (the participants are older adults above the age of 75). The participant has three tries to answer the entire series of questions correctly; otherwise they cannot be enrolled in the study and the interview ends.
Right now I have a matrix of those six questions and custom validation set up to provide an error message if the questions are not answered correctly. With this method the person recording the responses is responsible for tracking and limiting the number of times the participant is asked the series of questions. This is workable - but I wonder if I can, after three failed attempts, have skip logic to send the user to the end of the survey.
Any thoughts on this? If I wanted to do that, I think I would have to have a series of three of these matrices, instead of how I have it set up now - which relies on the interviewer to stop the entire process and say "thank you".
Grateful for your thoughts.
-Mahlon
Hi @MKStewart ,
I think you arrived at the solution near the end of your post: create three identical versions of your question matrix.
If respondents fail in answering Version 1 correctly, then Version 2 of the questions will appear (but the respondent will not be able to tell the difference, of course). If they fail V2, then Version 3 appears. If they fail V3, then Skip Logic ends the survey.
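The three-attempt flow can be sketched in plain Python (this is an illustrative model, not Qualtrics syntax; the answer key and the `get_answers` callback are hypothetical placeholders for the survey's actual questions and responses):

```python
# Illustrative sketch of the "three identical matrices" pattern:
# show the same six-question screen up to three times, then stop.

CORRECT = ["yes", "no", "yes", "no", "yes", "no"]  # hypothetical answer key
MAX_ATTEMPTS = 3

def run_consent_screen(get_answers):
    """Present up to MAX_ATTEMPTS identical question matrices.

    get_answers(attempt) returns the participant's six responses for
    that attempt. Returns True as soon as one attempt matches the key.
    """
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if get_answers(attempt) == CORRECT:
            return True   # passed: continue to the main survey
    return False          # failed all three: Skip Logic ends the survey

# Example: participant fails the first matrix, passes the second
responses = {1: ["no"] * 6, 2: CORRECT, 3: ["no"] * 6}
print(run_consent_screen(lambda a: responses[a]))  # True
```

In Qualtrics terms, each loop iteration corresponds to one copy of the matrix with Display Logic keyed to the previous copy's failure, and the final `False` branch is the Skip Logic that sends the respondent to the end of the survey.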
Hi @MatthewM
Thank you! I guess I'm starting to get the hang of this. And I appreciate your taking the time to answer.
Hi @MKStewart ,
The approach suggested by @MatthewM should work just fine, but I thought I'd suggest an alternative, as I recently implemented one.
In my case participants have up to 2 attempts at Test 1 and only if they pass it do they continue to Test 2. They can take Test 1 at any point in time.
For that I use an authenticator (each participant has a unique code, though this can also be done with personalized links). Every time a participant finishes Test 1, a contact list trigger increments an embedded data variable attached to their record in the contact list. If that variable reaches 3, they are rejected at the authenticator and cannot re-enter the survey.
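A minimal sketch of that counter pattern in plain Python (again illustrative, not Qualtrics syntax; the contact record, participant code, and function names are all hypothetical):

```python
# Illustrative sketch of the attempt-counter pattern: a per-person
# counter in the contact list is bumped each time the participant
# finishes Test 1; at 3, the authenticator rejects them.

contact_list = {"P001": {"attempts": 0}}  # hypothetical contact record
MAX_ATTEMPTS = 3

def authenticate(code):
    """Authenticator analogue: block entry once the counter hits the limit."""
    return contact_list[code]["attempts"] < MAX_ATTEMPTS

def on_test1_finished(code):
    """Contact-list-trigger analogue: increment the counter after Test 1."""
    contact_list[code]["attempts"] += 1

# Participant completes Test 1 three times, then is locked out
for _ in range(3):
    if authenticate("P001"):
        on_test1_finished("P001")
print(authenticate("P001"))  # False after three completed attempts
```

The key design point is that the counter lives on the contact record rather than in the survey session, so the limit holds even if the participant returns later through a fresh session.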
Hope that helps.
Nikolay