How to randomize questions without replacement across multiple survey responses
I have a survey that I need participants to take every day for a month. They have to rate over 1,000 stimuli, which are each embedded in different blocks. I want each subject to rate a random subset of 30 stimuli each day, but I do not want them to rate stimuli they've rated on previous days. I also want each participant to rate the stimuli in a unique order.



Each subject has a unique study ID, so I was thinking there might be a way to use embedded data/piped text to send them to the set of stimuli they have not yet rated based on their subject ID. I'm pretty lost on where to begin, though.
Hi there! It sounds like a huge pain to set up because of the quantities involved, but this could be done with native functionality.



You basically need to design the survey to fulfill three requirements:

1. Record when a stimulus has been rated

2. Export the records of which stimuli have been rated and then import them again

3. Check if a stimulus has been rated before presenting it



For number one:

To record that a stimulus has been rated, add a branch after each block in the Survey Flow and use it to set an embedded data field marking that specific stimulus as complete. For example: if Q1 is answer1 or answer2 or answer3 (etc.), then set Stim1 = Yes. Repeat for each stimulus.

Example: [screenshot]
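If it helps to see what each of those branches is doing, here is a rough Python sketch of the logic (this is not Qualtrics code, just the idea; the question and field names are the placeholders from the example above):

```python
# Purely illustrative: what one Survey Flow branch does, expressed in Python.
# Q1, Stim1, and the answer labels are placeholders from the example above.

responses = {"Q1": "answer2"}      # the participant's answer to this stimulus's question
embedded_data = {"Stim1": ""}      # embedded data field declared in the Survey Flow

# Branch: "If Q1 is answer1 or answer2 or answer3 (etc.), then Stim1 = Yes"
if responses.get("Q1") in {"answer1", "answer2", "answer3"}:
    embedded_data["Stim1"] = "Yes"  # this stimulus is now marked as rated
```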





For number two:

To save this at the contact level, use a contact list trigger to export all of the embedded data fields you created to mark stimuli as rated. Then, at the beginning of your Survey Flow, declare all of the Stimulus embedded data fields and leave their values blank so that they are imported from the panel or contact list. You will need to use either individual links or authentication so that this information can be pulled in from your contact list.

Example: [screenshot]
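Again just as a conceptual sketch (a plain Python dict standing in for the contact-list record that individual links or authentication match the participant to), the round trip looks something like this:

```python
# Conceptual sketch only: a dict stands in for the participant's contact-list record.

contact_record = {"Stim1": "Yes", "Stim2": ""}   # what the contact list trigger saved last time

# Start of the Survey Flow: the embedded data fields are declared with blank values,
# so Qualtrics fills them in from the contact list instead of overwriting them.
embedded_data = {field: contact_record.get(field, "") for field in ("Stim1", "Stim2")}

# ...during the survey, the branches from step one mark newly rated stimuli...
embedded_data["Stim2"] = "Yes"

# End of the survey: the contact list trigger writes the updated fields back to the
# contact, ready to be imported again on the next day's response.
contact_record.update(embedded_data)
```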





For number three:

To check whether a stimulus has previously been rated, you'll need to nest branch logic inside your Randomizer. Use your embedded data fields with a "Does Not Contain" condition (e.g., Stim1 Does Not Contain Yes).



Example: [screenshot]
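Conceptually, the Randomizer plus the nested branches are doing something like this (a Python sketch only, using the 30-per-day quota from your setup):

```python
# Python sketch of what the Randomizer + nested branches accomplish; not Qualtrics code.
import random

# Embedded data imported from the contact list: "" = not yet rated, "Yes" = rated.
embedded_data = {f"Stim{i}": "" for i in range(1, 1001)}
embedded_data["Stim7"] = "Yes"   # e.g., rated on a previous day

# Each nested branch ("StimN Does Not Contain Yes") filters out already-rated stimuli...
unrated = [stim for stim, flag in embedded_data.items() if flag != "Yes"]

# ...and the Randomizer then presents a random subset of the remaining blocks
# (30 per day here), in a different order for each participant.
todays_subset = random.sample(unrated, k=min(30, len(unrated)))
```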



Then you should be all set! Holler if you have more questions.
Thanks! This is exactly what I needed, and I think I've successfully implemented all of these steps. I tested it with 4 stimuli, setting the randomizer to choose two each time, and after two survey attempts there were no more stimuli to be rated! For the contact list trigger, I just used a dummy variable: whether or not a particular question was displayed, knowing that it will be displayed every time the user takes the survey.



It would be best if my subjects' email addresses were not stored, just for data anonymization, but I see that when you create a new contact (which I'm only using for authentication purposes), an email address is required. Is there any harm in putting in a fake email address (or my own)?



Final question: I am sure the process you outlined will work for my purposes, but as far as I can tell, it will require me to perform each step (branch logic/embedded data, branches within the randomizer, and triggers for each stimulus) over 1,000 times, since I have over 1,000 stimuli. Is it possible to automate any of these steps?



> @kar61 said:

> It would be best if my subjects' email addresses were not stored, just for data anonymization, but I see that when you create a new contact (which I'm only using for authentication purposes), an email address is required. Is there any harm in putting in a fake email address (or my own)?



Unfortunately, yes. If you use the same email address, the answers will all be assigned to you, which means the system won't correctly track who has seen which stimuli. If it's not a problem with your IRB, I would recommend deleting the email addresses from the data after the research is complete but before the analysis.



> @kar61 said:

> Final question: I am sure the process you outlined will work for my purposes, but as far as I can tell, it will require me to perform each step (branch logic/embedded data, branches within the randomizer, and triggers for each stimulus) over 1,000 times, since I have over 1,000 stimuli. Is it possible to automate any of these steps?



Agreed, that's why I said it would be a lot of work. It's a ridiculous amount of setup, for sure! There's no way to natively automate these steps. You could potentially look at a custom code solution, but I don't know JS, so I'm afraid I can't help you there.
Thanks again! I figured out a way to automate some of this, but I would've been clueless without your help!

kar61, could you please share how this can be done with many alternatives (like your 1000 stimuli)? I'm having a similar issue right now and it would be great to see your solution! Thanks!


Hi yinbar,
In principle, doing it for many alternatives is the same as doing it for one. Did you get as far as being able to get all the steps outlined by JenCX to work for a few?

My solution was to download the .qsf file and edit it to include all the necessary branching, contact list triggers, embedded data, etc. I automated the process by having Python import and edit the text of the .qsf file. It is a bit clunky, but it works.
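A minimal sketch of the idea (not my exact script; the JSON keys below, like "SurveyElements", Element "FL", and "Payload" -> "Flow", plus the shape of the embedded data entry, come from one .qsf export and may differ in yours, so copy a hand-made element from your own file and reuse its exact structure):

```python
# Rough sketch: programmatically add one embedded data declaration per stimulus to the
# Survey Flow inside a .qsf export. Key names and element structure are assumptions
# based on one export; verify against your own file before using.
import json

with open("survey.qsf", encoding="utf-8") as f:
    qsf = json.load(f)

# Locate the Survey Flow element.
flow_element = next(e for e in qsf["SurveyElements"] if e.get("Element") == "FL")
flow = flow_element["Payload"]["Flow"]

# Append one embedded data declaration per stimulus, left blank so it imports
# from the contact list as described earlier in the thread.
for i in range(1, 1001):
    flow.append({
        "Type": "EmbeddedData",
        "FlowID": f"FL_stim_{i}",   # must not collide with FlowIDs already in the file
        "EmbeddedData": [{
            "Description": f"Stim{i}",
            "Type": "Custom",
            "Field": f"Stim{i}",
            "Value": "",
        }],
    })

with open("survey_edited.qsf", "w", encoding="utf-8") as f:
    json.dump(qsf, f)
```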


Thanks kar61
I actually took a slightly different approach and used a web service to do all the randomization logic, following the advice I got here:
https://www.qualtrics.com/community/discussion/comment/23979#Comment_23979

