Compare answers of two sliders as exclusion criteria

I am trying to test whether participants correctly judge the comparative difficulty of the two sentences in a sentence pair. I'm using two slider-type questions with values ranging from 1 to 100, where 1 has the label ‘Very easy’ and 100 has the label ‘Very hard’. The sentence pairs are presented directly underneath one another on the same page.

To make sure that participants understand what they have to do, I have five practice pairs, each consisting of an obviously complicated sentence and an obviously less complicated sentence to match it. I want to verify that the complicated sentences are indeed judged as more complicated than the uncomplicated ones for the majority of the practice block, so that participants who consistently make errors can be excluded (and redirected back to Prolific in order to return their submission).

My questions are therefore as follows:

  1. How would you have the branching structure compare the answer to one question with the answer to a different question presented at the same time?
  2. Is it even possible to have Qualtrics count the number of times the above condition is met across 5 different questions (i.e., paired sentence ratings), so that I can exclude participants only if they get X out of 5 wrong? (And if it is possible, how would I do it?)

Currently I have implemented a work-around where I fully type out the questions rather than using looping, so that I can use a rather complex if-logic branch covering all the possible combinations in which 2 out of 3 answers are correct. Doing this with more than 3 slider comparisons would be wildly impractical, though, and I have had to scale my practice/comprehension block down from 5 questions to 3 for that reason.
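For a sense of scale, here is a back-of-the-envelope sketch (the function name and thresholds are mine, purely for illustration) counting how many ‘fail’ answer patterns an explicit branch would have to spell out:

```python
from math import comb

def fail_patterns(n_questions: int, max_mistakes: int) -> int:
    """Count the answer patterns with more than max_mistakes errors,
    i.e. the combinations an explicit branch would have to enumerate."""
    return sum(comb(n_questions, k)
               for k in range(max_mistakes + 1, n_questions + 1))

print(fail_patterns(3, 1))   # 3 questions, 1 mistake allowed   -> 4 patterns
print(fail_patterns(5, 2))   # 5 questions, 2 mistakes allowed  -> 16 patterns
print(fail_patterns(10, 4))  # 10 questions, 4 mistakes allowed -> 638 patterns
```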

If anyone knows about a more elegant solution I would love to hear about it so that I can implement my original idea more naturally.


I've tried scaling it back up to five questions, and it turns out it's still not working as intended: participants are only allowed to continue if they have answered ALL questions correctly, rather than being allowed to make some mistakes.

Below is a screenshot of the already rather overcomplicated branch logic:

What I'm trying to do with the ‘And’ and ‘Or’ statements is cover all possible ways of making three mistakes, and in those cases send participants to an ‘End survey’ node. However, as mentioned above, if you get even one question wrong you're already sent to the end of the survey after the block. Does Qualtrics and/or logic not correspond to ‘normal’ (e.g., Python) and/or logic?

Edit: interestingly, it seems to treat the ‘Or’ statements separately instead of as part of a string of ‘And’ statements: it kicks me out if I get the last question wrong, but it doesn't kick me out if I get the first question wrong. What is the point of having multiple logic blocks if they aren't actually treated as blocks?


Another day, another update:

I've dialed the number of questions back down to 3 and run logic-operator tests both in Qualtrics (by previewing the study) and in Python. In those tests I systematically go through all possible combinations of questions being answered (in)correctly.

In Qualtrics I always use two logic-set statements, which should kick people out if at least two questions are answered incorrectly:

  1. if Q1 is wrong AND Q2 is wrong OR Q3 is wrong
  2. if Q1 is wrong OR Q2 is wrong AND Q3 is wrong

Each of these can be read in two different ways, which are emphasised here using bracket notation (in Python the ambiguity is resolved by operator precedence, since AND binds more tightly than OR):

  1. if (Q1 is wrong AND Q2 is wrong) OR Q3 is wrong
  2. if Q1 is wrong AND (Q2 is wrong OR Q3 is wrong)
  3. if Q1 is wrong OR (Q2 is wrong AND Q3 is wrong)
  4. if (Q1 is wrong OR Q2 is wrong) AND Q3 is wrong

Because Python normally gives AND statements priority, options 1 and 3 are the default readings. This leads to one results table for Qualtrics (as its underlying/default priority assignment remains a mystery to me) and two tables for Python: one combining the two logic sets with AND and one combining them with OR, each evaluated under both possible internal priority assignments.
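For reference, here is a minimal sketch of the Python side of this test (variable names and table layout are illustrative; True means ‘answered incorrectly’):

```python
from itertools import product

# True = the question was answered incorrectly. Enumerate all 8 answer
# patterns and evaluate the four bracketed readings, the two Python-default
# readings combined with AND, and the desired rule "at least two wrong".
print("Q1\tQ2\tQ3\t(1&2)|3\t1&(2|3)\t1|(2&3)\t(1|2)&3\tAND-combo\tdesired")
for q1, q2, q3 in product([False, True], repeat=3):
    r1 = (q1 and q2) or q3   # reading 1: Python's default for statement 1
    r2 = q1 and (q2 or q3)   # reading 2
    r3 = q1 or (q2 and q3)   # reading 3: Python's default for statement 2
    r4 = (q1 or q2) and q3   # reading 4
    and_combo = r1 and r3    # the two default readings joined with AND
    desired = (q1 + q2 + q3) >= 2
    print(q1, q2, q3, r1, r2, r3, r4, and_combo, desired, sep="\t")
```

Running this shows that the AND-combination of the two default readings matches the desired ‘at least two wrong’ rule on every row (and so does the OR-combination of readings 2 and 4), which is the Python result reported in the tables below.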

The results for Qualtrics:

The results for Python when combining logic sets with AND:

The results for Python when combining logic sets with OR:

As the above tables show, both the Python default readings combined with AND, and the forced OR-priority readings combined with OR, give the desired result. Additionally, NONE of the Python outcomes correspond to the Qualtrics outcomes.

If anyone can elucidate how Qualtrics logic operators work, or confirm that I have found a bug, I would be very grateful.


After contacting Qualtrics Support, I was advised to use simpler branch logic in combination with embedded data, i.e., to simply create a new variable, e.g. ‘Mistakes’, which counts up by one for every question that was answered incorrectly.

This way you need several branch nodes, but each branch node contains only a single if-statement, namely ‘If Q1 (Q2…QN) is wrong, then “Mistakes” + 1’, with a final branch node that sends participants to the end of the survey if ‘Mistakes’ is greater than X after the comprehension-check block.
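In plain code terms, the recommended flow behaves like the following sketch (a Python model of the survey flow, not actual Qualtrics syntax; the threshold and answer lists are illustrative):

```python
MAX_MISTAKES = 2  # illustrative: "more than X" excludes, so at most 2 of 5 wrong

def should_exclude(answered_wrong: list[bool]) -> bool:
    mistakes = 0                    # embedded data field 'Mistakes', initialised to 0
    for wrong in answered_wrong:    # one simple branch node per question
        if wrong:                   # 'If Qn is wrong ...'
            mistakes += 1           # '... then Mistakes = Mistakes + 1'
    return mistakes > MAX_MISTAKES  # final branch node -> End of Survey if True

print(should_exclude([True, False, True, False, False]))  # 2 mistakes -> False (continue)
print(should_exclude([True, True, True, False, False]))   # 3 mistakes -> True (excluded)
```

Because each branch node holds only one condition, there is no ambiguity about how AND and OR combine, which neatly sidesteps the precedence problem above.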

