Industry Best Practices for Combining VOC Survey and Product Reviews | XM Community
We have a transactional VOC survey and are looking at adding a product review (Bazaarvoice) at the end of the survey - for those customers who would like to leave us a review. I have seen a few examples of this in the wild, but most of them have been for hotels. From a functional perspective, it appears we can do this via iframes within the Qualtrics survey - but I'm wondering if there are any best practices, specific to the survey flow, that could help influence us.



Typically I see the review at the end of the survey, but we are wondering how much it matters if it's at the end or the beginning? Any thoughts, ideas, or best practices would be greatly appreciated.



Thanks,

Allen
I wouldn't use an iframe.



You can if you have some specific parameters that will make that easier - I don't think there is anything WRONG with iframes. But IMO iframes can be a little tricky; you have to be careful about how they display, and they don't always render the same across browsers and devices. And the user could be navigating through several pages on the other website, leaving lots of room for failures and hiccups.



I like how Qualtrics themselves send me their customer satisfaction surveys after I contact support. They send a survey, and if you trigger high enough satisfaction scores, it asks for a review. So you can set up whatever rules you want in the survey flow to trigger the "product review" End of Survey element. Then on the thank-you page, have a link to the sites you want reviews on.



It's quite clever - you could specifically send your NPS "Promoters" the tools to do what they are already prone to do.



https://www.qualtrics.com/support/survey-platform/survey-module/survey-flow/standard-elements/end-of-survey-element/
Thanks @Kate - this is a great suggestion and very helpful. In addition to this flow element, are you aware of any industry best practices or whitepapers around this practice? We're trying to better understand the risk/reward of combining these two different pieces of feedback into a single 'survey'.



Thanks again for this response,

Allen
@aurness I would be doing a google search same as you 🙂



In my opinion as someone who runs a CX program for a company, though...

I think it's smart to be careful about how you use it. You want to only recommend the review site to people who are happy.



If someone says they are upset with your service or product, then you want to figure out how to make it better. You _don't_ want to tell them "I see you're unhappy- consider rating us on G2!!". You _do_ want to send them something that says "I see you're unhappy- do you want to talk to a manager so we can make it right?".



But this should be easy to set up with the survey flow. If they answer everything as "happy" or "satisfied", give them one thank-you page. If they are upset on even one question, offer to help them.
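If it helps to see the routing rule spelled out, here's a minimal sketch in Python (not Qualtrics code - the question ids, thresholds, and messages are placeholders): only fully satisfied respondents see the review ask, and anyone who flagged a problem gets the service-recovery message instead.

```python
# Sketch of the branching rule described above, with placeholder
# 1-5 satisfaction scores. In Qualtrics itself this lives in the
# survey flow as branch conditions, not code.

def end_of_survey_message(responses):
    """responses: dict of question id -> satisfaction score (1-5)."""
    if all(score >= 4 for score in responses.values()):
        # Everything "happy"/"satisfied": show the review ask.
        return "Thanks! Would you consider leaving us a review?"
    # Upset on even one question: offer recovery instead.
    return "Sorry we fell short. Would you like a manager to follow up?"
```

So `end_of_survey_message({"q1": 5, "q2": 2})` routes to the recovery message, because one question came back unhappy.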



That's really the only risk I can see. Low risk, very high potential for reward.
@Kate - thanks for the quick follow-up. While I love to search the Google, it does not always reap the benefits I desire.



The survey flow is a great solution, which I was exploring. I agree we don't want to send an unhappy customer to review us - instead we should focus on recovery/resolution - yet I get nervous when there could be the perception of 'cherry-picking' our positive reviews. It's like those app rating prompts: if you love us, rate us (on the app store), but if not, give us feedback (outside of the app store).



This is where the root of my question lies: if both satisfaction and reviews are combined, are we going to wrongly skew the data one way or the other? Trying to figure out the balance between data collection, transparency, and action.
IMO, the question you're asking is an ethics question. And I think it's a super smart question to ask. But I also think that's something you and your company have to decide - it's not something that currently exists in a case study or whitepaper.



You are definitely skewing the _reviews_ and gathering more positive reviews than you would otherwise. But... there is nothing illegal about the action. You are not skewing your _actual_ satisfaction data. So this behavior wouldn't bother me, personally. I would be ethically OK with doing this. It's my job to sell products/services from my company, and this is a great tactic to do that. You're not telling people what to say. They can be promoters and still give you a four-out-of-five review. All you're doing is giving them the vehicle to tell people about their experiences if they choose.



Maybe this would be a good blog post for you to write and post on Medium or something.



(PS: Feel free to keep the question open, I'd love to hear what others have to say. I think it's an interesting question, I'm not just responding for the "solution" points, haha).
@Kate I'm with you - I think a custom EOS element is ideal, but what if the question is at the beginning of the survey? e.g. My survey has NPS as the first (and most important) question, followed by some others that are less important. I want my promoters to get the custom "please review us on google" message after they've taken the entire survey.



I'm just not sure how to set a condition (i.e. if NPS response is 9 or 10, show custom EOS message) without terminating the survey prematurely. Any feedback & support links appreciated!
hi @jdiffee and @aurness



i just implemented this exact idea (suggested by @Kate) across multiple brands. to confirm, there is no ethical issue with this outside of your own business. review sites (referring to product reviews at the moment) ask reviewers if they were offered an incentive to leave a review. that is a widely accepted practice, and what is being discussed here is certainly a step down from that.



as for implementing this... i added my custom end of survey message into my library with a link out to the review site (embedded in a button to make it pretty).

on the survey i then added a 'Branch', which is where you add the logic the custom end of survey message abides by. for instance, the survey would only branch if the respondent was a 'promoter' and hit certain other question responses i desired. i also used some embedded data to determine which brand page on the review site i wanted these to go to.



inside that branch you can then add an 'end of survey' and customize this. i then chose to display the custom message from my library if the conditions of the branch are met.



this is all completed via the survey flow and took a matter of minutes to set up. obviously you can also test this via survey preview and confirm across multiple browsers and devices.



also, the results are brilliant!! i definitely recommend it. the take-up rate is quite low (3 - 5%), but this is where other solutions (like iFrames) could become an option for me down the track. but as a start, this is the way to go!
In response to @jdiffee:

Your survey flow isn't dependent on the last question. You can totally branch on the first question and show the custom message at the end of the survey as well!



I agree totally with @Ben_J101 and wanted to share a screenshot of what that might look like...



(screenshot: example survey flow)
Thanks for the clarification @kate and @Ben_J101 - I did exactly this and it worked perfectly. Brilliant!
UPDATE: The above worked great for sending all "Promoters" (users that scored NPS of 9 or 10) the same End Of Survey (EOS) msg. The problem is that it's for a regional company with branches that operate independently, and have different Yelp review pages for each branch. The old EOS message had over 20 links to different review pages, forcing the user to pick the right one. e.g. _"Thanks for the great score, please post a review on one of these 20 different review sites..."_



NEW CHALLENGE: Use embedded data (i.e. "Sales Office") to identify which branch the user purchased from, and give them an EOS msg that encourages them to post their review on the corresponding Yelp review location. e.g. _"Thanks for the great score of Sales Office X, please post a review. Here's a link to the Yelp page for Sales Office X."_



SOLUTION: Simply create a separate EOS msg for each branch and save it in the message library. Then use branch logic: "If promoter = true, AND Sales Office = X, then show Sales Office X EOS msg."
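For anyone following along, the per-branch logic boils down to a lookup plus the promoter condition. Here's a hedged Python sketch - the office names and Yelp URLs are placeholders, not the real pages, and in Qualtrics this is branch logic in the survey flow rather than code:

```python
# Placeholder mapping of branch -> Yelp review page.
YELP_PAGES = {
    "Sales Office A": "https://www.yelp.com/biz/example-office-a",
    "Sales Office B": "https://www.yelp.com/biz/example-office-b",
}

def eos_message(nps_score, sales_office):
    # Promoter = NPS of 9 or 10, matching the branch condition above.
    if nps_score >= 9 and sales_office in YELP_PAGES:
        return (f"Thanks for the great score of {sales_office}! "
                f"Please post a review: {YELP_PAGES[sales_office]}")
    # Non-promoters (and unknown offices) get the default thank-you.
    return "Thanks for your feedback!"
```

One EOS message per office, selected by the embedded "Sales Office" value, instead of one message with 20 links.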



(screenshot: branch logic with embedded data condition)



This seems simple enough - but I'm not sure how to test it. Perhaps this is a total newbie question, but how do you test or preview the branch logic of embedded data? In other words, how do I fool Qualtrics into thinking I'm from Sales Office X?
hi @jdiffee



you are on the exact right track here. i actually have my embedded data determine between a couple of different review sites, and i have built the particular links into their own individual EOS messages as you have mentioned. it works perfectly without a problem



as for testing... all you can do is kick off a mock distribution. you can create these manually in Qualtrics. if it is only one embedded data field you are going to use in the branch logic, then you don't need to worry about all the other ones you may have.



you can create a copy of your survey and then test the logic via the mock distribution on the test survey rather than the production version



as you can tell, i had the same issue :)



thanks

Ben
> This seems simple enough - but I'm not sure how to test it. Perhaps this is a total newbie question, but how do you test or preview the branch logic of embedded data? In other words, how do I fool Qualtrics into thinking I'm from Sales Office X?




@jdiffee

I would go a simpler route if you're only testing one. Use the preview link, and append the embedded data to the URL query. In your case:

```https://BRANDXXXXXXX.qualtrics.com/jfe/preview/SV_XXXXXXX?Q_SurveyVersionID=current&Q_CHL=preview&Sales_Office=Akron```



Documentation:

https://www.qualtrics.com/support/survey-platform/survey-module/survey-flow/standard-elements/embedded-data/#SettingValuesFromTheSurveyURL
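If you're testing several offices, you can build those preview links programmatically. A small Python sketch, reusing the placeholder brand/survey ids from the example link above:

```python
from urllib.parse import urlencode

# Placeholder ids, matching the example preview link above.
base = "https://BRANDXXXXXXX.qualtrics.com/jfe/preview/SV_XXXXXXX"
params = {
    "Q_SurveyVersionID": "current",
    "Q_CHL": "preview",
    "Sales_Office": "Akron",  # embedded data field under test
}
preview_url = f"{base}?{urlencode(params)}"
print(preview_url)
```

Swap the `Sales_Office` value per run to exercise each branch.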
@Ben_J101 and @Kate you both rock! I reached out to support and they recommended the query string approach as well - thanks for the example with syntax. I considered the mock distribution method, but am glad there's a simpler way. And I'm much more confident now after hearing that it worked perfectly for you @Ben_J101.



This thread topic seems like something many other Qualtrics VoC users would be interested in. Thanks again for the insight!
no worries @jdiffee



I have actually referred a few other people on the community to this thread already. I'm glad it's recorded!



good luck!

Ben
