Auto-closed and Proactive Support Cases | XM Community

Auto-closed and Proactive Support Cases



Hello All,

I manage our Support Case Satisfaction Surveys that are sent out after a case is closed. We are running into a challenging question: should we still send a survey for a proactive case or an auto-closed case?

  • Proactive cases are opened on behalf of a customer based on a trigger within their own product usage. This type of case warns them of risk within their system. 
  • Auto-closed cases are closed when a customer has not responded over a certain period of time. 

These types of cases are hurting our response rates and on occasion our CSAT scores. How do you handle this type of audience within your organization? Why did your organization make the decision to send or not to these audiences?

5 replies

Nam Nguyen
QPN Level 8 ●●●●●●●●
  • 1090 replies
  • February 7, 2025

Hi ​@Emilyp 
About hurting the CSAT scores: what exactly is the CSAT measuring? If it's about the support experience, there are two things I think you should consider:

  1. A meaningful interaction between the customer and the support team/resources should be the trigger for the survey. For example: there's a support call, or the customer clicks the FAQ link you provided when you warned them.
  2. Frame the survey so the customer knows it is only about the support process; don't let the issue itself affect your CSAT score. Respondents sometimes won't read the question carefully, so there's also a clever way one of my clients solved this: she put two questions on the same page, the first about the severity of the issue, the second the support CSAT. That way customers have a place to pour out their emotion and clear the issue from their mind before moving on to the support CSAT.

Hope this helps. Let me know if you have any thoughts on this.


vgayraud
QPN Level 5 ●●●●●
  • 357 replies
  • February 7, 2025

Hi ​@Emilyp 

While the impact of those kinds of cases on the response rate is probably unavoidable, I don't think you should necessarily shy away from cases that might lower a CSAT score.

There might be real and meaningful reasons behind those lower scores, from which you might gain useful and actionable insights.

You can also always configure your reporting tools so that the results from those cases can be separated out (or not), as needed.


Best,


Emilyp
  • Author
  • 1 reply
  • February 7, 2025

@Nam Nguyen and ​@vgayraud Great insight! Agreed that we want to see the impact of our support experience, and we can separate that in reporting or with additional clarity on the case type.


SteveBelgraver
Level 5 ●●●●●
  • 101 replies
  • February 10, 2025

Interesting question ​@Emilyp !

As my company supports international enterprises, my view is a B2B perspective, which can be quite different from a B2C interaction. On a semantic note, this usually means we talk about the 'client' as opposed to the 'customer'.

That said, the survey responses from cases raised by the client (A) are probably the most valuable, followed by cases we raise on their behalf (B), with the ones sent based on cases raised by our event monitoring tool (C) bringing up the rear.

In our case, surveys are sent out for all three scenarios, though the event-based ones (C) are only those that subsequently get converted into an Incident following analysis by an engineer. Although I'm not aware of any analysis having been done on this, I'd be tempted to say the survey results from cases raised by the client would be the most valuable.

Finally, on response rates: our aim is a 20% target for the transactional surveys described above (in addition, we do annual relationship surveys, surveys following project closure, and several other survey types). We have Medallia configured to send an internal reminder to the Service Delivery Manager for survey requests that haven't been answered after 7 days. It remains a tricky balance between ensuring at least a 1-in-5 response rate and survey fatigue, where more reminders only serve to turn clients off from responding.


  • Level 6 ●●●●●●
  • 240 replies
  • February 17, 2025

@Emilyp I would agree with the earlier comments on the importance of continuing these measurements, particularly if they are showing a worse CSAT; that would definitely be a sign for me to keep monitoring the differences. You always have the option to split these scenarios in your data/reporting if there are other impacts this is having for you internally (i.e. targets), which shouldn't stop you gaining insights from each of these experiences.


From my past experience, the auto-closed scenario can be a very interesting one to measure, even via a separate survey to understand why the case was not responded to. You may not need to survey every time this happens, but it can represent a key failed journey, one with multiple possible paths to improvement, and it is commonly a blind spot (which is where surveys come in handy for understanding it).

