Hi @monica.irizarry,
Are there specific AI features you're most interested in learning about? While other members jump in to share their experiences, I can reach out internally to connect you with financial institutions that have implemented AI features and would be willing to discuss how it has gone for them.
Hi Alonso, that would be great! I am interested in learning about their experiences with the AI features in general, not one feature in particular.
Hi Monica! Although I am a “Qualtrician” now, I was a practitioner at Fiserv for many years before making the jump earlier this year. While there, we tested most of the AI functionality last year and had it fully rolled out in production for almost a year by the time I left.
We started with Response Clarity. What we found was that the AI did a remarkably accurate job of identifying open-ended responses that were lacking in actionability and then provided a relatively non-invasive prompt for more information. Of those presented with a follow-up prompt, 40% went on to leave us more detailed information. We initially worried that this might cause drop-offs and damage our completion rate, but we did not find this to be the case at all. The only note we had was that we would have liked to be able to customize the look and feel of the follow-up prompts so that they more closely echoed our brand standards.
After Response Clarity, we were given access to Insights Explorer. This gave us the ability to feed all sorts of unstructured data into the platform and have the AI evaluate it and provide us with concise summaries. While this did not initially give us any huge “ah-ha” moments by telling us anything we hadn’t already known, it DID have the very surprising effect of being more readily accepted by our stakeholders as unbiased information. When we shared the Insights Explorer-generated readouts, product owners and teams who had been less than receptive to the sometimes negative feedback were much more inclined to believe it was accurate than when it came “from a person” who may or may not have an issue with them. Further refinement of the datasets we fed into the system did allow us to uncover some great, actionable insights that we might not have gotten to previously. I’d be more than happy to go into greater detail about that separately some other time, if you would like.
Next up was Conversational Feedback. For us, this was the single biggest game changer. At Fiserv, we had an attrition survey that we would send out when a client “fired us”. We used this as a “safe zone” to test out the functionality, with the AI creating follow-up questions, as it was very low risk. Almost immediately, the follow-up prompts surfaced insights into our processes and experience that we had never been able to capture, simply because we had never had this kind of bespoke, tailored follow-up in a survey before. The early successes there encouraged us to go live with the functionality across all of our surveys rather quickly.
In addition to giving us more and better insights, we also learned that we could substantially shorten many of our surveys - by up to 60% - based on the strength of the detail we were getting from these AI-generated questions. Put simply, in many cases there was no need to ask further questions, as the feedback we were getting was already rich and actionable.
I hope this helps, and please feel free to reach out for more details if you would like them!
This is very helpful!! Question - what is the difference between the Conversational Feedback feature and the Response Clarity feature?
Response Clarity presents one of a small collection of pre-determined prompts, in the form of a validation message, on the same screen as the question being answered. Conversational Feedback presents additional, AI-generated questions that appear as “next questions” on the following pages and are created dynamically, in the moment.