I have been exploring that task a bit and I think it has a lot of potential when it comes to building a database of categorized comments. I have been using it in conjunction with the Microsoft Excel task to save the ResponseID, the Open Ended response, and the ChatGPT categorization output to an Excel document.
I think that the categorization is probably most effective for analysis when specific topics are included in the query. Separately, I have given ChatGPT a couple hundred previously collected comments all at once and asked it to identify all the topics within. I then used that response to develop a topic library and asked the ChatGPT task in Qualtrics to look for those specific topics when it runs for newly created responses, separating them with a comma in the output.
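To give a rough illustration (the topic names below are placeholders for illustration, not my actual library), the prompt for that ChatGPT task ends up along these lines:

"Categorize the following comment using only these Topics: Billing, Network Coverage, Device Issues, Customer Service, Other. If more than one Topic applies, separate them with a comma. Comment: [piped open-ended response]"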
@Tom_1842 Thanks for the response! Having AI categorize open ended comments sounds like a really interesting use case!
My follow-up question, if you don't mind: do you think ChatGPT categorizes comments more accurately than the Text IQ feature already in Qualtrics? Or do you think they're equally good at tagging open ended responses correctly?
Also, you mentioned that you had given ChatGPT hundreds of comments to identify topics within. I'm not familiar with how that process would work. Were you able to upload your verbatims in an Excel file for ChatGPT to ingest? And how long did it take ChatGPT to analyze those couple hundred comments?
TextIQ uses keywords and logical strings to categorize the comments. "X" near "Y" but not near "Z". This will categorize the comments that meet the definition of the topic, but the definition can be time consuming to build and might not always get it right because of the different contexts in which a keyword might be used, even if the topic definition is robust. For example, it can be tricky for a telecommunication company to use keywords to distinguish between a device charger, a phone not holding a charge, and being charged for a phone bill.
ChatGPT works differently and would probably be better at distinguishing between the different contexts in which "charge" is used, and without as much set-up. However, you get more with TextIQ than just categorization, since it is fully integrated in Qualtrics rather than requiring you to build ChatGPT output into an Excel document. Having the TextIQ-assigned topics available within the Qualtrics dataset and reporting tools is really helpful, TextIQ in the Survey Flow is great, its sentiment model is really good, and it supports topic hierarchies.
I think ChatGPT is helpful to develop Topic libraries, so you could use it to get started and then build definitions for the identified topics within TextIQ. I created an account at https://chat.openai.com/auth/login and then asked ChatGPT the following, though it could use some refining:
"Summarize the sentiment and experiences expressed in the following comments into Topics. In your reply, include a count of how often each Topic was touched on and sort the Topics with most mentions at the top and fewest mentions at the bottom. Also include a brief description of each Topic.
[Comment1]
[Comment2]
[Comment3]
etc."
ChatGPT returned a summary within a couple of seconds. The "counts" weren't totally right, but the topics it identified were solid.
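As a side note on the process: if you would rather not paste a couple hundred comments into the chat window by hand, the same step can be scripted against the OpenAI API. The sketch below is only an illustration, not what I actually ran; it assumes the comments sit in a "Comment" column of a local comments.xlsx file, that the pandas and openai packages are installed, and that an API key is available in the OPENAI_API_KEY environment variable.

# Rough sketch: send previously collected comments to the OpenAI API and ask
# for a topic summary, instead of pasting them into chat.openai.com by hand.
# The file name, column name, and model name are assumptions for illustration.
import pandas as pd
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the open-ended responses from Excel.
comments = pd.read_excel("comments.xlsx")["Comment"].dropna().astype(str).tolist()

prompt = (
    "Summarize the sentiment and experiences expressed in the following comments "
    "into Topics. Include a count of how often each Topic was touched on, sort the "
    "Topics from most to fewest mentions, and include a brief description of each Topic.\n\n"
    + "\n".join(f"[{c}]" for c in comments)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)

The printed summary can then be used to seed a topic library in the same way as the web version.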
For example, it can be tricky for a telecommunication company to use keywords to distinguish between a device charger, a phone not holding a charge, and being charged for a phone bill.
That's exactly what I'm experiencing. A word in Vietnamese, which is my language, can have different meanings in different contexts.
Another thing to ask: what do you think about ChatGPT's ability to classify sentiment and flag the need for follow-up when a comment is sarcastic? Text iQ did not work well with sarcasm in my experience.
Would the ChatGPT task need more refining before it could act as a condition, so that its output could feed into more kinds of tasks?
It looks like ChatGPT does support Vietnamese so I think it should do okay in distinguishing between different contexts. I have found the "Actionability" field produced by TextIQ to be pretty useful for identifying comments that could use follow-up, but I fortunately don't see sarcasm too often in the survey responses I deal with.
You can include multiple ChatGPT tasks in a Workflow before the Excel task. Then, when configuring the Excel task, output from the ChatGPT tasks can be configured as separate columns in the Excel document. So you could have the following columns in the Excel document if you had 3 ChatGPT tasks as part of the Workflow: ResponseId, Open Ended Comment, ChatGPT: Topic Categorization, ChatGPT: Comment Sentiment, ChatGPT: Sarcasm Indicator.
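For example (values made up purely for illustration), a finished row in that layout might look like:

ResponseId: R_1a2b3c4d5e6f7g8
Open Ended Comment: "Coverage is great, but I was charged twice this month."
ChatGPT: Topic Categorization: Billing, Network Coverage
ChatGPT: Comment Sentiment: Mixed
ChatGPT: Sarcasm Indicator: No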
Thanks! These are all great ideas. I’ll have to test out its topic categorization soon.