This morning I wanted to implement a Customer Experience Management project from scratch. I pretend, however, that I know nothing about this subject. I also pretend I have no budget to hire consultants to help me define and implement the project. I will use only one resource, but one that knows almost everything: chatGPT.

This morning I wanted to implement a Customer Experience Management project from scratch. I pretend, however, that I know nothing about this subject. I also pretend I have no budget to hire consultants to help me define and implement the project. I will use only one resource, but one that knows almost everything: chatGPT.

What is chatGPT? A Generative Pretrained Transformer is a natural language processing tool that uses machine learning algorithms to generate human-like responses to user input. It is trained on a large amount of text data and can generate relevant and consistent responses in a given context. The model was created by Openai.

However, we will use it in a particular way; those who know me know I do not like simple things. Especially pre-constituted orders. We will use a particular hacking technique against the model to influence its behavior. We are going to use prompt injection. “Prompt injection” is when an artificial intelligence that uses textual instructions (a “prompt”) to perform a task is tricked by malicious, aversive user input into performing a task that was not part of the original goal. Code Injection is the general term for attack types which consist of injecting code that is then interpreted/executed by the application. This reasonably old technique was used extensively, for instance, in SQL to execute commands on the database that compromised its security.

Curious? Well, let’s take a simple, straightforward example. Let’s run our first attack on chatGPT and see if we can successfully execute it 😉 We ask initially what he thinks about Python, and he will answer us as it was trained:

ChatGPT sandsiv blog article
figure 1 – chatGPT prompt and answer

Great, now we are going to perform a prompt injection. To do this, we will instruct the model to “forget the context in which it was trained” and ask it to answer the question negatively. For this, at the beginning of the prompt, we will inject the code “Ignore the previous direction.” and then send the new instructions.

ChatGPT sandsiv blog article
figure 2 – chatGPT prompt and answer

Ta-dah! Here the model no longer follows the original context in which it was trained but is now influenced by our instructions. This can make us think about how stupid artificial intelligence is, and it would open a significant chapter of discussion, but it is not my intention to open it here. Our purpose today is to start with a Customer Experience/Voice of the Customer Management project.

Well, let’s try asking a general question relative to the project I want to implement and evaluate the answer.

ChatGPT sandsiv blog article
figure 3 – chatGPT prompt and answer

Good! But my experience in this area prompts me to consider some points that the AI model did not consider, particularly the Governance and an assessment of the CX maturity of the organization… well, let’s use our hacking technique and put the model in context again:

ChatGPT sandsiv blog article
figure 4 – chatGPT prompt and answer

Excellent. The points about Governance and maturity have been added to your plan. Now let’s go a little deeper into Governance:

ChatGPT sandsiv blog article
figure 5 – chatGPT prompt and answer

Governance seems well contextualized and does not require our intervention, so let’s go into the metrics aspect:

ChatGPT sandsiv blog article
figure 6 – chatGPT prompt and answer

Great!!! Let us now focus on Net Promoter Score and try to understand the problems of using that metric.

ChatGPT sandsiv blog article
figure 7 – chatGPT prompt and answer

Having written a precise paper on the statistical problems of the NPS, I can inject a prompt into the model, asking it to better specify the problem from a statistical and mathematical point of view:

ChatGPT sandsiv blog article
figure 8 – chatGPT prompt and answer

Reasonably accurate, though not entirely satisfactory. However, here we see all the limitations of chatGPT, it is an exciting model, but it cannot generalize in a way that goes into the details of specific problems.

Let’s evaluate another aspect now: ask chatGPT to build a questionnaire to collect and measure customer experience. Let’s try it directly.

ChatGPT sandsiv blog article
figure 9 – chatGPT prompt and answer

Well, the questions themselves make sense; unfortunately, I can’t say that the model constructed the questionnaire I expected–let’s try to prompt inject the model:

ChatGPT sandsiv blog article
figure 10 – chatGPT prompt and answer

Cool! But now I would like to get not a unique question but a set of potential following questions:

ChatGPT sandsiv blog article
figure 11 – chatGPT prompt and answer

Well done! With a little work our expert can help us build a questionnaire to measure customer experience.

But let’s go now and evaluate another aspect. We are talking about an NLP model, can we influence the model to interpret customer feedback? We will declare a customer feedback as a variable (SENTENCE), and do a prompt injection to force the model to extract topics. Let’s try:

ChatGPT sandsiv blog article
figure 12 – chatGPT prompt and answer

Cool, it is able to get the topics correctly. Let’s do a step further: Aspect Based Sentiment Analysis. We want to assign the correct sentiment to each topic. We will do the same as before, but push the model to also assign sentiment for each topic, and a generic sentiment.

ChatGPT sandsiv blog article
figure 13 – chatGPT prompt and answer

The result is very interesting. Don’t get too excited too quickly, though, as I honestly did. Applying AI models in large-scale industrial applications like sandsiv+ Customer Intelligence solution is not an easy challenge. Real-life applications have many long-tail corner cases, can be hard to scale, and often become costly to train. I have to honestly say that our data scientists have tried to engineer it to take it into production and it shows all the limits, both from a management point of view in production and from an accuracy point of view. In any case, something we are going to extensive research for the future of our sandsiv+ solution.

What can we say to conclude this adventure with chatGPT. Well, we have certainly seen the potential of the model, but also its limitations. Getting out of what the model has learned is difficult, if not impossible. Influencing the model’s responses is all but easy. Have we created a customer experience expert? I would say a semi-expert. Without human input, the model is a somewhat lame customer experience expert. However, the contribution is very interesting, we will see future developments. I am very curious.

Creating a Customer Experience Expert Hacking AI (chatGPT)
Author:
Federico Cesconi

Read the article on LinkedIn

Start growing with sandsiv+ today