The Next Great Evolution of CX: Bridging ChatGPT and Customer Support

Evan Tann | Co-Founder & CTO of Thankful


OpenAI’s ChatGPT, built on GPT-3.5, has officially proven that chat is now a viable interface into any body of knowledge, and the world has varying opinions on its impact. From an infinite portal of positive possibilities in and outside the business world to doomsday labor theories, no one was left untouched by this technological development.

 

With a projected $1 billion in revenue by 2024 (and a rumored $10B investment from a little brand called Microsoft), OpenAI's ChatGPT is changing everything from search to customer service. It has meaningfully moved the needle in natural language processing and, unintentionally, in the universe of customer support.

 

Although ChatGPT wasn't intended to be a catalyst for customer support, its human-like responses, limitless generative chat interface, and architecture mean it will absolutely be a central part of customer communications going forward. Is it ready to be fully deployed into your customer support ecosystem now? Not quite, but Thankful is actively testing new ways to make this a reality.

 

But what’s the big deal about ChatGPT and how does this generative text platform move from a demo environment to a dynamic tool for customer support?

 

Why ChatGPT is A Breakthrough: Large Language Models Explained

ChatGPT’s cutting-edge language-processing AI model has created a template for the future of customer support and automation as we know it. OpenAI advanced the state of the art in AI by building the largest-ever neural network, trained on just about all publicly available human knowledge, in what's called a Large Language Model (LLM).

 

ChatGPT's LLM has roughly 175 billion parameters that encode this knowledge. That number can be thought of as its capacity to learn, remember, and create new connections between disparate words, concepts, and things. Its replies appear as if they were written by a human, and no matter how outlandish the prompt you give it, such as "Write a Shakespearean sonnet about a basset hound astronaut landing on the moon," it effortlessly replies appropriately (see below).

(Image: ChatGPT's Shakespearean sonnet, via Thankful AI)

Even in this silly example, for ChatGPT to deliver a good result, it must understand what makes a sonnet "Shakespearean," what a "basset hound" is, and improvise how a dog might land on the moon – would it be in a ship with its own dog-sized space suit? ChatGPT fills in these gaps with a very human-like imagination produced entirely through statistical relationships between words learned on grand scales.

 

As remarkable as ChatGPT is, its parameter count will be dwarfed by the next version, already under development at OpenAI. With endless possibilities over the next few years, CX will undoubtedly advance for the better.

 

From Breakthrough to Bright Future: Where ChatGPT Is Headed

The future of customer support is promising with the advent of chatbots and other AI-powered tools such as ChatGPT. These tools are revolutionizing the way brands interact with their customers, providing faster and more efficient support while also freeing up human agents to handle more complex tasks.

 

One of the main benefits of ChatGPT is its ability to handle a large volume of customer inquiries while addressing every issue in a unique, human-like way. This means that customers don't have to wait in long queues or on hold to get the assistance they need. Instead, they can simply send a message and receive a prompt response. An AI trained on your support tickets will ultimately be able to match or exceed the CSAT of your best agents.

The future of customer support automation

The future of customer support automation will be a comprehensive AI platform that treats everyone as an individual, handles every situation with empathy, and perfectly enforces business policies and processes in line with a brand’s voice, or even makes exceptions to a policy in brand-approved ways depending on the circumstance. By extending LLMs with external systems such as a policy engine and an order management system, we also believe that its decision-making will be explainable and that it can be retrained instantly when policies or processes need to change.

 

A near-future advancement we can anticipate is that the models will start to integrate with external systems. This will enable AI to go beyond FAQs and start making changes to customer orders and accounts, getting closer to delivering true service. Incorporating integrations will undoubtedly increase the percentage of cases that AI can resolve and meaningfully do some of the heavy lifting for customer support teams. Teaching the AI to understand your APIs and how to integrate into backend systems will ultimately remove much of the burden that your engineers would otherwise need to take on.

 

Another future scenario is that it will learn to recognize when it doesn't know the answer to something, whether through a low confidence score or reinforcement training performed by AI managers. This is key to proper customer support escalation: the ability to pass the problem to a human who can take better action.
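The escalation pattern described above can be sketched in a few lines. This is a hypothetical illustration, not any real product's API: the threshold value and the send/escalate handlers are assumptions, and a real deployment would tune the cutoff against observed outcomes.

```python
# Hypothetical sketch: routing a draft AI reply based on a confidence
# score. The threshold and handler strings are invented for illustration.

ESCALATION_THRESHOLD = 0.75  # assumed cutoff, tuned per deployment

def route_reply(draft_reply: str, confidence: float) -> str:
    """Send the AI's draft if it is confident; otherwise hand off."""
    if confidence >= ESCALATION_THRESHOLD:
        return f"SEND: {draft_reply}"
    # Low confidence: escalate to a human agent with context attached.
    return "ESCALATE: human agent review required"

print(route_reply("Your order has shipped.", 0.92))
print(route_reply("I'm not sure about that one.", 0.40))
```

The key design point is that the model never decides unilaterally: a layer outside the LLM owns the send-versus-escalate decision.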

 

As ChatGPT becomes more sophisticated over time, it will no doubt change the way customer support and operations work with their existing CX technology partners. 

 

The Move From a Demo Environment to a Dynamic Customer Support Tool

Since ChatGPT wasn't originally built for customer support, there needs to be a bridge that enables it to integrate with and advance customer support interactions.

 

In customer support circles, ChatGPT is seen as an efficient self-service chatbot that can solve surface-level issues. However, this falls short of providing true service to customers: true service means solving their problems and answering their complex questions. That happens through AI software like Thankful that integrates with business intelligence systems, a customer’s purchase history, and a brand’s policies and processes to provide service on a more holistic level.

 

There are six core capabilities that need to be in place for ChatGPT to function within a customer support ecosystem:

 

#1 A System to Interact with the Generative Text Layer

The prompts you give ChatGPT are incredibly important to get right, as the output varies wildly depending on the prompt. In response, a new field called "Prompt Engineering" is emerging, in which prompts are crafted to achieve a goal in the best possible form for ChatGPT to understand and act on.
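A minimal sketch of what a prompt-engineering layer does: it wraps the raw customer message in a structured template before anything reaches the generative model. The template wording, field names, and example brand are all invented for illustration.

```python
# Illustrative prompt template for a customer-support generative layer.
# Every field here (brand, tone, policy) is an assumption, not a real API.

PROMPT_TEMPLATE = """You are a support agent for {brand}.
Tone: {tone}.
Policy summary: {policy}.

Customer message:
{message}

Reply helpfully, staying within the policy above."""

def build_prompt(brand: str, tone: str, policy: str, message: str) -> str:
    """Assemble a structured prompt from brand context plus the raw message."""
    return PROMPT_TEMPLATE.format(
        brand=brand, tone=tone, policy=policy, message=message
    )

prompt = build_prompt(
    brand="Acme Outfitters",
    tone="warm and concise",
    policy="Refunds allowed within 30 days of delivery",
    message="My jacket arrived damaged. Can I get a refund?",
)
print(prompt)
```

The point is that the customer never talks to the bare model; a system in between decides how the question is framed.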

 

#2 Integration into Business Systems

Even when ChatGPT takes the right action, for it to be effective in customer service, it must use external systems. For instance, it must integrate with a company's order management system to cancel an order. This mechanism of integrating ChatGPT into other systems is another area of active research.
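One plausible shape for that integration, sketched under assumptions: the model emits a structured action, and a thin layer validates and executes it against the backend. The `OrderSystem` class and the action names here are invented stand-ins, not a real order management API.

```python
# Hypothetical bridge between a generative model and an order system.
# The AI proposes structured actions; this layer validates and runs them.

class OrderSystem:
    """Stand-in for a real order management API (invented for this sketch)."""
    def __init__(self):
        self.orders = {"A100": "active"}

    def cancel(self, order_id: str) -> bool:
        if self.orders.get(order_id) == "active":
            self.orders[order_id] = "cancelled"
            return True
        return False

def execute_action(action: dict, orders: OrderSystem) -> str:
    # The model never touches backend systems directly; only actions
    # this layer recognizes are ever executed.
    if action.get("type") == "cancel_order":
        ok = orders.cancel(action["order_id"])
        return "Order cancelled." if ok else "Order could not be cancelled."
    return "Unsupported action; escalating to a human agent."

orders = OrderSystem()
print(execute_action({"type": "cancel_order", "order_id": "A100"}, orders))
```

Keeping the execution layer separate from the model is what makes the integration auditable.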

 

#3 Ability to Apply Brand Policies and Processes

ChatGPT needs an understanding of company policies and the ability to adjust those policies quickly when needed, through an external policy engine that can be modified at any time.
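The reason the policy engine sits outside the model is that policies stored as plain data can change instantly, with no retraining. A minimal sketch, with invented policy names and values:

```python
# Minimal external policy engine: policies are plain data outside the
# model, so a change takes effect immediately. Names/values are examples.

policies = {
    "refund_window_days": 30,
    "free_return_shipping": True,
}

def refund_allowed(days_since_delivery: int) -> bool:
    """Check a refund request against the current policy data."""
    return days_since_delivery <= policies["refund_window_days"]

print(refund_allowed(20))                 # within the 30-day window
policies["refund_window_days"] = 14       # policy change, no retraining
print(refund_allowed(20))                 # now outside the tightened window
```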

 

#4 Measure Performance

For any highly complex system, we need to be able to measure how it's performing over time. There is no industry-standard test for generative models to measure their performance, and designing a mechanism and process to measure improvements or regressions is critical before deploying this into production. This ensures that the AI model works as intended across the entire range of tasks it's expected to perform. It's especially important as the prompts and underlying LLM are updated and improved over time.
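Since no standard benchmark exists, one practical approach is a fixed regression suite: a set of prompts with simple pass/fail checks, rerun whenever the prompts or the underlying LLM change. The `model_reply` stub and the checks below are assumptions for illustration.

```python
# Sketch of a regression harness for a generative support model.
# model_reply is a stub standing in for a call to the real model.

def model_reply(prompt: str) -> str:
    return "You can return your order within 30 days for a full refund."

# Each case pairs a prompt with a check the reply must pass.
TEST_SUITE = [
    ("What is your return policy?", lambda r: "30 days" in r),
    ("Can I get a refund?", lambda r: "refund" in r.lower()),
]

def run_suite() -> float:
    """Rerun every case and report the fraction that passed."""
    passed = sum(1 for prompt, check in TEST_SUITE if check(model_reply(prompt)))
    return passed / len(TEST_SUITE)

print(f"pass rate: {run_suite():.0%}")  # prints "pass rate: 100%"
```

Tracking this pass rate over time is what turns "the model seems better" into a measurable claim.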

 

#5 Enforce Logical Consistency

The answers it provides a customer must be logically consistent not only within a message or conversation, but also across every message it sends on behalf of a company. It's easy to get ChatGPT to contradict itself today, but those logical inconsistencies will need to be identified before a reply goes to a user, with agent corrections automatically feeding back into the model.

 

#6 Guaranteed Brand Safety

As with enforcing logical consistency, we'll need to ensure that every answer it provides a customer uses the appropriate brand voice. Especially once a user knows they're interacting with an AI, we need to ensure that the model cannot be "tricked" into replying inappropriately – such as offering an unwarranted refund, or repeating hateful language. This is studied in a field called Adversarial Machine Learning, in which one party tries to trick the AI into doing something its designer never intended.
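A simple last-line defense for this is an output guardrail that scans every candidate reply before it is sent and fails closed. The blocked phrases and fallback message below are invented examples; real guardrails are far more sophisticated than string matching.

```python
# Illustrative output guardrail: block a reply containing phrases the
# brand never wants the AI to emit. Phrase list is an invented example.

BLOCKED_PHRASES = ["full refund guaranteed", "ignore previous instructions"]

def safe_to_send(reply: str) -> bool:
    lowered = reply.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

def finalize(reply: str) -> str:
    if safe_to_send(reply):
        return reply
    # Fail closed: hand off to a human rather than risk an off-brand
    # or adversarially induced response.
    return "Let me connect you with a teammate who can help."

print(finalize("Happy to help with your order!"))
print(finalize("Sure, full refund guaranteed, no questions asked."))
```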

 

At Thankful, we couldn't be more excited and invested in the possibilities of ChatGPT in customer support. While the conversation revolution charges on, some businesses will want to remain rigid in their responses to customers, while others will want them more freeform. Thankful’s plan is to give businesses control over how generative they want their conversations to be.

