Artificial Intelligence is rapidly expanding into new markets, and the customer experience sector is no exception. While AI for customer experience is still nascent, it shows tremendous promise, both as a tool to measure experience and as a lever to improve it.

There’s no question that AI is a powerful tool. There’s also no question that “AI” is often slapped onto products and software without most people knowing what it means or what value it adds.

“…for most [machine learning] projects, the buzzword ‘AI’ goes too far. It overly inflates expectations and distracts from the precise way ML will improve business operations,” writes Eric Siegel in the Harvard Business Review.

So, is AI for customer experience just hype? Not necessarily, but you need to know how to use it so you can decide for yourself whether it’s the right fit for your company.

Will AI capture the nuances of the customer experience? And can it account for diverse customer expectations, subconscious reactions, and a range of sensations and feelings? It depends.

And if you’re in a niche market, is the expense of AI worth it? Can you justify the cost to your CFO? Again, it depends.

Jump to: The 5 Scenarios where AI IS and is NOT the Right Fit

The Quick and Dirty Truth about AI for Customer Experience

First, if you are a Customer Experience Director with only 30 seconds to spare, here’s the short guide to AI for customer experience. There’s detail about each of these topics below, so keep reading if you want to learn more, or get in touch.

When customers give feedback through surveys and in day-to-day conversations with your company, that’s unstructured data. Unstructured data is invaluable for understanding customers’ feelings and thoughts, but only if your analysis respects the nuances.

This is so important that it bears repeating: analyzing unstructured data reaps incredible value, with the caveat that you honor the nuances and subtleties inherent in comments and conversations.

The first step in analyzing any large unstructured dataset is called “tagging.”

Whether you’re using AI or a team of researchers to analyze your data, the tagging process is the same – and it requires human input. We’ll explore the details of unstructured data and the tagging process below, but the fundamentals are unchanged. Tagging is tagging.

Does every comment need to be tagged? No. Well-designed statistical sampling gives you an accurate representation of the whole population. That’s the purpose of sampling: you don’t need to analyze every comment or conversation, and it works exceptionally well.

So where does AI for customer experience come in?

AI provides leverage for categorizing comments in large, continuously updated datasets.

Used properly, AI can extract meaning from unstructured text. Natural language processing in particular enables sentiment analysis, entity recognition, text classification, and topic modeling. But keep in mind, all AI tools require a team of researchers to train and check the algorithm.
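
To make this concrete, here’s a minimal sketch of rule-based sentiment scoring using the open-source NLTK library’s VADER analyzer. It’s an illustration only, not the method any particular CX platform uses, and the sample comments are hypothetical:

```python
# Minimal sentiment-scoring sketch using NLTK's VADER analyzer.
# Illustrative only: production CX platforms use models trained and checked by researchers.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the sentiment lexicon

comments = [
    "The product was great, but the wait time took forever!",
    "The staffer on the phone was so rude that I'm never shopping with you again.",
]

analyzer = SentimentIntensityAnalyzer()
for comment in comments:
    scores = analyzer.polarity_scores(comment)  # dict with neg/neu/pos/compound scores
    print(f"{scores['compound']:+.2f}  {comment}")
```

Notice that the first comment mixes praise with a complaint; out-of-the-box scorers routinely flatten that kind of nuance, which is exactly why trained reviewers are needed.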

But for many companies, adding AI to unstructured data analysis simply isn’t required.

Especially for many B2B companies, AI isn’t necessary to achieve rigorous Text Analysis of comments and conversations. But the hype and confusion over AI make many Customer Experience Directors think it is.

AI for Customer Experience: How It’s Used

AI is being applied in different ways to improve the customer experience, with varying levels of success. One of the most widespread uses of AI to improve the customer experience is the chatbot, which attempts to mimic a human conversation.

Early chatbots were text-based and provided a limited set of pre-composed answers. Today, AI-driven chatbots use natural language understanding to help users solve problems. Unfortunately, customers often report that chatbots are frustrating, and the technology likely needs to evolve again before we hear from customers that their experiences have improved.

Using AI to measure experiences is also a growing trend, and some companies are using AI for customer experience to write survey questions. But given the uniqueness of each company and its objectives, this can lead to underwhelming data.

Other companies are using AI for customer experience to gauge call center conversations and customers’ answers to open-ended survey questions. Here, the goals are to:

  • Assign sentiment scores for each conversation or comment
  • Identify emergent themes within a corpus of content (see the sketch after this list)
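
For the theme-identification goal, here’s a minimal sketch of one common approach, topic modeling with scikit-learn’s LDA over a bag-of-words representation. The comments and the number of topics are hypothetical, and a real corpus would be orders of magnitude larger:

```python
# Minimal topic-modeling sketch: surface candidate themes with scikit-learn's LDA.
# Hypothetical data; a real corpus would contain thousands of comments, not five.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "Shipping was slow and the package arrived damaged.",
    "Support resolved my billing issue quickly and politely.",
    "The checkout page kept crashing on my phone.",
    "Billing charged me twice and support never called back.",
    "Delivery took three weeks and the box was crushed.",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term_matrix = vectorizer.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term_matrix)

# Print the top words for each discovered topic as candidate "themes".
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Theme {topic_idx + 1}: {', '.join(top_terms)}")
```

A researcher still has to review and name the candidate themes; the model only proposes groupings.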

The organizations that tend to benefit from AI platforms have ongoing conversations and surveys that yield massive, fairly consistent datasets.

In this situation, research teams (sometimes called training teams) work with the AI to define, guide, test, and refine its algorithms. In fact, there are typically two research teams: one working on the client side, the other working directly for the AI company. Often, the research teams comprise a total of 10 to 12 employees or more.

Although AI improves as it processes more data, it still needs human training to become accurate enough to generate insights you can genuinely trust and build your organization around.

Sentiment (how customers feel) is the easiest part of verbatim tagging, yet we’ve seen insufficiently trained AI algorithms tag sentiment incorrectly more than 70 percent of the time. It takes an investment in teams that know how to tag to bring that number down.

What if you’re a smaller company without that huge dataset? Or what if you are a niche B2B company in which customers and staff use specialized vocabularies?

You still need to understand your customers’ opinions and sentiments, but the cost of partnering with an AI company like Clarabridge can be enormous. More importantly, your analysis and insights won’t be accurate because AI needs a large dataset to train on before it becomes consistent and precise.

Unstructured Data: Your Treasure Trove

So, how exactly are customers’ opinions gathered and analyzed?

When companies record calls or collect open-ended customer feedback, they end up with data outside of “yes/no” answers or numeric rankings. Instead, the data includes customers’ feedback in the form of spoken or written sentences like:

  • “The product was great, but the wait time took forever!”
  • “The staffer on the phone was so rude that I’m never shopping with you again.”

Sadly, many companies don’t know what to do with their unstructured data, so they never bother to analyze it, or they use cheap, out-of-the-box software for “insights.”

While the vast majority of meaningful data is unstructured, a recent article in MIT Management reports that only 18 percent of companies analyze it. 

Unstructured data is a treasure trove of customers’ thoughts on your company, but only if you can analyze and quantify its meaning in an unbiased way. One of our specialties at Interaction Metrics is rigorous Text Analysis – where we glean objective, measurable insights from unstructured data.

Tagging Explained

Tagging is the first step in Text Analysis. It’s the method by which unstructured data is categorized and quantified to reveal meaningful insights. Several analysts work together to build a coding framework iteratively, going through multiple cross-checks. Then, that framework is used to classify comments by their various elements, moving from broad categories down to specific ones.

Like a taxonomy chart that starts with ‘kingdom’ and ends with ‘species,’ a coding framework starts broadly and narrows into specific categories. Tagging typically follows a process like this:

  • Hypothesize: Examine a portion of the comments to understand their meaning and develop an initial tagging framework.
  • Iterate: Refine the tagging framework until it accurately captures the text’s meaning.
  • Define: Build out tag definitions with examples to ensure tags are assigned objectively.
  • Cross-check: When multiple analysts independently assign the same tags, the system is replicable (see the sketch after this list).
  • Quantify: Add statistical functions to quantify tags and analyze the data.
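
As a concrete illustration of the cross-check and quantify steps, here’s a minimal sketch that measures agreement between two analysts with Cohen’s kappa (via scikit-learn) and then tallies tag frequencies. The tags and ratings are hypothetical:

```python
# Minimal sketch of the cross-check and quantify steps.
# Hypothetical tags and data; real coding frameworks have many more categories.
from collections import Counter
from sklearn.metrics import cohen_kappa_score

# The same ten comments, tagged independently by two analysts.
analyst_a = ["wait_time", "rudeness", "pricing", "wait_time", "praise",
             "pricing", "rudeness", "wait_time", "praise", "pricing"]
analyst_b = ["wait_time", "rudeness", "pricing", "praise", "praise",
             "pricing", "rudeness", "wait_time", "praise", "wait_time"]

# Cross-check: agreement corrected for chance. Values near 1.0 mean the
# coding framework is being applied consistently and is replicable.
kappa = cohen_kappa_score(analyst_a, analyst_b)
print(f"Cohen's kappa: {kappa:.2f}")

# Quantify: once agreement is acceptable, tag counts become measurable insights.
print(Counter(analyst_a))
```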

This tagging process is always the first step toward extracting measurable insights from unstructured data.

The advent of AI and machine learning means tagging can be done on a massive scale with enormous datasets.

However, it’s important to remember that even when AI and machine learning are involved, a team of researchers is still required to establish and test the coding framework and ensure it is tagging both the sentiment and theme correctly. The presence of AI doesn’t negate the need for a team of researchers; it just augments their work.

And whether it’s AI plus a research team or a team of researchers working with the tools of classic social science, the tagging process remains the same: qualitative data is labeled in order to be quantified.

Sampling: A Classic Statistical Method

Not every piece of data needs to be tagged to achieve accurate insights, thanks to statistical sampling techniques.

Statistical sampling of a population is the process of selecting a subset of individuals or items from a larger population to make inferences or draw conclusions about the entire population.

Within any given population, there are natural limits to diversity. Once a population is defined, roughly 370 randomly selected individuals will represent a population of about 10,000 quite accurately (at a 95 percent confidence level with a ±5 percent margin of error), and the required number grows only slightly for much larger populations. Think of the ripples on a pond’s surface: you don’t need to measure the height of every single ripple if you measure a random subset of them.
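
Here’s a minimal sketch of where numbers like that come from, using Cochran’s sample-size formula with a finite-population correction (the standard textbook calculation, not any particular vendor’s method). The confidence level, margin of error, and expected variability are inputs your research team should choose explicitly:

```python
# Minimal sketch: required sample size via Cochran's formula,
# with a finite-population correction.
def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Sample size for a given population at the chosen confidence level and margin."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2        # infinite-population sample size
    return round(n0 / (1 + (n0 - 1) / population))   # corrected for finite population

for population in (1_000, 10_000, 1_000_000):
    print(f"{population:>9,} -> {sample_size(population)}")
# Roughly: 1,000 -> 278, 10,000 -> 370, 1,000,000 -> 384
```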

Sampling is critical for Text Analysis. Because doing it correctly is essential to many customer experience initiatives, I’ll describe it in more detail in upcoming blogs.

How to Tell if AI is Not the Right Fit

Adding AI to Text Analysis can be a significant advantage for some companies that regularly accumulate large datasets of unstructured data – but it doesn’t mean that AI is the solution for analyzing all text-based data. And it certainly does not mean that AI is the only way to achieve text analysis.

I recently heard from a customer experience director of a multi-billion-dollar company who had hired a large AI firm to analyze her company’s customer feedback. The AI algorithm required two research teams to train it for her project: five people working for her own company, and another four working for the AI firm. The CX director raved about the insights that AI had produced from her company’s unstructured data.

She didn’t realize that those same insights could be achieved with a team of researchers and classic tagging and sampling. And in this case, her dataset wasn’t large enough for AI to add value. It wasn’t the AI that provided the insights; it was the nine people training the AI.

AI software salespeople often tell Customer Experience Directors they MUST buy an AI-powered text analytics solution to understand their data. But that just isn’t true.

The 5 Scenarios where AI IS and is NOT the Right Fit:

  1. If you are a midsize or smaller organization, it’s unlikely that your dataset is large enough for AI to work profitably. There’s simply not enough material for AI to learn from to tag accurately.
  2. You collect isolated datasets: AI would not make sense if you were only doing a once-yearly tracking study or a one-time survey, given the upfront costs of training it.
  3. Your dataset isn’t continuous: AI makes sense if your data is updated continually, for instance, if you are fielding thousands of customer calls daily. But if your data doesn’t come in a constant stream, then the AI model needs to be retrained with each new batch of data.
  4. You don’t have the budget for AI. The cost for AI-based text analysis adds up quickly, so bypassing AI and only working with a research team to do the tagging can achieve the same results at a fraction of the cost.
  5. You’re a niche company whose customers use many specialized terms that AI won’t understand without thousands of training hours. If your customers use acronyms, codes, and incident numbers, it will be challenging to leverage the power of AI.

In these scenarios, the best way to extract insights reliably is by using a research team adept with tagging techniques — but without adding AI. At best, adding AI would be an unnecessary expense. At worst, it would result in inaccurate analysis.

To summarize, if you have a large, continuous dataset, investing in and training AI makes sense. But if your dataset isn’t large or continuously updated, or if it involves specialized terminology, then it’s just not worth it.

Intelligence: It’s Not Just Facts, It’s Questions Too!

Because AI IS the topic du jour, it’s vital to keep on top of its powers and limitations. Right now, the power of AI is its access to a vast repository of information.

However, one thing AI is not doing is asking smart questions. It might be able to spit out prompts like, “Tell me about your childhood” or “What did you have for breakfast?” but it’s not yet capable of the intellectual curiosity that makes us stop to think and wonder.

  • Socrates’ curiosity prompted the Athenians to examine their own lives.
  • Copernicus’s curiosity led to the heliocentric understanding of the solar system.
  • Newton’s curiosity led to the law of universal gravitation.

These examples set a high bar, but the point is clear. Intelligence is NOT simply the accumulation of facts; it’s equally, if not more, about curiosity and asking good questions.

Will AI ever ask paradigm-shifting questions—the kind that change how we understand the world? Maybe. But for now, it’s humans who ask the questions, and it’s our responsibility to ask the best questions we can.

Applying curiosity to the customer experience is often the difference between passing-grade experiences and those that amaze us. There are countless great questions to ask about your company’s customer experience. Here are a few:

  1. What unstructured data do we generate that we’re failing to examine?
  2. Are we asking our customers open-ended questions that get them to open up in the most honest ways?
  3. Is it possible that we’ve heard from customers with issues and didn’t respond?
  4. Worse yet, might we have processes that discourage customers from giving us honest feedback? (We often see this when companies insist that their surveys be sent from a do-not-reply email address.)

Tools in the AI Toolbox

AI for customer experience might be limited, and it may not be the best match for your niche or B2B company, but that does not mean that AI is useless in the B2B customer experience setting. Far from it. There are lots of simple AI-driven tools that will save you loads of time every day. In addition to large AI platforms, here are a few of the simple AI tools we find useful at Interaction Metrics:

  • Sonix.ai is an inexpensive automated transcription service. It’s much more accurate than most free transcript tools and can add bullet points to summarize conversations. However, it’s not useful if you need to rigorously evaluate a conversation and assign a score. That’s because so much of what’s ‘said’ isn’t stated outright but is conveyed through tone and conversational pauses. AI still isn’t able to account for these nuances and subtleties.
  • Bard and ChatGPT4 can help accumulate information when writing. Of course, that information needs to be thoroughly fact-checked, but these tools can offer a starting point for content. Beware never to ask these tools to source direct quotes. They can spit out made-up quotes and false attributions, and you’ll find yourself in nonsense land with “quotes” from Gandhi on machine learning and Lincoln on email!
  • Canva’s Magic Edit and Magic Eraser allow users to transform images and remove objects from pictures easily.
  • And Grammarly is an essential tool for correcting punctuation, spelling, and grammatical errors. Every email, every blog post, and every piece of writing that the consumer reads should be run through Grammarly before it’s published.

The bottom line is that AI is a tool, and tools need to fit the needs of the user — not the other way around. AI can certainly be the right solution for your company if it’s designed specifically for your industry. That means it’s guided by human researchers who are experienced in your specific field.

The Right Tool for the Job

At Interaction Metrics, our rigorous Text Analysis services offer objective insights for two main use cases:

  1. You need a research team to train and quality-check your AI platform.
  2. You have a smaller or standalone dataset, or one full of technical terms, and partnering with an AI platform is cost prohibitive.

There’s no need to buy a backhoe when a spade can do the job better.

At Interaction Metrics, we know how to measure and analyze both conversations and comments. And we know when and where to apply AI profitably.

So, if you’re using AI and need help calibrating it OR have a dataset that doesn’t lend itself to AI, reach out!

Jump back to The 5 Scenarios where AI IS and is NOT the Right Fit. 

Toward making sense of all that unstructured data that gives us our beautiful and strange world!

To find out about how to save money with AI, get in touch.