4 Core Tuning Tips Every Center Needs To Know

Scott Bakken, Founder and President of MainTrax

When a consumer with a southern drawl told his healthcare provider’s call center agent that his body mass index was elevated, it’s no wonder the speech analytics tool didn’t register a hit. Yes, it was programmed to recognize the phrase, “My BMI is high,” but it wasn’t until we fine-tuned it that it recognized the phonetic alternative, “Ma bee emm ah iz ha.”

Speech analytics can improve your contact center’s operational efficiency and profitability, but only if you think of your speech tool as similar to the car you drive. You can’t just fuel it up (by plugging it in) and expect peak performance for months and years on end without regular tune-ups and maintenance. The more you intelligently adjust and attune your speech tool, the better the results.

That bears repeating: A speech tool is not a plug-and-play technology. You may be able to identify predetermined key words and phrases without much trouble, but that information alone will yield only so much value. Superior outcomes require analysts who can blend the science of technology with the art of discerning operational subtleties to produce high-value, actionable business intelligence. “You can’t just pull an administrative person from your call center and say, ‘You’re an analyst now,’” says Matt Matsui, SVP, product strategy and marketing at Calabrio. “You need someone with analytical experience, and preferably someone who’s worked with a speech analytics tool. Our speech tool, which is rather sophisticated, is best operated by a dedicated in-house analyst or outsourced organization.”

That said, no matter how much experience and expertise your analysts possess, optimizing your investment in a speech tool requires testing and tuning—and lots of it. A new user of speech analytics should expect to go through 20 or more iterations of tuning during the first six months of operations.

A finely tuned speech tool provides a level of monitoring unachievable through random sampling of calls between agents and customers. It judiciously extracts data from every call coming into a contact center and translates it into meaningful information. This information can help companies lower operational costs, identify compliance risk, improve agent performance, detect complaint trends, capitalize on upsell opportunities, reduce customer churn and mine rich new veins of business intelligence.

Here are four essential tuning tips that every contact center needs to know.

Conduct a Content Audit

A content audit—listening to and analyzing hundreds of recorded agent-customer conversations—is an exacting, time-consuming process but an essential step in any well-planned speech analytics initiative.

Perhaps the greatest mistake made by contact centers is assuming that they can accurately guess what words and phrases customers actually use when talking to agents. “Many companies build a basic library using the most obvious key words, then flip the switch and anticipate that the speech engine will produce quick and accurate insights,” says Rebecca Gibson, contact center solutions consultant at Interactive Intelligence. “Unfortunately, without first going through the audit, verification and tuning process, the best outcome you can hope for is mediocrity.”

A content audit helps take the guesswork out of contact center operations by providing insight into the root causes of calls, which enables you to pinpoint the keywords and phrases that determine customer intent and predict outcomes. Learning how often particular business issues surface and how often related phrases are actually uttered provides a benchmark from which to start tuning the speech engine.
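To make the benchmarking idea concrete, here is a minimal Python sketch that tallies how often candidate phrases turn up in a sample of audited calls. The transcripts and phrases are invented for illustration, and a real audit works from recorded audio rather than tidy text, but the counts it produces are the kind of baseline you want before tuning the speech engine.

```python
from collections import Counter

# Illustrative only: a real content audit reviews recorded audio, but the
# benchmarking arithmetic is the same. Tally how often each candidate phrase
# appears across a sample of audited calls.

audited_transcripts = [
    "i want to cancel my service because the bill keeps going up",
    "can you help me set up my account i am getting very frustrated",
    "my bmi is high and my claim was denied",
    "i would like to stop my subscription effective today",
]

candidate_phrases = [
    "cancel my service",
    "stop my subscription",
    "set up my account",
    "my bmi is high",
]

benchmark = Counter()
for transcript in audited_transcripts:
    for phrase in candidate_phrases:
        if phrase in transcript:
            benchmark[phrase] += 1

for phrase, count in benchmark.most_common():
    print(f"'{phrase}' heard in {count} of {len(audited_transcripts)} audited calls")
```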

A content audit can also expose potentially problematic issues such as deviations in language, dialects and cultures among customers. What’s more, learning why customers call and what their most common issues are empowers contact center management to train agents to better handle those issues.

Savvy speech analysts don’t view the initial content audit as a one-and-done exercise. It’s advisable to audit a sizable sampling of calls from time to time to discern if call flow has changed and whether keywords or phrases should be added to, removed from or modified in the speech library. It becomes especially important to conduct a content audit whenever a new business objective is introduced.

MY AGENT SAID WHAT??

BELIEVE IT OR NOT, THE FOLLOWING STATEMENTS WERE ACTUALLY UTTERED BY AGENTS.

“Our system has you as a man, that’s why it denied the claim for the hysterectomy as medically unnecessary.”

“We are not going to lease you equipment if we don’t know what your credit check is… and if you are really going to pay us, pay your bills and all of that.”

“You are not familiar with our product? WOW, I just can’t believe that.”

Compile a Library of Keywords and Phrases

The quality of your speech library will make or break your speech analytics initiative. That’s why it’s so important to conduct a content audit before “flipping the switch” on your speech tool and going live. The knowledge and insights produced by an audit will enable you to fortify your speech library with the most appropriate and relevant words, phrases and categories.

Conducting a content audit enables you to better understand the real “voice of the customer.” This understanding inevitably leads to the discovery of unanticipated high-value phrases that are similar to key phrases already in your speech library, but different enough to warrant inclusion on their own. For instance, a content audit may reveal that customers are more likely to say “terminate my service,” “stop my subscription” or “get rid of my subscription” than expected phrases like “cancel my service” or “cancel my subscription.” If they go undetected, these minor differences can significantly distort results.
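One way to keep those variants from slipping through is to group them under the same business intent in the library. The short Python sketch below is a simplified illustration of that idea; the category name, the phrase list and the matching logic are assumptions for the example, not a depiction of any particular vendor’s tool.

```python
# A simplified, illustrative speech library: every variant surfaced by the
# content audit maps back to the same business intent, so none of them slips
# through just because the wording differs from the "expected" phrase.

speech_library = {
    "cancellation_intent": [
        "cancel my service",
        "cancel my subscription",
        "terminate my service",       # variants uncovered by the audit
        "stop my subscription",
        "get rid of my subscription",
    ],
}

def categorize(utterance):
    """Return every category whose phrases appear in the utterance."""
    utterance = utterance.lower()
    return [category
            for category, phrases in speech_library.items()
            if any(phrase in utterance for phrase in phrases)]

print(categorize("I'd like to terminate my service at the end of the month"))
# -> ['cancellation_intent']
```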

Unless you’ve got a crackerjack analyst on staff who has an expert understanding of keyword library development and is capable of performing a rigorous content audit, it may be wise to solicit the help of a professional services firm to ensure that your speech library will produce optimal results. We’ve seen contact centers end up with do-it-yourself detection accuracies of less than 50% and as low as 10%. In contrast, a professional consultant can often raise the detection accuracy of your speech tool to as high as 70% after the first tuning.

When building your speech library, keep these three considerations in mind:

LENGTH OF PHRASE
Individual words are harder for a speech engine to pick up than phrases made up of multiple words. On the other hand, a phrase shouldn’t be too long. For instance, attempting to identify the following phrase only when it is uttered in its entirety will yield few, if any, hits:

“I’m trying to set up my account but I’m having trouble figuring out how it works and I’m getting very frustrated.”

Run-on sentences like this are called “blue-moon phrases” because they’re only uttered once in a… well, you know. You will have more success identifying important customer statements by breaking a blue-moon phrase into smaller pieces, such as “set up my account,” “having trouble figuring out” and “getting very frustrated.”
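A rough Python sketch of that approach, using fragments of the sentence above: rather than hunting for the full run-on, the engine (or a post-processing step) flags a call when any of the shorter pieces is heard. The matching here is plain substring search on text, purely to illustrate the principle.

```python
# Illustrative only: split a blue-moon phrase into shorter fragments and flag
# a call if any fragment is detected. Callers rarely repeat the full run-on
# sentence verbatim, but they often say one of its pieces.

fragments = [
    "set up my account",
    "having trouble figuring out",
    "getting very frustrated",
]

def fragment_hits(transcript):
    transcript = transcript.lower()
    return [f for f in fragments if f in transcript]

print(fragment_hits("i just can't set up my account and i'm getting very frustrated"))
# -> ['set up my account', 'getting very frustrated']
```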

PHRASE CONSTRUCTION

If you’re operating a phonetic speech engine (as opposed to a speech-to-text engine, which produces a transcript that is then searched using text-mining methodologies), it will be easier to detect phrases containing hard consonants and five to seven syllables (the equivalent of 15 to 18 phonemes; North American English is built from roughly 44 of these distinct sound fragments).
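As a rough illustration of those guidelines, the Python sketch below screens candidate phrases by estimating syllables (counting vowel groups) and checking for hard consonants. Both the syllable estimator and the set of letters treated as hard consonants are simplifications invented for this example; a real phonetic engine works on sound, not spelling.

```python
import re

# A rough, illustrative screen for candidate phrases on a phonetic engine.
# Syllables are estimated by counting groups of consecutive vowels, and the
# "hard consonant" check looks only at spelling; both are simplifications.

HARD_CONSONANTS = set("bdgkpt")

def estimated_syllables(phrase):
    return len(re.findall(r"[aeiouy]+", phrase.lower()))

def looks_detectable(phrase):
    syllables = estimated_syllables(phrase)
    has_hard_consonant = any(ch in HARD_CONSONANTS for ch in phrase.lower())
    return 5 <= syllables <= 7 and has_hard_consonant

for candidate in ["cancel my subscription", "money back guarantee", "oh no"]:
    print(f"'{candidate}': about {estimated_syllables(candidate)} syllables, "
          f"passes screen: {looks_detectable(candidate)}")
```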

Sample phrases containing hard consonants:

Sample phrases not containing hard consonants:

AVAILABLE BANDWIDTH TO REVIEW RESULTS

Generating a large number of hits will only be useful if you have the resources to review, interpret and analyze them all. Lack of follow-up bandwidth may require you to be more judicious in your selection of keywords and phrases (or to adjust the confidence thresholds for those phrases as outlined below) so that the speech engine isolates only those phrases that have the highest business value.

Define and Refine Confidence Thresholds

A confidence threshold is a setting (most effective at the phrase level) that allows the user to filter reported hits based on how certain the engine is that the hit is accurate.

Think of setting confidence levels as a pyramid: The higher up the pyramid you go, the more likely it is that the hits you generate will be accurate, and only accurate. Figure 1 shows the results of searching for the phrase, “reduce taxes”; at a confidence level of 85, only one hit was detected, but it was accurate. As you can see in Figure 2, there is also value in setting a confidence level near the bottom of the pyramid.

Low-confidence thresholds instruct the speech engine to be less discerning, increasing the odds that high-value phrases (such as “I will sue you”) will be detected. Lower thresholds are typically set when an organization is seeking to capture high-value information and has the resources to review a potentially large number of hits. The trade-off: more false positives (a false alarm that occurs when a search mistakenly identifies a hit on a keyword or phrase that wasn’t actually spoken).

FIGURE 1: High-Confidence Thresholds Increase the Odds of Detection Accuracy

High-confidence thresholds instruct the speech engine to be more discerning, increasing the odds of detection accuracy. Higher thresholds are typically set when the user wants to collect a sampling of calls with a high likelihood of accurate hits, rather than trying to catch every instance. The trade-off: more false negatives (when a search fails to generate a hit on a key word or phrase that was actually spoken).

Not all phrases are created equal; they don’t all behave the same way when confidence levels are raised and lowered. In many cases, an integral aspect of the tuning process is determining at what point detection accuracy for a particular phrase drops off the table. For instance, a confidence level of 20 usually generates exceptionally poor results. But for one client, the phrase “money-back guarantee” held its own with nearly perfect detection accuracy all the way down to the 20 range.

FIGURE 2: Low-Confidence Thresholds Increase the Odds That Phrases Will Be Detected

Ideally, multiple iterations of tuning and testing will enable you to arrive at a confidence level that is not so high that many high-value phrases will go undetected, but not so low that you’ll generate too many hits on meaningless phrases. The acceptable level of tension between false positives and false negatives depends on the value of the targeted phrase.
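The Python sketch below shows what per-phrase thresholds look like in practice: each hit carries the engine’s confidence score, and a phrase is only reported when its score clears the threshold assigned to that phrase. The Hit structure, the phrases and the numbers are illustrative assumptions, but the trade-off they encode is the one described above, where a low threshold on a high-value phrase buys detection at the cost of more false positives to review.

```python
from dataclasses import dataclass

@dataclass
class Hit:
    call_id: str
    phrase: str
    confidence: int  # the engine's certainty that the phrase was spoken, 0-100

# Illustrative per-phrase thresholds: cast a wide net for high-value phrases,
# demand near-certainty for routine ones.
thresholds = {
    "i will sue you": 40,        # high value: accept more false positives
    "reduce taxes": 85,          # routine: report only near-certain hits
    "money back guarantee": 20,  # stayed accurate at low confidence for one client
}
DEFAULT_THRESHOLD = 60

raw_hits = [
    Hit("call-001", "i will sue you", 48),
    Hit("call-002", "reduce taxes", 70),
    Hit("call-003", "reduce taxes", 91),
    Hit("call-004", "money back guarantee", 25),
]

reported = [h for h in raw_hits
            if h.confidence >= thresholds.get(h.phrase, DEFAULT_THRESHOLD)]

for hit in reported:
    print(f"{hit.call_id}: '{hit.phrase}' reported at confidence {hit.confidence}")
```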

Perform a Detection Analysis

Unlike a content audit, in which hundreds of actual calls are listened to from start to finish, a detection analysis reviews all generated hits to determine whether they were accurate and meaningful.

A detection analysis enables you to verify that the hits the speech engine generates are both accurate and meaningful, and to make targeted adjustments when they are not.

At the heart of detection analysis is the relationship between content and context. One client initially believed that “I’m not sure” was a classic example of an agent “lack of knowledge” phrase. But we discovered during a detection analysis that almost every agent used that expression before diagnosing the caller’s issue.

For example:
Caller: “Here’s my problem… can you fix it?”
Agent: “Well, I’m not sure… let’s take a look.”

However, we were able to determine that adding qualifiers like “quite” or “exactly”—as in “I’m not quite sure” and “I’m not exactly sure”—changed the context of the phrase, improved detection and produced business value.

A detection analysis is an essential element of quality control. Results allow you to make adjustments designed to increase efficiency, improve detection accuracy and extract additional business intelligence.
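The bookkeeping behind that quality control can be very simple. In the Python sketch below, each generated hit has been reviewed by a person and marked as meaningful or not, and the results are summarized per phrase; the reviewed records are invented solely to show the arithmetic, echoing the “I’m not sure” versus “I’m not quite sure” example above.

```python
from collections import defaultdict

# Illustrative detection-analysis tally: every generated hit was reviewed by
# a person and marked True if it was accurate and meaningful in context.
reviewed_hits = [
    ("i'm not sure", True), ("i'm not sure", False), ("i'm not sure", False),
    ("i'm not quite sure", True), ("i'm not quite sure", True),
    ("i'm not exactly sure", True),
]

meaningful = defaultdict(int)
total = defaultdict(int)
for phrase, is_meaningful in reviewed_hits:
    total[phrase] += 1
    meaningful[phrase] += is_meaningful

for phrase in total:
    pct = 100 * meaningful[phrase] / total[phrase]
    print(f"'{phrase}': {meaningful[phrase]}/{total[phrase]} meaningful hits ({pct:.0f}%)")
```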


The Learning Never Ends
You’ve conducted a successful content audit, built a customized speech library, defined and refined confidence thresholds and performed a detection analysis to validate and improve results. Congratulations. Now do it all over again. And don’t ever stop.

Without fail, users of speech analytics discover that the learning never ends and the benefits keep accruing. “In a series of post-deployment ‘best practices and knowledge transfer’ sessions, we learned more about the art and science of speech analytics than we did about the features and functions,” says Maggie Wells, director of program management at Wellpoint. “We’re looking forward to seeing how speech analytics can continue to improve the quality of service we provide to our customers.”

– Reprinted with permission from Contact Center Pipeline, www.contactcenterpipeline.com
