Technology

Speech Analytics and Modernizing Agent Performance Measurement with the Customer in Mind

Scott Bakken, MainTrax

Editor’s note: This is the first in a series of articles on analytics trends and technologies by Scott Bakken, CEO of MainTrax, a speech analytics professional services company. Scott is a highly respected industry thought leader and a regular contributor to Contact Center Pipeline. In his role at MainTrax, Scott is continuously learning about leading-edge contact center technologies to ensure that his company can leverage whatever solutions his clients use. This series will focus on emerging technologies, the markets they are designed to influence and the benefits they will bring.


“Who decided that three was the optimum number of times an agent should say the name of a customer?”

Speech analytics powered by machine learning and AI is already making an impact within contact centers, but most of the 30-plus vendors I collaborate with are still figuring out how best to use their own technology. My company has built hundreds of customized quality assurance speech analytics applications for organizations across all verticals, which means my team has reviewed just about as many agent scorecards. But when one of my junior analysts asked me that question, I had to pause. Who did decide that three was the perfect number of times? Did research show that two wasn’t enough? Did four fail to move the needle? Or did someone instinctively decide that three felt right, and suddenly it became the norm?

Don’t get me wrong. Monitoring for quality assurance is essential. Businesses can reduce churn, lower the cost of service, increase sales conversions, and improve upsell/cross-sell rates if they use QA insights to reduce customer effort. Traditional methods of tracking what agents say (or don’t) inform managers about which skills and behaviors need improvement in order to deliver a great experience.

But the criteria often seem so, well, arbitrary. Sometimes I get the impression that our clients really have no idea what ultimately impacts the customer experience; it becomes a case of trial and error. For example, empathy is all the rage these days, but just how much or how little of it affects churn? Or consider call duration, another common metric often tied to customer effort. The general consensus is the shorter the better, but perhaps if agents spent more time proactively solving future issues, they could ward off follow-up calls.

I know I’m not the only one who has some doubts. A survey by CEB (now Gartner) indicated that only 12% of service leaders were confident that their QA process delivered tangible business results.

Developing Agent Impact Scores Based on Actual, Measurable Business Outcomes

As luck would have it, I received a new white paper a few days after my analyst asked her question. It seems industry innovator Tethr has been taking a novel approach to this dilemma: conducting a deep analysis of actual data directly tied to the outcomes a business values most in order to determine what should be measured and how agents should be scored. Tethr claims that “moving beyond traditional quality assurance practices improves the customer experience and aligns agent performance to business outcomes. By measuring the agent performance metrics that actually matter to their business, companies see tangible results.”

That makes sense to me because, here at MainTrax, we practice what we call “reverse engineering”: you first unravel what truly impacts the customer experience, and only then develop your agent performance strategies. According to Tethr’s white paper, research conducted by “The Effortless Experience” team at CEB (now Gartner) revealed that “only 9% of customers who had easy interactions with a company displayed any sign of disloyalty compared with 96% of customers who had difficult experiences.” The key question in my mind: How do you define a difficult experience?

But what I found most compelling is that Tethr uses AI and machine-learning techniques to assess more than 250 variables—as well as combinations and sequences of those variables—to determine a customer effort score, which they refer to as the “Tethr Effort Index” (TEI). For example, TEI factors in what happened before the call, such as whether the customer attempted self-service.
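
Tethr hasn’t published the internals of that model, so purely to make the concept concrete, here is a toy sketch in Python of scoring a call from weighted conversational variables, including pre-call context. Every feature name and weight below is invented for illustration; a real model would learn these relationships from outcome data rather than hand-assign them.

    # Hypothetical sketch only; not Tethr's algorithm. Illustrates the idea
    # of combining many weighted conversational variables into one raw
    # customer-effort number. Features and weights are invented.

    CALL_FEATURES = {
        "tried_self_service_first": 1.0,   # pre-call context
        "repeated_information": 2.0,       # customer restated details twice
        "transfer_count": 1.0,
        "negative_sentiment_phrases": 3.0,
    }

    WEIGHTS = {
        "tried_self_service_first": 0.8,
        "repeated_information": 1.2,
        "transfer_count": 1.5,
        "negative_sentiment_phrases": 0.6,
    }

    def effort_score(features):
        """Combine weighted variables into one raw effort number."""
        return sum(WEIGHTS.get(name, 0.0) * value
                   for name, value in features.items())

    print(round(effort_score(CALL_FEATURES), 2))  # -> 6.5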

But taking it a step further, Tethr then weeds out everything agents can’t control to create an Agent Impact Score (AIS). Unlike traditional QA scorecards and checklists, AIS measures only those agent behaviors and actions that have been proven to move the needle on the business. After all, it doesn’t seem right to hold agents accountable for things out of their hands.
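
To picture that filtering step, here is the same toy example narrowed to an agent-controllable subset of variables. The split below is hypothetical, and effort_score is the invented function from the sketch above, not anything Tethr has published.

    # Hypothetical sketch: an agent-impact style score considers only the
    # variables the agent can control. The controllable set is invented.

    AGENT_CONTROLLABLE = {"repeated_information", "negative_sentiment_phrases"}

    def agent_impact_raw(features):
        """Score only agent-controllable behaviors, ignoring the rest
        (e.g., whether the customer tried self-service before calling)."""
        controllable = {k: v for k, v in features.items()
                        if k in AGENT_CONTROLLABLE}
        return effort_score(controllable)  # reuses the toy scorer above

    print(round(agent_impact_raw(CALL_FEATURES), 2))  # -> 4.2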

Agent Impact Scores reflect how a customer conversation unfolded (who was talking when, indicators of active conversation, etc.), along with the effort drivers customers articulate explicitly or implicitly, whether those arise within the interaction itself or in how the agent handles concerns about other parts of the customer journey.

All of these calculations are then synthesized into an algorithmically derived, objective score on a simple 10-point scale, which makes it easy for leaders and supervisors to compare performance on every call across different contact centers, teams and agents. QA managers can then use an expanded dataset to gain additional insight, provide context to the agent about the entire conversation, and point out where they can improve. Managers can also see how each agent’s score compares with that of their team or other teams.
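
As a back-of-the-envelope illustration of that last step (all numbers fabricated, not Tethr’s method), a raw score could be min-max scaled onto a 10-point range and averaged by team for side-by-side comparison:

    # Hypothetical sketch: map raw scores onto a 10-point scale and roll
    # them up by team. Bounds and call data are fabricated for illustration.
    from collections import defaultdict
    from statistics import mean

    def to_ten_point(raw, worst=0.0, best=8.0):
        """Min-max scale a raw score into [0, 10]; in practice the bounds
        would be calibrated from historical call data."""
        raw = min(max(raw, worst), best)
        return 10.0 * (raw - worst) / (best - worst)

    calls = [("team_a", "agent_1", 3.2),   # (team, agent, raw score)
             ("team_a", "agent_2", 5.8),
             ("team_b", "agent_3", 4.1)]

    by_team = defaultdict(list)
    for team, _agent, raw in calls:
        by_team[team].append(to_ten_point(raw))

    for team, scores in sorted(by_team.items()):
        print(team, round(mean(scores), 2))  # each team's average score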

Enlightening Insights

I called Tethr Chief Product and Research Officer Matt Dixon to learn how AIS compares with traditional forms of speech analytics. As background, Matt led the original Effortless Experience team at CEB (now Gartner) before joining Tethr in 2018, and has continued the research at Tethr, rolling learnings into Tethr’s products, including AIS.

To get things going, I mentioned to him that many of my clients use speech analytics to identify calls in which agents redirect customers to other agents, using phrases such as, “I can’t help you but maybe someone else can.” We agreed that passing the buck in this fashion can really irritate a customer.

But when I mentioned that some QA speech analytics users insist that putting a caller on hold also negatively impacts the customer experience, Matt pushed back, stating: “Scott, yes, these are outcomes best to avoid, but our data shows that many customers actually appreciate being placed on hold if it’s done in the right manner. It can also be warranted when paired with some degree of guidance or advocacy as a way to either circumvent the need for escalation or identify additional issues that might otherwise drive a call-back.”

Another misunderstood metric is overtalk. On the surface, most people, especially Midwesterners like me, consider overtalk a sign of combativeness or rudeness. But according to Matt, the data has revealed that some degree of overtalk can show that an agent is fully engaged. “Overtalk can be viewed as an indicator of active conversation and—up to a certain point—can therefore be forgiven or preferred versus more passive behavior,” he said.
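
In other words, the relationship is non-monotonic rather than a flat penalty. A hedged illustration, with breakpoints invented rather than taken from Tethr’s data:

    # Hypothetical sketch of a non-monotonic overtalk adjustment: mild
    # overtalk reads as engagement, heavy overtalk as combativeness.
    # All breakpoints are invented for illustration.

    def overtalk_adjustment(overtalk_ratio):
        """Score adjustment from the fraction of the call (0.0-1.0) in
        which both parties spoke at once."""
        if overtalk_ratio <= 0.05:   # mild: sign of an active conversation
            return +0.5
        if overtalk_ratio <= 0.15:   # moderate: neither rewarded nor penalized
            return 0.0
        return -1.0                  # heavy: likely interruption or conflict

    print(overtalk_adjustment(0.03), overtalk_adjustment(0.25))  # 0.5 -1.0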

Results Speak Louder Than Words

For those of you who don’t know me, I’m a bit of a skeptic, so I asked Matt for some examples of how his solution has impacted his clients’ businesses.

  • A large hospitality company added AIS to complement its QA scorecard of traditional agent measures and quickly started to learn how agent performance changed when customers were frustrated or upset. These insights were immediately used to drive more constructive, results-driven coaching.
  • A nationally known retailer’s QA team identified agents who were exhibiting empathy most of the time but were confusing customers with their language, netting lower Agent Impact Scores. Not coincidentally, customer frustration rose. Presented with this data, agents began to understand which combinations of behaviors drive poor scores.
  • A Fortune 500 insurance company implemented AIS in its agent onboarding practice because role-playing sessions just didn’t feel real enough. By reviewing their own AIS results, new agents quickly came to understand how certain behaviors correlated with good and bad scores and adjusted their techniques, which resulted in measurable improvement.

Conclusion

As CX and customer service leaders, why do we measure call center agent performance? Is it to check subjective boxes? To improve the quality assurance scores of the reps? Shouldn’t agent performance be aligned directly to business outcomes?

QA scorecards have included the same list of unproven criteria for years. But new technologies that leverage actual data, such as Tethr’s Agent Impact Score, measure what’s actually driving sales conversions, churn reduction, upsells and positive word of mouth. Using these insights, QA can “show agents what good looks like” (or, as we here at MainTrax say, find the “champagne moments”) to help them focus on the behaviors, skills and competencies that actually impact business outcomes.

If your metrics are not linked to actual quality, your business might be suffering from serious but undetected customer experience issues. The customer now really does deserve a spot on the QA scorecard, and that requires modernizing the QA process.


Scott Bakken, Founder and President of MainTrax, is a highly respected independent voice in the speech analytics industry.

– Republished with permission from Contact Center Pipeline, http://www.contactcenterpipeline.com
