
GASP! There’s No GAAP For Survey Data

Fred Van Bennekom

In both the public and private sectors, there’s a continually growing push to generate measures of service effectiveness to balance measures of efficiency. Measuring service effectiveness requires capturing input from our customers through surveys or other research methods. Yet interestingly, the financial academy, the official scorekeepers for our organizations, virtually ignores all customer-centric measures.

At a service academic conference I attended in August 2013, Professor Chris Hart presented his analysis of the major financial textbooks, showing that essentially none discussed customer measures. Finance examines numbers that indicate changes in cost structure (e.g., inventory turns), but it doesn’t track numbers that could be predictors of top-line revenue changes. Thus, customer feedback practitioners are free to develop their own practices since the traditional scorekeepers have abdicated any involvement.

This is good news coupled with bad news.

Financial scorekeepers are trained to follow the accounting standards embodied in GAAP (Generally Accepted Accounting Principles). However, no standards for customer feedback or surveying practices exist—only conventions. And those conventions are very general, with little recognition that varying research practices will affect the data collected. In my Survey Workshops, I am frequently asked, “What’s standard practice?” for some aspect of surveying. My response, which is never warmly received, is that we have conventions, but not standards.

Why do we like standards? They’re liberating! They’re a set of prescribed rules to follow, rules developed for some good reason. The rules allow us to draw on someone else’s expertise, and we can’t be accused of bad practice if we follow some standard. It’s like the days when IT managers bought IBM hardware not because it was necessarily the best, but because no one would question the decision.

This lack of standards creates a danger zone. In discussion groups on LinkedIn and elsewhere, you’ll see calls for various survey measures like Net Promoter Score® to be “industry metrics,” and some survey metrics have taken on critical importance, such as the Health and Human Services (HHS) patient satisfaction survey that impacts hospital reimbursements from the U.S. Government. Yet I found that vital survey to have serious flaws in its design (see http://bit.ly/1f2r3kj).

But if we’re going to have industry metrics for customer effectiveness measures, then we must have standards to ensure apples-to-apples comparisons across organizations’ surveys and other research methods. Conventions simply won’t cut it as they lack universal application.

We have GAAP for financial data. The result is that revenue, expenses and profit are highly comparable across companies. (True, companies can massage financial data, but only so far, and the accounting methods practiced are scrutinized by market watchers.) But we don’t have GASP—Generally Accepted Survey Practices.

Yet there’s a belief, especially in the C-suite, that “survey says” The Truth. Relative Truth perhaps, but not Absolute Truth. And there’s a belief that we can compare our customer satisfaction rating to published customer satisfaction ratings without even considering the differences in surveying practices.

No, I’m not recommending the creation of some new governmental regulatory body, even if I could be the reigning dictator on the committee. Heavens, no! But we who generate, analyze and apply survey data to organizational decisions should understand the shortcomings of customer feedback data, especially when doing cross-organization comparisons. Without that understanding, we can’t interpret and use the data appropriately.

A whole host of factors can—and will!—affect the survey scores provided by respondents. Some factors affect the scores that each respondent provides through what’s called instrumentation bias—the impact of the design of the questionnaire—while other factors affect who responds at all. The latter are administration biases.
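To make the who-responds effect concrete, here is a minimal Python sketch. The numbers are invented purely for illustration (they are not drawn from any study): the same customer base, surveyed under two administration approaches that reach different respondents, yields noticeably different average scores even though nothing about the customers changed.

```python
# Purely illustrative simulation (hypothetical numbers): the same customer base,
# surveyed two ways, yields different averages simply because of who responds.
import random

random.seed(42)

# Assume a customer base whose "true" satisfaction is uniform on a 1-5 scale.
customers = [random.randint(1, 5) for _ in range(10_000)]

def observed_mean(population, response_prob):
    """Average score among those who actually respond.
    `response_prob` maps a customer's true score to their chance of responding."""
    responses = [s for s in population if random.random() < response_prob(s)]
    return sum(responses) / len(responses)

# Approach A: response likelihood is flat across satisfaction levels.
flat = lambda s: 0.30
# Approach B: unhappy customers are far more motivated to respond.
skewed = lambda s: 0.50 if s <= 2 else 0.15

print(f"True mean:                 {sum(customers) / len(customers):.2f}")
print(f"Observed mean, approach A: {observed_mean(customers, flat):.2f}")
print(f"Observed mean, approach B: {observed_mean(customers, skewed):.2f}")
```

Approach B reports a markedly lower average than approach A, yet the underlying population is identical—that gap is administration bias at work.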

At the previously mentioned service conference, I presented a paper demonstrating the dramatic impact of survey mode—telephone versus webform—on the scores received. But that’s just one of a host of factors that can affect survey data. Here I’ll focus on survey data, but the general issue holds true for other customer effectiveness measurement techniques.

The following is a list of design and execution factors that can affect survey scores:

Questionnaire Design Practices

  • Question type used to generate the data
  • Anchor set used for scalar questions
  • Length of the scale
  • Direction of the scale
  • How the scale is presented to the respondent
  • Question wording
  • Length of survey instrument
  • Sequencing of questions and sections
  • Definition of key terms used in the questionnaire
  • Wording of the introduction
  • Wording of the instructions

Administrative Practices

  • Target population
  • Administration mode
  • Sampling procedure
  • Use of reminders
  • Use of incentives
  • Wording of invitations
  • Soliciting organization

Data Analysis

  • While the choice of summary statistic doesn’t affect the data we collect, it does affect the legitimacy of any comparisons (see the sketch below)
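To illustrate why the summary statistic matters, here is a small Python sketch using invented response data: summarized by the mean, one organization looks stronger; summarized by a top-box percentage or a net score, the ranking flips.

```python
# Illustrative only: the same raw responses summarized three common ways.
# The response lists are made up to show how the choice of statistic can
# reverse a comparison between two organizations.
from statistics import mean

org_a = [5, 5, 5, 5, 1, 1, 1, 3, 3, 3]   # polarized responses on a 1-5 scale
org_b = [4, 4, 4, 4, 4, 3, 3, 3, 3, 3]   # clustered responses on the same scale

def top_box(scores, box=5):
    """Percent of respondents giving the top rating."""
    return 100 * sum(1 for s in scores if s >= box) / len(scores)

def net_score(scores):
    """A net 'promoter-style' figure: % top box minus % bottom two boxes."""
    promoters = sum(1 for s in scores if s == 5)
    detractors = sum(1 for s in scores if s <= 2)
    return 100 * (promoters - detractors) / len(scores)

for name, scores in (("Org A", org_a), ("Org B", org_b)):
    print(f"{name}: mean={mean(scores):.2f}  "
          f"top-box={top_box(scores):.0f}%  net={net_score(scores):+.0f}")
```

Org B wins on the mean, Org A wins on top-box and net score. Unless two organizations report the same statistic computed from comparably collected data, the "comparison" is no comparison at all.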

Cross-organizational comparisons are, in fact, fraught with danger. What is valid is trending feedback data over time within an organization, since the continuity of survey practices likely eliminates the confounding factors listed above. Comparisons across organizations are only valid if the same instrument is delivered with the same administrative practices. Without that, any comparisons based on survey data are spurious at best, misleading at worst.

Next time you hear some company or agency make a claim about a company’s customer satisfaction levels, don’t just accept it as Truth. Ask for details about how they conduct their measurement program. If their survey practices differ from yours, be leery of making comparisons.

Fred will be running his “Survey Design & Data Analysis” workshop in Dubai from May 12-14, 2015. Visit http://www.insights-me.com/sdda for more details.

– Reprinted with permission from Contact Center Pipeline, http://www.contactcenterpipeline.com 
