The Commoditization of Satisfaction – Part 1

Posted on 17 Jun 2012 in Customer Satisfaction Measurement

I’m pleased that I and my consulting firm, Marketing Metrics, participated in the very early development of customer satisfaction measurement. I’m also pleased that my three books on the topic (Aftermarketing, 1992 & 1995; Improving Your Measurement of Customer Satisfaction, 2001; and Customer Satisfaction Measurement Simplified, 2002) stayed relevant and in print for as long as they did. So indulge me a few recollections on the way to a (hopefully) pertinent conclusion.

In the business climate of the late 1980s and early 1990s, satisfaction surveys were an ‘event’…an all-too-infrequent invitation asking customers to apprise a manufacturer or service provider of how the organization was doing. The event was rare enough that a good proportion of customers would opt into a survey and would provide useful – even eye-opening – information and insights.

Why the Resistance to Satisfaction Measurement?

Mind you, the rarity of satisfaction measurement wasn’t entirely an advantage, except for the cooperation it elicited from customers. On management’s side, the irregularity of measuring satisfaction was pure oversight or the result of naïveté. As a supplier of satisfaction surveys at the time, I can attest to the extreme effort it took to convince most management teams of the day that customer satisfaction information would provide two distinct benefits. First, the information would allow the organization to better align its services and products with the actual needs of its target consumers. And second, the very act of asking would impress customers with the manufacturer’s commitment to serving them better. In both cases, increased loyalty was a likely result.

Management’s reluctance to fund satisfaction surveys was born of several concerns: the costs involved, the fear of being told exactly how well (or, more likely, how poorly) they were performing, the uncertainty of how to respond to the information, and the problem (or obligation) of how to deal with responding customers who were upset or unhappy. (Ironically, most companies today, even those dedicated to customer satisfaction, still underestimate the value of showing customers that their comments and opinions matter, have been reviewed, and are appreciated.)

Changes and Progress

A lot has happened in the last two decades to mitigate most of these concerns. First, and perhaps foremost, data collection over the Internet has become routine, eliminating yesterday’s substantial costs of surveying by mail or telephone. Today’s customers are comfortable with, and experienced in, using a computer to participate in opinion and research surveys. The advent of “do it yourself” software has also facilitated fieldwork; numerous software packages now let organizations easily program and distribute a satisfaction questionnaire. One consequence has been poorly planned and designed questionnaires. Another has been the oversimplification of the scope and content of the conventional satisfaction survey.

No influence has been more to blame for this unfortunate simplification than industry’s rush to embrace the Net Promoter Score™. Its premise – that all one need do is ask how likely a customer would be to recommend a business to others – has had a profound and not entirely constructive influence on satisfaction measurement. Debate over the Net Promoter Score continues.

Responding to Satisfaction Survey Findings

Concerns about whether – and how – to respond to satisfaction information are less widespread than in the past, but they are still prevalent. However, some organizations have begun to incorporate action-planning processes into their satisfaction measurement programs. Ideally, such processes feed customers’ ratings into models that predict how responsive overall satisfaction will be to improvement in each measured element of performance. With such models, improvement initiatives can be selectively directed at those elements whose improvement will most efficiently increase overall satisfaction. With such insight available, it is downright criminal that some management teams still blindly field surveys without any idea or plan of how to act on the insights they gain.
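To make the modeling idea concrete, here is a minimal sketch of one common way such a driver model is built – an ordinary least squares regression of overall satisfaction on the individual performance elements. The ratings, the element labels, and the 1–10 scale below are all invented for illustration; real programs would use larger samples and more careful model specification.

```python
import numpy as np

# Hypothetical survey data: each row is one customer; the first three
# columns are 1-10 ratings of individual performance elements, and the
# last column is the customer's overall satisfaction rating.
ratings = np.array([
    [8, 6, 7, 7],
    [9, 7, 8, 8],
    [5, 4, 6, 5],
    [7, 8, 5, 7],
    [6, 5, 6, 6],
    [9, 8, 9, 9],
    [4, 5, 4, 4],
    [8, 7, 6, 7],
], dtype=float)

labels = ["product quality", "support", "delivery"]  # hypothetical elements
elements = ratings[:, :3]
overall = ratings[:, 3]

# Fit overall = b0 + b1*x1 + b2*x2 + b3*x3 by ordinary least squares.
X = np.column_stack([np.ones(len(overall)), elements])
coefs, *_ = np.linalg.lstsq(X, overall, rcond=None)

# The element with the largest coefficient is the one whose improvement
# the model predicts will lift overall satisfaction most efficiently -
# a natural candidate for the top improvement priority.
priority = labels[int(np.argmax(coefs[1:]))]
print(dict(zip(labels, np.round(coefs[1:], 2))))
print("Top improvement priority:", priority)
```

In practice, the ranked coefficients (or a more robust derived-importance measure) would feed directly into the action-planning step, telling operational teams where effort pays off most.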

Without an action-planning component, a satisfaction measurement program is, of course, nothing more than a sham. Management deludes itself into thinking it is behaving responsibly (even though it may understand the process lacks closure); participating customers are duped into believing that the organization really cares and will improve itself. Unfortunately, both the customers and the organization are losers, because nothing comes of the opinions and suggestions collected.

I’m pleased to say that all of the programs my colleagues (at Marketing Metrics) and I created included an action-planning component. Usually, improvement priorities were distributed at the operational level – exactly where change needed to occur. An additional step our programs usually included was responding to each participating customer, thanking them for their participation and indicating what actions the sponsoring company was prepared to take to improve conditions. (If you’re concerned about the traditional marketing research maxim of not interacting with survey participants, see my article, Customer Research, Not Marketing Research.)

And so today’s ubiquity of satisfaction surveys sadly doesn’t necessarily mean that the practice has been perfected.  In my next installment, I’ll offer you 10 suggestions for a truly effective satisfaction measurement program.
