Thursday, July 11, 2013

Customer Satisfaction - Methodologies



Customer satisfaction, a term frequently used in marketing, is a measure of how well the products and services supplied by a company meet or surpass customer expectations. It is defined as "the number of customers, or percentage of total customers, whose reported experience with a firm, its products, or its services (ratings) exceeds specified satisfaction goals." In a survey of nearly 200 senior marketing managers, 71 percent responded that they found a customer satisfaction metric very useful in managing and monitoring their businesses.
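
The metric in that definition is straightforward to compute from survey responses. Below is a minimal sketch in Python, assuming a hypothetical 1-10 rating scale and a satisfaction goal of 8 or above (both the scale and the goal are illustrative choices, not part of any standard):

```python
# Illustrative only: share of surveyed customers whose rating meets or
# exceeds a (hypothetical) satisfaction goal of 8 on a 1-10 scale.
def satisfaction_rate(ratings, goal=8):
    """Return the percentage of ratings at or above the satisfaction goal."""
    if not ratings:
        return 0.0
    satisfied = sum(1 for r in ratings if r >= goal)
    return 100.0 * satisfied / len(ratings)

ratings = [9, 7, 10, 8, 6, 9, 5, 8]   # example survey responses
print(f"Satisfied customers: {satisfaction_rate(ratings):.1f}%")   # 62.5%
```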

It is seen as a key performance indicator within business and is often part of a Balanced Scorecard. In a competitive marketplace where businesses compete for customers, customer satisfaction is seen as a key differentiator and increasingly has become a key element of business strategy.

"Within organizations, customer satisfaction ratings can have powerful effects. They focus employees on the importance of fulfilling customers’ expectations. Furthermore, when these ratings dip, they warn of problems that can affect sales and profitability. . . . These metrics quantify an important dynamic. When a brand has loyal customers, it gains positive word-of-mouth marketing, which is both free and highly effective."

Therefore, it is essential for businesses to manage customer satisfaction effectively. To do this, firms need reliable and representative measures of satisfaction.

"In researching satisfaction, firms generally ask customers whether their product or service has met or exceeded expectations. Thus, expectations are a key factor behind satisfaction. When customers have high expectations and the reality falls short, they will be disappointed and will likely rate their experience as less than satisfying. For this reason, a luxury resort, for example, might receive a lower satisfaction rating than a budget motel—even though its facilities and service would be deemed superior in 'absolute' terms."

The importance of customer satisfaction diminishes when a firm has greater bargaining power. For example, cell phone plan providers such as AT&T and Verizon operate in an oligopoly, an industry in which only a few suppliers of a product or service exist. Many cell phone contracts therefore contain fine print and provisions that the providers could never get away with if there were, say, a hundred cell phone plan providers, because customer satisfaction would be far too low and customers would easily have the option of leaving for a better contract offer.



The American Customer Satisfaction Index (ACSI) is a scientific standard of customer satisfaction. Academic research has shown that the national ACSI score is a strong predictor of Gross Domestic Product (GDP) growth, and an even stronger predictor of Personal Consumption Expenditure (PCE) growth. On the microeconomic level, academic studies have shown that ACSI data is related to a firm's financial performance in terms of return on investment (ROI), sales, long-term firm value (Tobin's q), cash flow, cash flow volatility, human capital performance, portfolio returns, debt financing, risk, and consumer spending. Increases in ACSI scores have been shown to predict loyalty, word-of-mouth recommendations, and purchase behavior. The ACSI measures customer satisfaction annually for more than 200 companies in 43 industries and 10 economic sectors. In addition to quarterly reports, the ACSI methodology can be applied to private-sector companies and government agencies in order to improve loyalty and purchase intent. ACSI scores have also been calculated by independent researchers, for example for the mobile phone sector, higher education, and electronic mail.
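
The actual ACSI model weights several survey items through a proprietary latent-variable estimation, so the following is only a simplified, equal-weight illustration of how a 0-100 index of this kind can be assembled from 1-10 survey answers; the item names and weights here are assumptions, not the ACSI methodology itself:

```python
# Simplified, illustrative ACSI-style score. NOT the actual ACSI model,
# which derives item weights from a latent-variable (PLS) estimation.
# Assumes three 1-10 survey items per respondent: overall satisfaction,
# expectancy disconfirmation, and comparison with an ideal.
def acsi_style_score(responses, weights=(1/3, 1/3, 1/3)):
    """Average the three items with the given weights and rescale 1-10 to 0-100."""
    scores = []
    for overall, disconfirm, vs_ideal in responses:
        weighted = (overall * weights[0] + disconfirm * weights[1]
                    + vs_ideal * weights[2])
        scores.append((weighted - 1) / 9 * 100)   # map the 1-10 scale onto 0-100
    return sum(scores) / len(scores)

respondents = [(9, 8, 7), (6, 7, 6), (10, 9, 9)]   # made-up answers
print(f"Index: {acsi_style_score(respondents):.1f}")   # about 76.5
```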

The Kano model is a theory of product development and customer satisfaction developed in the 1980s by Professor Noriaki Kano. It classifies customer preferences into five categories: Attractive, One-Dimensional, Must-Be, Indifferent, and Reverse, and offers insight into which product attributes are perceived to be important to customers.
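
In practice, Kano classification is usually done by asking a paired "functional/dysfunctional" question for each attribute and mapping the answer pair through an evaluation table. The sketch below uses one common version of that table; table layouts vary between authors, and this one adds a "Questionable" code for contradictory answer pairs:

```python
# Sketch of Kano classification via a common form of the Kano evaluation table.
# Each attribute is rated twice: how the customer feels if the attribute is
# present (functional) and if it is absent (dysfunctional).
ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]

# Rows = functional answer, columns = dysfunctional answer.
# A=Attractive, O=One-Dimensional, M=Must-Be, I=Indifferent,
# R=Reverse, Q=Questionable (contradictory answers).
KANO_TABLE = [
    ["Q", "A", "A", "A", "O"],   # functional: like
    ["R", "I", "I", "I", "M"],   # functional: must-be
    ["R", "I", "I", "I", "M"],   # functional: neutral
    ["R", "I", "I", "I", "M"],   # functional: live-with
    ["R", "R", "R", "R", "Q"],   # functional: dislike
]

def classify(functional, dysfunctional):
    """Map a pair of answers to a Kano category code."""
    return KANO_TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

print(classify("like", "dislike"))      # O: One-Dimensional
print(classify("neutral", "dislike"))   # M: Must-Be
print(classify("like", "neutral"))      # A: Attractive
```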

SERVQUAL or RATER is a service-quality framework that has been incorporated into customer-satisfaction surveys (e.g., the revised Norwegian Customer Satisfaction Barometer) to indicate the gap between customer expectations and experience.
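
The gap SERVQUAL reports is essentially the perception score minus the expectation score for each of the five RATER dimensions (Reliability, Assurance, Tangibles, Empathy, Responsiveness). A minimal, unweighted sketch with invented numbers follows; full SERVQUAL instruments use multiple items per dimension and optional importance weights:

```python
# SERVQUAL/RATER-style gap scoring sketch: for each dimension, the gap is the
# mean perception score minus the mean expectation score (e.g., on 1-7 scales).
# All numbers here are invented for illustration.
expectations = {"reliability": 6.5, "assurance": 6.0, "tangibles": 5.0,
                "empathy": 5.5, "responsiveness": 6.2}
perceptions  = {"reliability": 5.8, "assurance": 6.1, "tangibles": 5.4,
                "empathy": 4.9, "responsiveness": 5.5}

gaps = {dim: perceptions[dim] - expectations[dim] for dim in expectations}
for dim, gap in gaps.items():
    print(f"{dim:15s} gap = {gap:+.1f}")   # negative = experience fell short

overall_gap = sum(gaps.values()) / len(gaps)
print(f"overall gap = {overall_gap:+.2f}")
```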

J.D. Power and Associates provides another measure of customer satisfaction, known for its top-box approach and automotive industry rankings. Its marketing research consists primarily of consumer surveys, and it is publicly known for the value of its product awards.
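
"Top-box" reporting simply counts the share of respondents who choose the highest point on the scale (or the top two points, in the "top-two-box" variant). The sketch below is a generic illustration of that idea, not J.D. Power's actual scoring:

```python
# Generic "top-box" scoring sketch: the share of respondents choosing the
# highest answer on the scale (a "top-two-box" variant counts the top two).
def top_box(ratings, scale_max=10, boxes=1):
    """Percentage of ratings in the top `boxes` points of the scale."""
    in_box = sum(1 for r in ratings if r > scale_max - boxes)
    return 100.0 * in_box / len(ratings)

ratings = [10, 9, 10, 7, 8, 10, 6, 9]   # example survey responses
print(f"top box:     {top_box(ratings):.1f}%")            # 10s only -> 37.5%
print(f"top two box: {top_box(ratings, boxes=2):.1f}%")   # 9s and 10s -> 62.5%
```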

Other research and consulting firms have customer satisfaction solutions as well. These include A.T. Kearney's Customer Satisfaction Audit process, which incorporates the Stages of Excellence framework and which helps define a company’s status against eight critically identified dimensions.

For B2B customer satisfaction surveys, where the customer base is small, a high response rate is desirable. The American Customer Satisfaction Index (2012) found that response rates for paper-based surveys were around 10%, while response rates for e-surveys (web, WAP, and e-mail) averaged between 5% and 15%, which can provide only a straw poll of customers' opinions. One alternative, the InfoQuest "box", was developed in 1989 by InfoQuest and reports an average response rate of over 70% (2012), based on posing up to 60 questions and statements.
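
The reason a high response rate matters so much for a small B2B customer base can be seen with a rough margin-of-error calculation. The sketch below uses the standard formula for a proportion with a finite population correction; the customer-base size, response rates, and worst-case p = 0.5 are illustrative assumptions:

```python
# Rough margin-of-error sketch for a proportion with the finite population
# correction. Illustrative only; assumes simple random sampling and the
# worst-case proportion p = 0.5.
import math

def margin_of_error(customers, response_rate, p=0.5, z=1.96):
    """Approximate 95% margin of error for a given customer base and response rate."""
    n = round(customers * response_rate)                  # number of responses
    fpc = math.sqrt((customers - n) / (customers - 1))    # finite population correction
    return z * math.sqrt(p * (1 - p) / n) * fpc

base = 300   # hypothetical B2B customer base
for rate in (0.10, 0.15, 0.70):
    print(f"{rate:.0%} response -> +/- {margin_of_error(base, rate):.1%}")
# roughly +/-17%, +/-13%, and +/-4% respectively
```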

In the European Union member states, many methods for measuring impact and satisfaction of e-government services are in use, which the eGovMoNet project sought to compare and harmonize.

These customer satisfaction methodologies have not been independently audited by the Marketing Accountability Standards Board (MASB) according to MMAP (Marketing Metric Audit Protocol).

2 comments:

  1. Secondly, in my travels I ran across quite a number of articles generated by academics and research companies which, though presenting another fairly broad range of results, seemed to average out to a reasonable expectation of something in the 10% to 15% range, probably trending closer to the lower number. I was not able to locate a definitive voice of the industry on the matter, but will continue looking when or as time allows.



    One opinion that I did run into over and over again was that response rates to surveys in general (and of course most references on the web are to paper, internet, or telephone surveys) have ALL been declining over the past ten years. I find that eminently believable given our own history, which has been consistent with that trend. In the mid and late ’90s, we consistently came in at 75% to 80%. By the mid to late ’00s, though, we had dropped to closer to 70%, sometimes less. In the current decade we are so far running closer to 68%, and sometimes less.



    There has been plenty of teeth gnashing and navel gazing around here in recent years as we have repeatedly tried to figure out why we are no longer hitting the kinds of numbers we used to routinely enjoy. We’ve reviewed our operations, our validation procedures, the content of advance notification letters, the callers we use, anything we could think of that might be having an impact on response rates and, with rare exceptions, we found nothing. The simple truth seemed to be that what worked like a charm in 1995 is simply not working as well in 2015.



    There are two factors, however, that are difficult to escape. First, in 1995, customer satisfaction surveys were still a relatively new phenomenon. Companies and people were just starting to understand the value of surveys, and we had the clearly better mousetrap. In the intervening twenty years, however, everybody, and I mean EVERYBODY, has jumped onto the proverbial bandwagon. In 1995, surveys were an interesting novelty, an intriguing idea. Today they are everywhere. We are bombarded by them wherever we turn, often unable to avoid them, even when we’d prefer to. You can’t conduct business online, can’t buy something in a department store, can’t buy a light bulb at Home Depot without being asked to participate in a survey. It’s become a near glut, and like the trees in a forest, after a while you no longer even see them.



    The second factor is the growth of informational incompetence among our clients. In the early ’90s we dealt with generally small companies, often “mom and pop” operations, who generally knew their customers pretty well. Today we are mainly dealing with multi-billion-dollar, multi-national conglomerates who have decrepit CRM systems, who take every informational shortcut they can when assembling a customer list, and who consistently have us trying to validate former employees, former customers, non-decision-makers, and the dearly departed. In other words, a big part of our problem is the theory of garbage in, garbage out in action.

    For me it’s been the elephant in the room for years. No one talks about response rates, and yet, particularly in the B2B arena where the typical organisation has only a few hundred customers, a good, high response rate is a key component of having feedback that is useful rather than merely “interesting”.



    Now take a look at: How to Increase Response Rates – 6 Useful Tips

  2. Customer Satisfaction Survey Response Rates

    Over the past couple of days I’ve been researching the general subject of survey response rates. My normal interest in the subject became elevated when I ran across an article having to do with the ACSI (the American Customer Satisfaction Index).
    What caught my attention in terms of this email was the following quote:



    “The American Customer Satisfaction Index found that response rates for paper-based surveys were around 10% and the response rates for e-surveys (web, wap and e-mail) were averaging between 5% and 15% – which can only provide a straw poll of the customers’ opinions.”



    While it’s not news that electronic survey response rates have been steadily eroding for the past twenty years, I was very surprised to read that, in at least some cases, they were now performing similarly to the long-maligned paper survey. After reading the statement, various additional questions sprang to mind, chief among them: is that response range indicative of the entire industry, or is it a product of something that ACSI is doing? I must have visited 75 websites in search of the answer to that question. The results were decidedly mixed.



    First off, of the thousands of web survey providers out there, I ran across quite a few claiming to have achieved some pretty lofty response rates. Citing supposedly “proprietary techniques”, companies claimed that they had achieved 30%, 50%, even 80% response rates. One company even claimed to have hit 100%, more than once. Most of those assertions, however, seemed to have caveats attached to them, both openly stated and implied; qualifiers like “a survey of a very small body of very closely intertwined customers”. In other words, many of the high response rates were probably based on having sent out ten survey invitations. After discounting those sorts of claims, and after reading between the lines on other sites, it was clear that no one anywhere was claiming that they could consistently hit numbers anywhere near those kinds of totals. In fact, no one anywhere seemed to make any kind of claim at all as to what they could consistently hit. No averages, no medians, no realistic expectations or long-term histories of any kind.
