Tuesday, March 23, 2010

Bad Survey Process Can Lead To Results Manipulation, Not Improved Satisfaction

Recently I was reading satisfaction survey results from Plan Sponsor Magazine, one of the "must read" periodicals in the Retirement Benefits industry. Each year, Plan Sponsor publishes a survey of customer satisfaction with retirement service providers. Providers are given scores and "trophy" cups for top marks. These scores are highly publicized in the Retirement industry, with top scorers using them in their sales and marketing literature. Conversely, poor results have to be explained away in sales presentations and client meetings.

All in all, a useful annual satisfaction benchmark.....right? Well, not exactly.

When you review the results, you are struck by the top scorers. They are not the firms you would expect, not the companies generally viewed by experts as the leaders in service. And in the categories where you would expect certain companies to excel, their scores are lower than those of other competitors.

So what is going on here? Read the methodology, and you can figure out what is happening. From Plan Sponsor's website:

Between late June and late August 2009, approximately 35,000 survey questionnaires were sent to defined contribution (DC) plan sponsors from the PLANSPONSOR magazine database, as well as to client lists supplied by DC providers; 5,635 total usable responses were received by the close of the survey on September 1, 2009.....Quartiles for participant services and sponsor services were calculated in each asset category in which a provider qualified for a rating. The score for participant services in each provider’s listing comprises the cumulative average of 13 categories, and sponsor services comprises 10 categories. The percentage score next to the quartile for each provider in each asset category represents the cumulative score out of 100%. For example, an average score of 6.82 out of a possible 7.00 for participant services in the small market would translate to 97.4%.
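The percentage conversion described in that excerpt is simple arithmetic: the average category rating divided by the 7-point maximum. A quick sketch of the calculation (the function name is mine, not Plan Sponsor's):

```python
def to_percent(avg_rating, max_rating=7.00):
    # Cumulative score expressed as a percentage of the maximum possible rating
    return round(avg_rating / max_rating * 100, 1)

# The small-market example from the methodology: 6.82 out of a possible 7.00
print(to_percent(6.82))  # 97.4
```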

There are several problems with this methodology, and a few astute companies that understand the process are taking advantage.

1. The surveys have an appearance of randomness, but that is not the case. Surveys are sent both to magazine subscribers (whose names Plan Sponsor communicates to providers) and to lists of other sponsors supplied by the providers themselves.

It is not hard to see how this can be manipulated. By targeting communications at the sponsors who receive surveys, providers can sway the results. Since the subscription base likely doesn't change much from year to year, the pool of respondents is essentially known.

Where providers can truly impact the results is in supplying Plan Sponsor with the names and addresses of clients to survey. Which clients do you think providers are supplying? Only happy ones, of course! And those clients can also be targeted with communications alerting them to the survey, so they are more likely to respond.

Those providers that don't work the survey in this way are at a complete disadvantage. Those that do get better scores.

2. The aggregate category ratings, on a scale from 1 to 7, are turned into a "score." That score is ranked against other firms from high to low and then sorted into quartiles. The major problem is that this tells you nothing about how well a provider is satisfying its customers in absolute terms. If every score is in the 90% satisfaction range, the firms will still be sorted into quartiles, giving the impression that the 3rd and 4th quartiles are failing when they are in fact largely satisfying their clients.
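A quick sketch makes the quartile problem concrete. Every provider name and score below is invented for illustration, and every score is comfortably above 90% satisfaction; the ranking still manufactures a "bottom quartile":

```python
# Hypothetical cumulative scores, all above 90% satisfaction
scores = {
    "Provider A": 96.8, "Provider B": 96.1, "Provider C": 95.4,
    "Provider D": 94.7, "Provider E": 93.9, "Provider F": 93.2,
    "Provider G": 92.5, "Provider H": 90.3,
}

# Rank from high to low, then split into quartiles as the survey does
ranked = sorted(scores, key=scores.get, reverse=True)
group = len(ranked) // 4
for i, name in enumerate(ranked):
    print(f"{name}: {scores[name]}% -> quartile {i // group + 1}")
```

Provider H lands in the 4th quartile despite a 90.3% satisfaction score that most businesses would envy.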

3. When compiling overall scores, each category is weighted equally. There is no attempt to determine the "value" of each category to the client or to the industry as a whole. For example, if the key reason to select a provider is its consulting capabilities, there is no way to give that category additional weight, so the average score may not represent the client's true feelings about the provider.
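A sketch of how equal weighting can distort the result. The category names, ratings, and weights below are all hypothetical; the point is that a sponsor who hired a provider mainly for consulting rates that provider noticeably higher once the weights reflect what actually matters to them:

```python
# Hypothetical category ratings on the survey's 1-to-7 scale
ratings = {"consulting": 7.0, "statements": 5.0, "website": 5.0, "call_center": 5.0}

# Equal weighting, as the survey computes it
equal_avg = sum(ratings.values()) / len(ratings)

# Weights reflecting why this sponsor chose the provider (hypothetical)
weights = {"consulting": 0.7, "statements": 0.1, "website": 0.1, "call_center": 0.1}
weighted_avg = sum(ratings[c] * weights[c] for c in ratings)

print(f"equal: {equal_avg:.2f}, weighted: {weighted_avg:.2f}")
```

Under equal weighting the provider averages 5.50; weighted by what the sponsor actually values, the average is 6.40 on the same 7-point scale.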

It is telling that one of the "satisfaction leaders" in the survey is also a cost leader in the industry. This is counterintuitive, since the level of service provided is clearly not on par with that of other providers. However, if companies don't expect premium service, they will be completely satisfied with adequate service, as long as it comes with a low price tag. Think of the satisfaction of loyal Walmart customers versus Bloomingdale's customers.

My View

Plan Sponsor Magazine surveys carry real weight in the Retirement industry. Yet the methodology is not tight enough to serve as a measure of true satisfaction; it cannot be viewed as perfect knowledge. Companies need to find other ways to secure objective data that provide real insight: understand what is important to their clients, measure how they are doing in those areas, and improve those results.

Is this an indictment of the firms that understand the Plan Sponsor process and take advantage of it to improve their standing? No way. They are smart companies seeking out any advantage they can get.

But are they also the companies that satisfy their customers best, as Plan Sponsor declares? There are enough holes in the process to seriously doubt that conclusion.
