Best Practices: Response Rates and their Impact on Data Reliability
You’ve designed and launched your Customer Experience Measurement program, but response rates across your chain and channels are sporadic and inconsistent, and they do not provide a level of statistical reliability you can count on when making important business decisions.
One of the most common questions we get asked pertains to response rates. What is a good response rate, and how can it be improved? Well, the answer to the first question is “it depends”. In a consumer-facing, retail environment, the response rate itself is not always the most critical factor. The key objective is to attain a level of statistical reliability and comparability of results across your chain and over time. Ever wonder how political pollsters were able to accurately predict the outcome of the 2008 US presidential election, for a country of 300 million, with samples of fewer than 3,000? Generally speaking, the laws of statistics indicate that all that is required to achieve statistical reliability is between 30 and 50 responses per reporting unit (or subgroup) for a given reporting period. You can use the following table as a general guideline when evaluating your sample size, depending on the type of analysis you want to conduct:
| Purpose | Minimum # of Respondents (per unit of analysis) |
|---|---|
| To generalize to your target population | 30–50 respondents |
| To perform mean variance testing for cross-comparisons | 30 respondents |
| To perform correlations for linkage analysis | 20 respondents |
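For readers who want to sanity-check the pollster example above, the standard margin-of-error formula for a proportion shows why a few thousand responses suffice regardless of population size. This is a back-of-envelope sketch, assuming a 95% confidence level and maximum variance at p = 0.5 (the most conservative case):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error (as a fraction) for a proportion estimated from n responses.

    z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the
    worst case (maximum variance), so this is a conservative bound.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 3,000 yields roughly a +/-1.8% margin of error --
# and notably, the population size does not appear in the formula.
print(round(100 * margin_of_error(3000), 1))  # 1.8
```

Note that the formula depends only on the sample size, not the population size, which is exactly why a 3,000-person sample works for a country of 300 million.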
But first, what do we mean by “response rate” anyway? There are two aspects of the response rate to consider: the initial sample and the final sample. The “initial sample” size is the number of customers you invite to participate in the survey and from whom you hope to obtain a response. The “final sample” size is the actual number of customers from whom responses were received during the survey. The response rate is simply the percentage of customers in the initial sample from whom a usable response was received.
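The definition above can be expressed as a one-line calculation (a minimal sketch; the function name is our own, not part of any particular survey platform):

```python
def response_rate(initial_sample: int, final_sample: int) -> float:
    """Percentage of invited customers who returned a usable response."""
    if initial_sample <= 0:
        raise ValueError("initial sample must be positive")
    return 100.0 * final_sample / initial_sample

# Example: 10,000 invitations and 50 usable responses -> a 0.5% response rate.
print(response_rate(10_000, 50))  # 0.5
```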
When determining a final sample size for survey research purposes, it is important to draw a large enough initial sample that overall population attitudes and demographics are adequately represented by your final sample. So long as it is representative, the data will be reliable within a known margin of error. Of course, larger is typically better, but beware: a larger final sample is only better when you have ensured that it matches the characteristics of your overall population.
Response Rates in a Retail CEM Environment
What are the implications of response rates for customer experience measurement programs in a retail and consumer-facing setting? In most retail environments, traffic count is high enough to achieve statistical reliability, even if the response rate is relatively low by comparative standards. Consider the following example:
Consider a convenience store with 10,000 unique visitors per month on average. Assuming all customers are invited to participate in the survey, even a meager 0.5% response rate (which many would consider abysmally low), yielding only 50 responses, would actually provide a sufficient amount of data to achieve statistical reliability when examining results in the aggregate, so long as the final sample adequately reflects the characteristics of the overall population. However, the same response rate in an environment with half the traffic would not cross the threshold of statistical reliability.
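The arithmetic in this example can be sketched as follows (the numbers come from the example above; the 30-response floor is the lower bound from the guideline table earlier in this article):

```python
MIN_RESPONSES = 30  # lower bound for statistical reliability (see guideline table)

def expected_responses(monthly_traffic: int, response_rate_pct: float) -> float:
    """Expected number of responses given monthly traffic and a response rate in percent."""
    return monthly_traffic * response_rate_pct / 100.0

# 10,000 visitors at a 0.5% response rate -> 50 responses: reliable in aggregate.
print(expected_responses(10_000, 0.5))  # 50.0

# Half the traffic at the same rate -> 25 responses: below the threshold.
print(expected_responses(5_000, 0.5) >= MIN_RESPONSES)  # False
```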
Tips for Improving Response Rates
If your customer experience measurement program is not providing consistent response rates and statistically reliable data, here are some helpful tips to consider:
Sampling Approach: Is your initial sample large enough? If you’re using a systematic sampling method (for example, every “nth” customer, or every other day), you may not be casting a wide enough net. Although this is less of an issue in very high-traffic environments, we often recommend using a census approach (i.e., inviting all customers). Doing so provides a greater likelihood that the final sample will be reflective of your overall customer base.
Invitation Method: How is customer participation being solicited? In retail, the two most common methods are formal invitation cards and an invitation imprinted on the sales receipt. Invitation cards are generally more visible but do carry a cost. Receipt-printed invitations are cost-effective and reliable, provided receipts are consistently issued. In either case, it’s always good to get the support of store employees, who can inform customers about the survey, further reinforcing the visibility of the program.
Participation Methods: Are you providing the appropriate means to easily and conveniently allow your customers to give you feedback? While online surveys have become pervasive and cost-effective, they are not the preferred method for everyone, especially among older demographics. Consider using, or combining, other participation methods such as Interactive Voice Response (IVR), SMS text messaging, or even in-store kiosks. This ensures that customers, regardless of their age or socio-economic situation, can express their voice.
Prize Incentives: While this can be seen as a form of “bribery”, prizes are often an effective way to increase participation. Prizes can vary, but are most commonly in the form of a drawing for cash or a gift certificate. Both are easy to award and appeal to a broad range of respondents.
Communicate the Objectives: Inform customers on how they will benefit from participating in the survey, above and beyond simply winning a prize, and how your organization will put the findings into action. (Be sure to follow through on your promises.)
Ensure Anonymity & Confidentiality: In some situations, customers may feel uneasy about giving feedback. If respondents know their answers will not be linked to them in any way, they will be more likely to respond and more likely to provide truthful responses.
Keep it Brief: Far too often we see unnecessarily lengthy surveys which scare customers and lead to high drop-out rates. The challenge is to keep the survey as short as possible without compromising the information you need from it. Focus on the key “must have” experience metrics which are the critical success factors for your business and fight the temptation to ask everything under the sun. Also, tell people how much time the survey will take to complete so they know what to expect.
Set a Deadline: The longer customers wait, the less likely they are to participate. Providing a fixed time frame (e.g., within 10 days) from the moment of experience can also motivate customers to respond.
Although higher participation rates are preferable and statistical reliability is a desirable goal, one should not lose sight of the fact that each customer’s voice counts! Even in situations where the threshold of reliability is not achieved, it is important to consider the opinions and comments of the few who have taken the time to provide feedback about their experience. In many situations their feedback can still indicate a problem in a specific location or highlight customer-specific issues which must be dealt with in a timely manner.