I know you get asked to take customer feedback surveys all the time, because I do. Even the shortest business trip results in at least 5 surveys: Delta wants to know about your flight; Hilton wants to know about your stay; Enterprise asks about your car rental and on and on.
But the most prevalent of all surveys is that one at the bottom of your sales receipt, the request from Apple, Kohl’s, Nordstrom, Target and virtually all retailers to “tell us how we did.”
What We Did
So last fall, two of my analysts and I set out to measure the quality of those point-of-purchase surveys (Point-of-Purchase Survey Study). We thought it would be interesting to know what level of science and engagement the nation’s largest retailers bring to their surveys. The surveys say they want to know about our experiences as a customer, but do they really want to know? Or, is this just PR spin?
What We Found
Well, friends, unfortunately… it’s PR spin. The nation’s largest retailers run tragically poor customer feedback surveys. They’re bad for customers and bad for companies: a waste of time and money all the way around.
So what are these big retailers, like Amazon, Apple, Walmart, Kohl’s, and Target, doing wrong? Are there lessons to be learned from their mistakes? And how can you make your survey better than those of some of the biggest companies in the world?
Two Big Problems
Let’s look at the two main problems:
- First, the vast majority of customer feedback surveys were riddled with biases that skew the data.
- And second, most of the surveys failed to show any real regard for their customers’ actual experiences.
What’s Wrong With Bias?
Let’s look at the problem of bias first. We found five types of bias in these surveys, each undermining data accuracy in a different way.
- Leading Questions—Known within psychology as priming, leading questions are phrased to elicit a particular response. Ace Hardware asked: “How satisfied were you with the speed of our checkout?” This question assumes the customer was at least somewhat satisfied.
- Forced Wording—The Gap asked customers: “Rate your agreement with the following statement: The look and feel of the store environment was very appealing.” “Appealing” is an odd word choice. Customers are more likely to think “it’s a mess,” “that was fun,” or “it’s well-organized.” Furthermore, the question seems to carry an agenda: Gap executives want to hear that their store environment was very appealing.
- Faulty Scales—Walmart asked its questions on a 1-10 scale. This scale introduces two problems:
- First, there is an even number of selections and therefore no true midpoint: selecting a 5 implies a lower-than-neutral score, while selecting a 6 implies a higher-than-neutral score.
- Second, there is no zero, and some experiences deserve exactly that: a zero. Not sort of poor, plain old bad.
- Double-Barreled Questions—This is where one question asks about multiple topics, usually two or more questions compressed into one. Lowe’s asked customers: “Were you consistently greeted and acknowledged in a genuine and friendly manner by Lowe’s associates throughout the store?” Here, we see four questions in one. Yikes! To improve, Lowe’s should divide this question into four, or better yet, consider what they really want to know and devise a clearer way to ask it.
- Question Relevance—Ace, Gap, JC Penney, and O’Reilly Automotive all asked about their associates’ product knowledge (e.g. “Please rate your satisfaction with the employee’s knowledge of parts and products”)—and none of these retailers offered an N/A option. It’s likely that a large portion of shoppers never asked an associate a question and so had no way of providing accurate feedback.
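For readers who like to see the midpoint problem worked out, here is a minimal sketch in Python (a hypothetical helper for illustration, not part of our study): a scale with an even number of options, like 1-10, has no middle choice, while an 11-point 0-10 scale does.

```python
def midpoint(options):
    """Return the middle option of a rating scale, or None if there is no true midpoint."""
    n = len(options)
    # An even-length scale has two "middle" options and therefore no single neutral point.
    return options[n // 2] if n % 2 == 1 else None

print(midpoint(list(range(1, 11))))  # 1-10 scale: 10 options, no true midpoint -> None
print(midpoint(list(range(0, 11))))  # 0-10 scale: 11 options, true neutral -> 5
```

The same check explains why adding a zero fixes both problems at once: it gives customers a floor for genuinely bad experiences and makes the option count odd.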
What Else We Found
On top of the myriad data accuracy issues, our Point-of-Purchase Survey Study showed that retailers have little regard for their customers.
For example, Walmart asked 4 introductory questions irrelevant to the customer’s experience, and required the input of 2 receipt codes. Really? That’s a hassle.
But the biggest, most consistent engagement mistake? Many of the customer feedback surveys were simply too long—the average length was 23 questions. A survey should certainly never take longer than the interaction itself; in fact, it should take less time.
Family Dollar asked a whopping 69 questions in its survey. At 10 seconds a question, that’s over ten minutes spent reflecting on items that cost a buck.
Designing a quality customer feedback survey is a process, requiring multiple edits to reach the best version. Throwing in every question is how NOT to design a survey. Think about what you want to know, and carefully craft your questions.
Set Clear Expectations
It’s also important to set expectations at the outset, communicating how long the survey will take, and then meeting that expectation. Nordstrom advertised its survey as taking 2 minutes, but with 25 questions it took closer to 5.
Most retailers didn’t provide any estimate of survey length, and instead simply let their customers click into the abyss.
3 Tips For Better Surveys
To execute a customer feedback survey that’s better than just about every major retailer, get serious about accuracy and engagement:
- Ensure your survey collects accurate and actionable data. Eliminate biases such as leading questions, forced wording, and faulty scales.
- Make every question clear and relevant to the customer.
- Show the customer that you respect and value their time by designing a survey that only asks what’s necessary and that states at the outset how long it will take.
If you follow even a few of the guidelines we’ve provided here, your survey will be leagues ahead of the biggest companies in the world.
We Can Help!
For additional hints on improving the quality of your customer feedback, get our Genius Tips. And if you’re interested in learning more about our first-of-its-kind Point-of-Purchase Survey Study, check out the 2-minute video or ask us for the complete report.