Last week at 8 PM, one of the top four car rental agencies called to survey me about my most recent experience renting from them. Since I was in a good mood—and I had a question for the company—I agreed.
Unfortunately, it was a waste of time for me and for them. We’ve written before about survey biases, sampling errors, tired wording, and employee gaming. While some of those problems appeared in their questions too, in this post I’ll simply look at how their phone survey failed to listen.
Here’s what happened: The rental agency started by asking me if I would answer two or three questions.
Question #1: On a scale of 1-5, how satisfied was I with my most recent experience? I gave a 5; there were no problems, so I was satisfied.
Question #2: On a scale of 1-5, how likely was I to rent from them again? This time, I selected the middle option, which was that I “might or might not rent from them again.”
The associate immediately said, “I understand,” and that was that. She thanked me and ended the call.
Here’s how this survey failed at customer listening:
- The company interrupted my evening to ask quantitative questions that would have been much easier to answer in a web survey—at my leisure. I’m sure they were hoping the phone would boost response rates, but there are better ways to get a robust sample.
- I had a question about billing that I wanted to ask the company, but there was no time—and no opportunity—for me to get anything out of this interaction.
The phone is a two-way street and, because of this, it’s an opportunity to create a genuine human connection with your customers. But in this case, the phone interaction was set up to be robotic and distancing. The company didn’t even ask me why I felt the way I did—which left me asking, why bother?
Could your survey be tuning customers out? Let’s talk! Send us your customer survey and we’ll examine two or three of the questions for biases and other flaws. Even in a quick evaluation, we often find several.