
Getting Specific with Customer Satisfaction Survey Response Scales

All of us are familiar with the basic construction of survey response scales, even if we don’t realize it. They are prevalent in the customer satisfaction surveys we see so often these days, thanks (in part) to so many companies recognizing the power customers hold today and wanting to serve them better.

Many customer satisfaction surveys are designed the same way – a question followed by a list of response options. And while surveys may appear relatively simple to design, it is quite common for them to contain flaws. One of the most common is a poorly designed response scale.

For example, I’m sure most of you have encountered survey questions you’re not sure how to answer – perhaps the list of possible responses doesn’t include your preferred option or any answer that makes sense. Maybe something like the following:

During the last fiscal year, what percentage of your total advertising spend was specifically on digital display advertising?

  • 0%
  • 1% – 5%
  • 6% – 20%
  • 21% – 55%
  • 56% – 75%

How many people could actually answer such a specific question? And if we wanted to ensure we got reliable, insightful data from it, what would we have to change?

UNDERSTANDING SURVEY RESPONSE SCALES

Before we think about fixing the question above, we should briefly review the different types of survey response scales.

There are essentially two types of scaled survey questions – unipolar and bipolar. Unipolar scales ask a respondent to measure an amount or degree of something. The scale typically starts with something like “0”, “None” or “Not at all” as the first option and a “maximum” for the last option (“100%”, “All the time”, “Everything”, etc.). The Net Promoter question is a well-known example of a unipolar scaled survey question.

Bipolar scales, by contrast, ask a respondent to select an answer on either side of a “neutral” or “ambivalent” response. These scales are commonly used when measuring concepts like “satisfaction” or “agreement” – you can be “somewhat satisfied”, “extremely dissatisfied”, “neither satisfied nor dissatisfied”, etc. In addition, bipolar scales are usually either 5-point or 7-point (meaning there are 5 or 7 total responses to choose from). The odd number of choices keeps the scale balanced, with an equal number of equivalent options on either side of the midpoint.
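
To make the distinction concrete, here is a minimal sketch (in Python, with illustrative labels and numeric codes that are common conventions rather than any fixed standard) of how the two scale types are often coded for analysis – a unipolar scale running from zero upward, and a bipolar scale balanced around a neutral midpoint:

    # Illustrative only: one common convention for coding scale responses.
    # The labels and numeric values below are assumptions, not a standard.

    # Unipolar 5-point scale: measures an amount, from nothing to everything.
    UNIPOLAR_AMOUNT = {
        "None": 0,
        "A little": 1,
        "A moderate amount": 2,
        "A lot": 3,
        "All / almost all": 4,
    }

    # Bipolar 5-point scale: balanced around a neutral midpoint (coded 0),
    # with an equal number of equivalent options on either side.
    BIPOLAR_SATISFACTION = {
        "Very dissatisfied": -2,
        "Somewhat dissatisfied": -1,
        "Neither satisfied nor dissatisfied": 0,
        "Somewhat satisfied": 1,
        "Very satisfied": 2,
    }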

Another consideration is making sure the range of the response options makes sense to a respondent. Take the question above – why would we show an option of 1% – 5%, while the next option is 6% – 20%? Are these percent ranges the best response options to this question?

A related point is to choose a scale that is appropriate for the concept you’d like to measure. For example, it’s entirely possible to use an 11-point scale (such as the one used for the Net Promoter question) when posing questions about satisfaction. However, it may not be the best scale for measuring satisfaction.

In fact, some research shows that it’s not easy for an individual to distinguish between a 7 and an 8 when, say, identifying how satisfied they were with a support experience. Concepts like “satisfaction” and “agreement” are usually best measured using a bipolar 5-point or 7-point scale, with responses ranging from “Extremely / Very dissatisfied” to “Extremely / Very satisfied”.

Additionally, it’s important to ensure the choice options in your scale are consistent with the sentiment you want to measure:

  • Satisfaction with a specific attribute (very satisfied or very dissatisfied)?
  • Likelihood of the respondent doing something (extremely likely or extremely unlikely)?
  • A comparison of one thing to another (much better or much worse)?
  • Agreement with a statement (strongly agree or strongly disagree)?

Consider what would be the most actionable insight for you to glean and the best way to formulate the question to obtain that information. Then, use a scale that aligns with the question text.

Of course, there’s much more that goes into designing effective scaled survey questions, such as the question wording, the layout of the response options, and so on. This is doubly true for designing effective B2B customer satisfaction survey questions. However, your choice of response scale can heavily influence how you phrase your question and design your response options.

‘SPECIFYING’ RESPONSE SCALES

Let’s assume you’ve chosen a unipolar scale for your question, as with the question about advertising spend above. A unipolar scale makes sense here – after all, it’s hard to think of a “neutral” response to a question about percentages.

But we should ask ourselves: will most respondents remember the exact percentage their business spent on digital display advertising? Will they even remember a general percentage accurately? If you’re like most people, you’ll probably form a rough estimate – “About a third of what we spent was on digital display advertising” – and translate that into a percentage.

It’s reasonable to expect that most people will think this way when answering the question. In that case, why have a respondent take the extra step of translating their somewhat vague notion into a percentage? It might be easier for your respondents, and ultimately more accurate for you, to simply use general phrases to ask them how much of their advertising spend was on digital display advertising.

This process is known as specifying your survey response scale. The basic idea is that you don’t want to be too granular or too general with the response options you present, and you want to offer the options that feel most natural to respondents so they can answer your customer satisfaction survey questions accurately. The more natural and less confusing your response options, the better your respondents will be able to answer.

Re-writing the question along these lines, we might come up with:

During the last fiscal year, about what percentage of your total advertising spend was on digital display advertising?

  • None
  • Some / a small amount
  • About half
  • Most
  • All / close to all

By slightly rewriting the question and changing the response options from percentages to text, we make it much easier for respondents to provide accurate feedback. They are also much less likely to skip the question entirely or drop off from the survey.
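
If you later need a rough quantitative read on these answers, one simple approach (a sketch only – the midpoint values below are assumptions, not part of the survey itself) is to map each text option back to an approximate percentage when reporting results:

    # Illustrative sketch: mapping the text options back to rough percentage
    # midpoints for reporting. The specific values chosen are assumptions.
    SPEND_MIDPOINTS = {
        "None": 0,
        "Some / a small amount": 20,
        "About half": 50,
        "Most": 75,
        "All / close to all": 95,
    }

    def approximate_average_spend(responses):
        """Average approximate percentage across all recognized responses."""
        mapped = [SPEND_MIDPOINTS[r] for r in responses if r in SPEND_MIDPOINTS]
        return sum(mapped) / len(mapped) if mapped else None

    # Example: three respondents
    print(approximate_average_spend(["About half", "Most", "None"]))  # ~41.7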

THE DEGREE OF SATISFACTION 

Let’s look at another example. Suppose you had the following question:

Are you satisfied or dissatisfied with your marketing automation software?

  • Satisfied
  • Dissatisfied
  • Neither satisfied nor dissatisfied

Although most respondents wouldn’t have any problem answering this question, you might be leaving something on the table by not asking for the degree of satisfaction or dissatisfaction with their marketing automation software. What if a respondent is extremely frustrated with their software, or has had such a good experience with it that they’re now one of its biggest fans?

Instead, we could rewrite the question to something like:

How satisfied or dissatisfied are you with your marketing automation software?

  • Very satisfied
  • Somewhat satisfied
  • Neither satisfied nor dissatisfied
  • Somewhat dissatisfied
  • Very dissatisfied

Getting a bit more fine-grained with our survey response options can help us determine just how satisfied or dissatisfied a respondent is. Here, we’re becoming more specific with our response options, whereas in the previous example we became less specific.
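
The finer-grained responses also support richer summaries. Here is a minimal sketch of one common way to summarize a 5-point satisfaction scale – a “top-two-box” score – noting that treating the two most positive options as the “top boxes” is an analysis convention, not something prescribed by the question itself:

    # Illustrative sketch: a "top-two-box" summary of the 5-point scale above.
    # Counting "Very satisfied" and "Somewhat satisfied" as the top two boxes
    # is a common convention, not a requirement of the question itself.
    TOP_TWO_BOX = {"Very satisfied", "Somewhat satisfied"}

    def top_two_box_score(responses):
        """Percentage of respondents choosing one of the two most positive options."""
        if not responses:
            return None
        return 100 * sum(r in TOP_TWO_BOX for r in responses) / len(responses)

    # Example: five respondents
    sample = [
        "Very satisfied",
        "Somewhat satisfied",
        "Neither satisfied nor dissatisfied",
        "Somewhat dissatisfied",
        "Very satisfied",
    ]
    print(top_two_box_score(sample))  # 60.0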

THE IMPORTANCE OF SPECIFIC RESPONSE OPTIONS

Designing scaled survey questions can be deceptively complicated. So many factors go into designing a survey question correctly that it’s difficult to know whether you’ve accounted for all the ways respondents might interpret and respond to it.

Perhaps one of the least discussed aspects of designing effective B2B customer satisfaction survey questions is correctly specifying the response scale. But when a response scale is correctly specified and easy for a respondent to understand and use, the result is almost always more accurate and reliable satisfaction and relationship data.

For more survey best practices, download our practical guide on Net Promoter® survey design best practices.